Since I was a little bored today, I started thinking about the Bond events with their weird 1470 +/- 500 year pseudo cycles. Pseudo cycles drive most folks nuts. To me they are the fun part of the puzzle. One of the theorized drivers of Bond events is the ~1800 year lunar orbital cycle. At 1800 years it could be involved, but the 4,300 year pseudo cycle related to the precessional cycle would seem to have a greater impact on energy. Well, the precessional cycle would also have an impact on tides.
So just for grins I took the Bintanja and van de Wal Arctic reconstructions and compared their relative sea level (RSL) and deep ocean temperature (Tdo). I had to normalize both over 20,000 years in this case and invert the RSL, since as best I can tell it is inverted in the download.
Voilà! A typically nasty-looking 4.3 ka pseudo-cyclic phenomenon. The variation is small, but since the reconstruction is for "global" sea level, it would not indicate how much polar amplification may be associated with that small variation.
So if someone tells you that the Sun or the Moon done it, they may be right, but the mechanism is pretty complex, involving changes in the rate of deep ocean mixing and tides freeing or fixing polar sea ice. Really big tidal events could do things like breaking the West Antarctic Ice Sheet free in chunks here and there, tweaking geomagnetic fields a touch, and cycling volcanic activity, all sorts of fun stuff that has nothing to do with changes in atmospheric or solar forcing.
That is pretty neat really, since there are a lot of weird things that correlate but shouldn't really have the arse to cause climate change.
If I can get this laptop to handle larger spreadsheets, I might just get back into the paleo stuff for a while.
Monday, January 28, 2013
Saturday, January 26, 2013
For Blah Blah Duh
Blah Blah Duh, aka BBD, is a blog denizen that frequents Dr. Curry's blog Climate Etc. Blah Blah Duh is the individual that promoted me to Fraud and Buffoon a while back. Since the Christmas truce, BBD has been a bit less volatile, but still has this annoying habit of not thinking.
BBD incorrectly assumes that in a system that has stored on the order of 10^26 Joules of energy, a change in solar energy should be immediately seen in the surface temperature record. The system is way too entertaining to do something so obvious. There are, however, parts of the system with less thermal mass that do respond quickly enough to "see" fluctuations on shorter time scales. The stratosphere is one such "telltale" of changes in energy flux.
This is a chart of the lower stratosphere data from the University of Alabama in Huntsville and a Total Solar Irradiance (TSI) reconstruction by Leif Svalgaard at Stanford University. There is a pretty clear correlation with a lag of about 3 years. Since stratospheric temperature is closely linked to ozone, there is most likely some solar impact on ozone with a lag of about 3 years. I have no idea what is required to explain the exact effect or the magnitude of the impact, but there is pretty clearly some correlation.
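For anyone who wants to check that sort of lag themselves, here is a minimal sketch of the approach, assuming the UAH lower stratosphere anomalies and a TSI series are already loaded as equal-length annual numpy arrays (the series below are made up purely for illustration):

```python
import numpy as np

def best_lag(tsi, tls, max_lag=10):
    """Return the lag (in samples) at which TSI leads the lower
    stratosphere series with the strongest correlation."""
    best = (0, 0.0)
    for lag in range(max_lag + 1):
        if lag == 0:
            r = np.corrcoef(tsi, tls)[0, 1]
        else:
            # shift TSI back by `lag` samples so it leads the response
            r = np.corrcoef(tsi[:-lag], tls[lag:])[0, 1]
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best

# Stand-in annual series; replace with the real UAH TLS and TSI data
years = np.arange(1979, 2013)
tsi = np.sin(2 * np.pi * (years - 1979) / 11.0)             # toy solar cycle
tls = np.roll(tsi, 3) + 0.05 * np.random.randn(len(years))  # lags TSI by 3 years
print(best_lag(tsi, tls))  # should report a lag near 3
```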
I would recommend that BBD blow off his preconceived notions and join the fun of solving the puzzle, which has taken an unexpected turn recently.
Thursday, January 24, 2013
Battery Charger
I have a buddy with an electric golf cart. During a minor storm surge of a couple of meters, his golf cart battery charger died a salty death. I wanted to borrow his bright red and chrome golf cart for the day to hit on some near geriatric babes, but the batteries were nearly dead. What's a guy to do?
The golf cart has a 48-volt bank of eight 6-volt deep cycle batteries. I have a 12-volt, 20-amp charger and some time to kill. Four hours later, after charging four 12-volt banks of two 6-volt batteries each, I have a fully charged, bright red and chrome 48-volt golf cart and a less-than-geriatric babe for company. The "average" voltage of the charger never exceeded 14.7 volts.
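A toy comparison makes the point of the next line: the energy delivered to the bank is the same whether you charge it as one 48-volt string or as four 12-volt pairs in rotation. The 48-volt charger's current rating below is my assumption for illustration, not something from the story:

```python
# Same energy either way: one 48 V string or four 12 V pairs in rotation.
volts_12, amps_12 = 12, 20    # the 12 V, 20 A charger actually used
volts_48, amps_48 = 48, 5     # hypothetical dedicated 48 V golf cart charger
hours = 4

wh_12 = volts_12 * amps_12 * hours   # four hours rotating through the pairs
wh_48 = volts_48 * amps_48 * hours   # four hours on the whole string
print(wh_12, wh_48)  # 960 Wh either way
```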
It is not the voltage but how you apply it.
I didn't have to discover any new laws of physics to get a higher useful voltage than I had available, but I couldn't make the 48-volt bright red and chrome golf cart a souped-up 96-volt bright red and chrome golf cart without lots of cash. The capacity of the existing battery bank limits the souped-up-ness of the project.
It took me four hours to get the full charge. One pair of batteries took nearly two hours of that time, and I had to add water to fill a couple of cells in those batteries. The other three banks took less time. After my hot date, my buddy plugs in his thought-to-be-dead 48-volt battery charger and discovers that it was never dead. Imagine that.
What does this have to do with physics or a Tale of Two Greenhouses?
Wednesday, January 16, 2013
Static Globe Model
The static model is a work in progress. Since there is a difference between the Northern and Southern Hemisphere thermal characteristics, and the internal heat flow between hemispheres can take centuries to come to some level of equilibrium, the static model would need to be sliced and diced into sections to be really useful. My slicing and dicing, though, can be difficult to understand. The sketch above may make my ramblings easier to follow.
Starting from the outside, the green 67 Wm-2 shell represents an isothermal envelope that encloses the entire surface of the globe at all times. That shell, or outer spherical surface, is not an arbitrary selection. 67 Wm-2 is: the equivalent energy of a surface at 185 K (-88 C), which is roughly the lowest temperature ever measured at the surface of the Earth; the black body temperature of Venus; the approximate minimum temperature of the surface of Mars; the minimum outgoing longwave radiant energy of deep convection clouds on Earth; and an all-around nice number. The surface area of that shell can vary, but due to gravity and the lack of significant convection, it is relatively stable.
The inner blue "shell" is what I call the moist air envelope. The 316 Wm-2 is the equivalent energy of a surface at 0 C degrees, the freezing point temperature of fresh water. This shell is less stable and only covers approximately 70% of the true surface of the Earth. Since salt water has a lower freezing point, the orange arrows represent the approximate range of likely variability with respect to absolute temperature. This envelope or shell can and does expand, contract and shift over portions of the true surface.
The yellow shell or envelope is the largest question. The temperature and energy of that shell are unstable, it covers an unknown fraction of the true surface, and it is likely completely useless, but that is the "reference" shell selected by others that I am forced to use.
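The shell energies above are just Stefan-Boltzmann equivalents of the temperatures mentioned. A quick sketch of the conversion using the values in this post (the exact numbers drift a watt or two depending on the temperature you start from):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(temp_k):
    """Ideal black body flux for a surface at temp_k kelvin."""
    return SIGMA * temp_k ** 4

for label, t_k in [("green shell, ~185 K", 185.0),
                   ("moist air envelope, 0 C", 273.15),
                   ("'average' ocean, 4 C", 277.15)]:
    print(f"{label}: {flux(t_k):6.1f} W/m^2")
# roughly 66, 316 and 334 W/m^2, matching the 67, 316 and 334 Wm-2 shells
```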
This post is a supplement to a supplement, Tale of Two Greenhouses Part Deux Supplemental. Instead of having to imagine a blue torus surrounding the belly of Earth, now you have a not very spherical core that is the approximate true "black body" portion of Earth.
Tale of Two Greenhouses part Three
The surface area of the band between 55S and 65S is roughly 22 million square kilometers, is nearly all ocean, and has a sea water flow of roughly 120 Sverdrups. One Sverdrup is 1 million cubic meters of water per second. The Gulf Stream flow is roughly 30 Sverdrups. In the drawing above, borrowed from the Toggweiler and Samuels paper in the link, Effect of the Drake Passage on the Global Thermohaline Circulation, the average surface temperature of the 55S to 65S band is 1 degree. If that band were blocked, there would be less surface mixing of the deep ocean and cold polar water. Since the area of that band is about 22 million square kilometers, or 4 percent of the total surface area of the planet, the average temperature of that band is roughly 4 C cooler than it would have been without that flow, and with the total flow being roughly 120 million cubic meters per second, the impact of that Antarctic Circumpolar Current could be 6 x 10^22 Joules per year. Since 1960, the heat content of the ocean is estimated to have increased by 24 x 10^22 Joules, or about 0.5 x 10^22 Joules per year. To some it would appear obvious that the Drake Passage opening could cause some climate change.
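The 6 x 10^22 Joules per year figure is easy to reproduce. Here is a rough check, assuming a seawater density of ~1025 kg/m^3 and a specific heat of ~3990 J/kg/K (typical textbook values, not numbers from the linked paper):

```python
SV = 1.0e6                  # one Sverdrup in m^3/s
SECONDS_PER_YEAR = 3.156e7

flow = 120 * SV             # ACC flow through the 55S-65S band
rho = 1025.0                # assumed seawater density, kg/m^3
cp = 3990.0                 # assumed seawater specific heat, J/(kg K)
delta_t = 4.0               # the band is roughly 4 C cooler than it would be

watts = flow * rho * cp * delta_t
joules_per_year = watts * SECONDS_PER_YEAR
print(f"{watts:.1e} W  ->  {joules_per_year:.1e} J per year")
# roughly 2e15 W, or ~6e22 J per year, versus the ~0.5e22 J per year of
# estimated ocean heat uptake since 1960
```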
This is where the battle of the models starts. In part deux (with the supplement linked) I estimated that the current ocean "average" temperature produces a near ideal black body shell or surface with a near constant radiance of 334 Wm-2. That surface only covers ~70% of the total surface of the planet. Since it only covers 70% of the planet, that energy would appear to be 0.7*334 = 233.8 Wm-2 spread over the entire surface of the planet. The remaining 30% of the Earth's surface doesn't have the surface material properties required to produce a near constant energy flux, unless it is covered with sufficient thermal mass, read that as ice, snow, or water. The areas outside of what I describe as a moisture envelope do at times hold huge volumes of ice, snow or water, which has a latent heat or enthalpy of fusion of 334 Joules per gram, and one can approximate Joules per gram as Watts per meter squared, so the melting and freezing of the ice and snow would provide a "buffer" to the 334 Wm-2 approximately ideal black body surface that only covers 70% of the surface of the Earth. Changing the storage area and volume of ice, snow and water on the 30% less-than-ideal portion of the true "surface" would change the "buffering" capacity.
When ice and snow melt, they remain at a near constant temperature of 0 C, which has an effective radiant energy of 316 Wm-2, while ~334 Wm-2 of energy is absorbed. When ice and snow form, they release ~334 Wm-2 while remaining at a near constant 0 C and 316 Wm-2 effective energy, a difference of 18 Wm-2.
In the oceans, due to salt depression of the freezing point, the low end of the range drops to roughly -2 C (307 Wm-2) with the same latent heat of fusion, 334 Joules per gram. That increases the range to ~27 Wm-2.
Many will find some reason to question the validity of assuming a melting area of snow would produce an effective 334 Wm-2, but mixing less-than-ideal radiant surfaces with complex internal heat transfer and storage capabilities is also questionable. Let the better model prevail.
The Drake Passage opening did two things: it limited the surface area at the southern pole available to store the buffering ice and snow, and it improved the thermal mixing of the oceans. The Antarctic continent became thermally insulated, or isolated, depending on personal preference. Once isolated, the band of surface from ~55S to 65S remains in the range of roughly 4 C to -2 C at the edge of the Antarctic sea ice.
With a total ocean volume of approximately 1.3 billion cubic kilometers and one billion cubic meters per cubic kilometer, the Drake Passage could provide a complete overturning of the world's oceans in approximately 412 years at 100 Sverdrups, if I got the math right. Equilibrium after a response to any change in forcing should require at least 412 years to be realized if the mixing is ideal. Ocean conditions we measure now may well be due to changes 412 years ago if the mixing is ideal, or 4,120 years ago if the mixing is only 10% efficient.
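The 412 years is just volume divided by flow. A quick sketch of that arithmetic and the 10%-efficient case:

```python
SV = 1.0e6                      # one Sverdrup in m^3/s
SECONDS_PER_YEAR = 3.156e7

ocean_volume = 1.3e9 * 1.0e9    # 1.3 billion km^3 expressed in m^3
flow = 100 * SV                 # 100 Sv of effective overturning

turnover_years = ocean_volume / flow / SECONDS_PER_YEAR
print(f"ideal mixing: ~{turnover_years:.0f} years, "
      f"10% efficient: ~{turnover_years / 0.1:.0f} years")
# ~412 years ideal, ~4,120 years at 10% efficiency
```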
While the static model is less than perfect, it indicates an equilibrium "sensitivity" of 4.8 Wm-2 per degree, with an estimated time constant of at least 412 years and a range of "average" ocean temperatures from 1.16 to 6.8 C if 4 C is the very long term "average" temperature of the oceans. That would limit climate sensitivity to ALL forcing to a range of 5.6 C degrees, or a maximum increase of roughly 2.75 C from today's 4 C oceans due to ALL forcing from today's conditions.
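The 4.8 Wm-2 per degree is just the slope of the Stefan-Boltzmann curve at 4 C; a quick check:

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
t = 4.0 + 273.15       # the 4 C "average" ocean temperature in kelvin

slope = 4 * SIGMA * t ** 3      # d(sigma * T^4)/dT in W m^-2 per K
print(f"{slope:.2f} W/m^2 per degree at 4 C")  # ~4.8
```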
Interesting?
Tale of Two Greenhouses part deux supplemental
Near ideal thermodynamic boundary layers are required for a greenhouse or greenhouse effect. Near ideal radiant boundary layers are required for a black body. In part deux, ocean thermoclines were introduced which meet the requirements of an "ideal" thermodynamic boundary layer.
With solar energy able to penetrate the thermocline, causing subsurface warming, and above-surface cooling causing denser water to sink, a true isothermal layer is formed when the heat transfer between the rising deeper water and the sinking shallower water is near equal. Since advective, or horizontal, energy flow is near zero as well, with the stable latent-fusion temperature of water limiting advection, the layer is not only isothermal but effectively isostatic.
If the freezing point of water were exactly equal to the isothermal energy of the thermocline, then the thermocline would effectively be an ideal black body. The lighter blue arrows indicate the latent heat of fusion force vectors opposing the isostatic thermocline. Since there is a difference, the orange vectors indicate the degree of imperfection of the thermocline with respect to a perfect black body emission of unity. Assuming the values are correct, the emissivity of a 4 C liquid water thermocline would be (334-18)/334 = 0.946, unitless. Since the layer is not perfectly isostatic or an ideal black body, the energy imbalance would lead to expansion of the layer as shown. If the layer is truly stable, isostatic, then the less-than-perfect emissivity has to be contained in all directions.
Since the static model is reduced to pure energy vectors, the energy required to maintain an isostatic condition can be of any form or combination. A major issue with the isostatic model is that the controlling energy of the envelope is not fixed. The salt in the oceans depresses the freezing point. The latent heat or enthalpy of fusion is fixed at 334 Joules per gram. With water releasing energy during freezing and having to gain energy to thaw, over some period of time the requirement of static equilibrium would need to be met or there would not be liquid water. Freezing occurs at the poles, and the isostatic envelope has to be maintained in all directions. The rate of ice formation and melt would be an indication of the magnitude of the balancing energy vectors in other directions. Because the value of the isostatic envelope is at the freezing point of fresh water, energy can be dissipated or accumulated over time, allowing non-equilibrium but stable conditions.
Now the fun begins.
Based on the lowest temperature ever recorded on Earth, the first true isothermal shell that can enclose the entire planet is at roughly 184 K with an equivalent energy vector of 67 Wm-2. The near perfect ocean black body shell does not uniformly cover the "surface", and due to equatorial solar energy input, you have to consider the "surface" a torus enclosed by an ideal radiant shell. Since the ocean shell effectively covers only 70% of the physical surface, the "average" energy radiated over the entire surface would be 70% of 334 Wm-2, or 233.8 Wm-2. Since the land portion of the surface has some heat capacity but poor emissive properties with respect to an ideal black body, the impact of the land surface area on long term atmospheric forcing is not easily determined.
One clue is the 67 Wm-2 radiant shell. If all the energy of that shell is provided by the "surface", the total energy of the shell would need to be twice 67, or 134 Wm-2, to be stable, since it has to emit both outwardly and inwardly from an isothermal spherical surface. With 233.8 Wm-2 as the "effective" surface radiant energy and 134 Wm-2 as the total energy of the enclosing shell, the "effective" magnitude of the atmospheric portion of the "greenhouse effect" would be to raise the surface energy to 367.8 Wm-2, or roughly 30 Wm-2 less than the best estimate of the "average" "surface" energy.
The atmospheric "window" energy, the estimated amount of energy that does not effectively interact with the atmosphere on its path to space, is approximately 40 Wm-2, or 10 Wm-2 more than what would be indicated by the static model. That could be due to a number of issues, but since the atmosphere does absorb solar energy directly, an intermediate shell representing the turbulent and chaotic atmosphere may reduce that uncertainty.
According to Graeme Stephens et al., the approximate amount of solar energy absorbed by the atmosphere is 76 Wm-2. Since that is supposedly the total solar energy absorbed, the imaginary atmospheric shell would emit approximately 38 Wm-2 inwardly and outwardly in addition to the energy provided by all types of energy transferred from the "surface" to that shell. Given the uncertainty of the ocean "shell" and the 67 Wm-2 radiant "shell", a plus or minus 10 Wm-2 drift may be the norm, or may be due to a combination of the model limitations and data accuracy.
The imaginary atmospheric shell is not likely to enclose the entire true surface of the planet. The data provided assume that it is a "true" average for the entire true surface, but it is more likely to apply to the area covered by the 316 Wm-2 isostatic enclosure. As the area and volume of the 316 Wm-2 enclosure expand and contract, the impact would vary, requiring a more accurate approximation of the "average" area of the enclosure. Luckily, the liquid water portion of the two greenhouse effects is more stable.
Using the Stephens et al. estimates for latent and sensible cooling, 112 Wm-2 is transferred from the surface to the atmosphere. The 18 Wm-2 initial imbalance is likely included in that total, and if you assume that the impact of radiant transfer is not significant inside this moist air envelope at near surface density and pressure, most of that 112 Wm-2 would be produced by ~70% of the true surface, the area inside the moist air or 316 Wm-2 envelope, in keeping with the original use of the model.
Since the latent portion would be associated with evaporation and condensation of water, roughly 70% of the Earth's surface would produce the 88 Wm-2 "global" latent cooling, so the 88 indicated in the sketch above would more accurately be ~126 Wm-2 for this model configuration. The 24 Wm-2 sensible portion would not have to be confined to the 316 Wm-2 envelope, but the majority likely does originate in this envelope.
The fun I mentioned is that if properly accounted for, the static model will produce fairly accurate values for "equilibrium" conditions of a general "average" state. The problem is that all reference layers will have to produce the same results. With the model roughly adjusted for the estimated 334 Wm-2 "average" energy of the oceans, the uncertainty is at least 10 Wm-2, which is not bad considering that the Stephens et al. energy budget has a margin of error of +/- 17 Wm-2 at the surface. A great deal of the uncertainty is at the poles. With fresh ice having a freezing point of 0 C, or 316 Wm-2 effective energy, and salt water having a depressed freezing point of up to roughly -2 C, there is nearly a 10 Wm-2 range of realistic sink energy. That would mean that the effective energy of the isothermal layer would have the same range, which is roughly a +/- 1 C margin of error for the most stable reference shell. That +/- 1 C variation in the deep ocean temperature is obvious in the ocean paleo data, which tend to support the static model, but the range of normal variation is greater than the estimated impact of an extra 4 Wm-2 of atmospheric forcing.
Assuming I haven't screwed up too badly, this is where we are. The light blue envelopes represent roughly 70% of the true surface, the green 67 Wm-2 is the closest to a true radiant surface that can exist, and in between is at least one, but likely more, useful but imaginary surfaces. Since the model is static, the faces of the two moist air envelopes would be in balance. They could expand or contract, but are fairly stable. The distance from the green radiant layer to the yellow intermediate layer can fluctuate in some range, from zero in the Antarctic in an exceptionally cold winter to roughly 30 kilometers above the tropical "surface". That change in distance is small relative to the radius of the surface, but at the poles, that distance is more significant with respect to the moist air envelope. The relative position of the yellow is dependent on the energy flow out of the blue envelope that covers only roughly 70% of the surface.
Tidying up some values, the 18 Wm-2 imbalance of the moist air layer would have a "global" impact of 0.7*18 = 12.6 Wm-2, which is the minimum uncertainty. The 88 Wm-2 "global" latent energy flux would be equal to ~126 Wm-2 from the moist layer, and the 24 Wm-2 "global" sensible energy flux would be up to ~34 Wm-2 if it is all produced in the moist air layer or envelope.
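The scaling between "global" averages and the ~70% moist air envelope is just multiplication or division by 0.7; a short sketch of the tidied-up numbers:

```python
MOIST_FRACTION = 0.7   # fraction of the true surface inside the moist air envelope

imbalance_moist = 18.0     # W/m^2 imbalance of the moist air layer
latent_global = 88.0       # W/m^2 "global" latent flux (Stephens et al.)
sensible_global = 24.0     # W/m^2 "global" sensible flux

print(imbalance_moist * MOIST_FRACTION)            # 12.6 W/m^2 as a "global" value
print(round(latent_global / MOIST_FRACTION, 1))    # ~125.7 W/m^2 inside the envelope
print(round(sensible_global / MOIST_FRACTION, 1))  # ~34.3 W/m^2 inside the envelope
```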
I know that this is confusing, but since the only two reliable "black body" surfaces, the 67 Wm-2 pure radiant layer and the 334 Wm-2 approximate liquid ocean subsurface or "average" energy, vary in surface area by approximately 160 million square kilometers, meticulous detail is required or the error can be +/- 30%.
I can try to make a simpler model and prettier drawings, but a minimum uncertainty of 12.6 Wm-2, versus the +/- 17 Wm-2 estimated by Stephens et al., and a potential 30% error due to irregularly oriented and sized "black body" surfaces, are not going to go away.
Monday, January 14, 2013
Tale of Two Greenhouses part deux
The oceans really should be the key for determining the impact of a change in any condition that would impact long term climate. Shorter term climate is little more than weather; it is just what is considered "short term".
A good analogy for the oceans would be a battery. If the oceans were perfectly well mixed, based on the current best estimates, the temperature would be 4 C degrees. If the oceans completely covered the planet, then the average surface temperature of Earth would be 4 C degrees, for some time period. How long doesn't matter for this example.
With a hypothetical atmosphere-free planet, the effective radiant layer of the Earth would emit 334 Wm-2. Since that surface on average receives ~340 Wm-2, the emissivity of the surface, if both numbers were correct, would be 334/340 = 0.98235, a unitless value for this example. The Earth would effectively be a perfect black body. The Earth, though, is not 100% covered with water and that water is not perfectly mixed, but let's continue with this example.
Since only 70% of the surface is covered with water that we are assuming is well mixed, that portion would emit at 334 Wm-2 but the total emitted, provided the remaining 30% land area does not emit, would be 70% of 334 or ~234 Wm-2.
I have previously posted that the Faint Young Sun Paradox is not a paradox if you consider that the oceans can absorb more energy than they can emit until the total open water area increases enough to allow equilibrium. This is due to the truism that a radiant surface cannot emit energy any faster than energy can be transferred to the radiant surface.
Update: Since I tried to be too brief and mis-typed.
More on the Static model
Sunlight penetrates the ocean surface and is absorbed in small amounts at depths of over 100 meters. Surface water that cools becomes more dense and sinks. The combination of solar-warmed deeper water rising with cooling surface water sinking produces one or more uniform isothermal layers known as thermoclines. These isothermal layers limit heat gain from the surface and heat loss from below, creating an insulating layer where only the deeper penetrating solar shortwave energy can warm the depths. Until the rate of heat loss at the surface and the rate of heat gain from solar at depth are equal, the oceans continue to gain energy. Since time is not a factor for Earth, all it takes is a slight imbalance over enough time to produce the Earth as we know it today.
Had "Climate Science" started with this basic line of reasoning, life would have been simpler.
Since the Earth does have an atmosphere and does not have a nearly perfect black body surface that covers the entire "surface", it is easy to wander behind the little animals wondering how the Earth can be as warm as it is with as little energy as it appears to receive. This is aided by not thinking of the battery analogy. While the entire "surface" may not be receiving as much energy, once a battery is charged, it does not require as much energy to maintain that charge. Since the oceans can receive more than 1000 Wm-2 near the equator on a cloudless noon, the temperature of the equatorial oceans is limited by the rate that the ocean battery can accept a charge. The deeper penetrating shortwave energy from the sun trickle charges the depths, but is insulated from surface heat diffusion by the direction of convection.
With an "average" sea surface temperature of approximately 21.1 C which would have a radiant energy of ~425 Wm-2 between the more stable 4 C "average" temperature of the deeper oceans, the "equilibrium" flux from the surface to the "average" would be 425-334=91 Wm-2 +/- a touch. There are a lot of quote marks in that sentence. The use of "equilibrium" is in the eye of the beholder. I use a "static" model to determine stable conditions, so this would be my use of equilibrium. Since the energy is flowing, you could call it a steady state or you could limit the "model" and call it a "conditional equilibrium" or "conditional steady state". Whatever floats yer boat. The "average" is in quotes is because it is an approximation.
Some of you will now complain that there is not enough math to justify my choices so far. Well, 95% of the solution is in setting up the problem. If there are those hell-bent on some kick-butt math, why don't they take another look at that 91 Wm-2 "equilibrium" flux. That just happens to be approximately the latent heat flux from the "surface". The loss of that latent heat would create a new "surface" which, when "averaged" across the total true surface, produces the ~234 Wm-2 "surface" viewed from space in the infrared. Earth is not a perfect black body, but 70% of it is close.
From this point things begin to get complicated. Now the atmosphere, with its water, ice, water vapor, ozone, CO2 and O2, absorbs solar energy. There is a temperature inversion starting at the tropopause, much like the temperature inversion below the Earth's surface. With the static model, any change above the ocean surface has to be matched, eventually, below the ocean surface. With either change, the energy flow from the equator to the poles would also have to change to regain the "static equilibrium". It may be a simple model, but it does have rules. The oceans are more dependent on the "peak" energy available, and the atmosphere, which does not hold a charge well, on the "average" energy.
There should be a part three before long. This ends here since it is part of an explanation for a denizen on another blog.
A Tale of Two Greenhouses - PG version
I started this blog just to play around and look at simpler ways to explain stuff than is typically found on the internet. With a little common sense and basic math, most of the complex physics in nature can be reduced to a level that is useful, provided you consider the uncertainty involved with "rules of thumb".
Rules of thumb provide useful "ball parks" that can help you figure out what you need to figure out. If the "ball park" is too big to be useful, then you look for ways to refine the estimate using the "ball park" as an "envelope" of uncertainty. You sneak up on the solution.
When I wrote the original Tale of Two Greenhouses, I was just introducing a simple static model intended to show that there are two volumes that respond to changes in energy flow, the oceans and the atmosphere, and that the one with the most heat storage capacity, the oceans, is the more important one to consider. To me, that is so obvious it is difficult to explain.
Since the heat capacity of the oceans is about 1000 times greater than that of the atmosphere, I find the whole greenhouse effect debate laughable. I used the approximate average temperature of the larger "thermal mass", which is about 4 C, to calculate the approximate impact that adding a thin extra layer of insulation would produce. It is really not rocket science. The oceans are currently at ~4 C. We are adding 3.7 Wm-2 of extra insulation. The effective radiant energy of the oceans at ~4 C is ~334 Wm-2, which would be roughly equal to the "effective" current insulation value. Adding 4 Wm-2 to 334 Wm-2 results in 338 Wm-2, which would have an equivalent "average" temperature of approximately 4.825 C, or a warming of ~0.825 degrees. That is the "ball park". Any larger estimate requires big-time assumptions, which in a complex system will typically not play out as assumed.
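The ball park works the same way in reverse: convert 4 C to a flux, add the extra ~4 Wm-2 of "insulation", and convert back to a temperature. A quick sketch:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(temp_c):
    return SIGMA * (temp_c + 273.15) ** 4

def temp(flux_wm2):
    return (flux_wm2 / SIGMA) ** 0.25 - 273.15

base = flux(4.0)             # ~334 W/m^2 for 4 C oceans
warmed = temp(base + 4.0)    # add ~4 W/m^2 of extra "insulation"
print(f"{base:.0f} W/m^2 -> {warmed:.2f} C, a warming of ~{warmed - 4.0:.2f} C")
# roughly 0.8 C of warming for the ball park
```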
For an analogy, let's say you have a boat. During the day, the inside of that boat can get hot because the sun heats the deck. Just before dawn, the temperature inside that boat will be whatever the temperature of the water the boat is floating on is, plus or minus a touch. If you cover the deck with a lot of reflective tarps to shade it, it won't get as hot in the day, but before dawn, the cabin will be at a temperature roughly equal to the water the boat is floating on. It ain't rocket science: the largest thermal mass controls the minimum temperature, so you need to figure out what controls that larger thermal mass. If you spend all of your time trying to figure out the trivial, you can lose sight of the obvious.
I will try to do a PG part two soon.
Saturday, January 12, 2013
What's all the Noise?
There was a discussion today on attribution of warming, the old what-causes-what. For a while I have been fairly convinced that the only thing "we" have a handle on is that CO2 and other non-condensable greenhouse gases have a radiant impact. The best guess is that a doubling of the CO2 equivalent would produce roughly 1 to 1.5 degrees of warming by improving the insulation of the atmosphere by roughly 3.7 to 4 Wm-2. When I estimate attribution, I stick with what is significant. In the discussion I estimated current warming to be due to natural variability 50%, CO2 equivalent 25% and land use 25%, and that was it. Immediately one of the warmers asked how I could neglect aerosols.
Aerosols have direct and indirect effects that cause cooling and/or warming and, according to recent papers, appear to have been grossly over-estimated. So I said I consider everything other than natural variability, CO2 equivalent and land use just noise. This doesn't sit well with just about anyone.
Well, volcanoes produce aerosols; that is natural. In the chart above is an estimate of forcing due to volcanic activity since 1960. Using the SST and satellite data, there is a response to aerosols and a rapid return to the mean trend. Since 1960, volcanoes have produced little hiccups in the meaningful temperature data, but no significant impact. Volcanic aerosols from circa 1960 on are, in my opinion, just noise. Could there be longer term effects from these volcanic aerosols? Possibly, but they appear to be grossly over-estimated, pretty much like the recent papers indicate for the shorter term.
Aerosols also have warming impacts, where darker particles, black carbon, can reduce albedo. Compared to the dust and ash fallout due to agricultural use, that impact also appears to be small. But for some reason warmers cannot seem to understand that farming about 10 percent of the total surface of the Earth, while it may produce aerosols, is perfectly happy being labeled as a "land use" impact.
Do natural and anthropogenic aerosols have indirect effects on cloud formation? I am sure they do, but if the majority of the aerosols causing that impact, both in forming clouds (sulfates) and in stopping clouds from forming (dust), are due to natural or land use causes, wouldn't they be either natural or land use related?
What about smog? Well that impact is typically erased since it is associated with urban heat island effects. If it is not included in the temperature record used to determine "global warming" how would it be a factor in "global warming"?
Which is the reason I consider ocean and satellite temperature data the "reliable" data for determining "global warming". Everything else has been massaged to death and aerosols used to beat the models into submission to explain the lack of projected warming. If something like aerosols would like to actually rise up out of the noise, then I will reconsider.
Update:
For a better illustration of this, using the Kaplan AMO (North Atlantic SST) raw data, here is the difference of 5-year extremes: the maximum value of the monthly data for a five-year period minus the minimum value of the monthly data for the same five-year period. Using this, the 1940 extreme range is the peak and the late 1960s is the valley. There is less than a tenth of a degree of difference between these extremes and the next closest extreme. There is also only three tenths of a degree of difference between these extremes and the mean.
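The extreme-range series is just a rolling 60-month window of max minus min. A minimal sketch, assuming the Kaplan AMO monthly values are already loaded into a pandas Series indexed by date (the stand-in data below is only for illustration):

```python
import numpy as np
import pandas as pd

def five_year_extreme_range(monthly: pd.Series) -> pd.Series:
    """Max minus min of the monthly values over a rolling 60-month window."""
    window = monthly.rolling(window=60, min_periods=60)
    return window.max() - window.min()

# Stand-in monthly series; replace with the Kaplan AMO (North Atlantic SST) data
dates = pd.date_range("1900-01-01", "2012-12-01", freq="MS")
amo = pd.Series(0.2 * np.random.randn(len(dates)), index=dates)

extremes = five_year_extreme_range(amo)
print(extremes.idxmax(), extremes.max())  # when and how large the 5-year range peaks
```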
Another Update: What the heck, it is a lazy Sunday.
Notice anything?
Tuesday, January 8, 2013
Hansen and Sato Paleo challenge
Hansen, J.E., and Mki. Sato, 2012: Paleoclimate implications for human-made climate change. In Climate Change: Inferences from Paleoclimate and Regional Aspects. A. Berger, F. Mesinger, and D. Šijački, Eds. Springer, pp. 21-48, doi:10.1007/978-3-7091-0973-1_2.
Paleoclimate data help us assess climate sensitivity and potential human-made climate effects. We conclude that Earth in the warmest interglacial periods of the past million years was less than 1°C warmer than in the Holocene. Polar warmth in these interglacials and in the Pliocene does not imply that a substantial cushion remains between today's climate and dangerous warming, but rather that Earth is poised to experience strong amplifying polar feedbacks in response to moderate global warming. Thus goals to limit human-made warming to 2°C are not sufficient — they are prescriptions for disaster. Ice sheet disintegration is nonlinear, spurred by amplifying feedbacks. We suggest that ice sheet mass loss, if warming continues unabated, will be characterized better by a doubling time for mass loss rate than by a linear trend. Satellite gravity data, though too brief to be conclusive, are consistent with a doubling time of 10 years or less, implying the possibility of multi-meter sea level rise this century. Observed accelerating ice sheet mass loss supports our conclusion that Earth's temperature now exceeds the mean Holocene value. Rapid reduction of fossil fuel emissions is required for humanity to succeed in preserving a planet resembling the one on which civilization developed.
http://pubs.giss.nasa.gov/abs/ha05510d.html
We are done! Toast! Game over! Hansen and Sato have determined that the Earth is currently less than 1 degree cooler than the warmest periods in the past MILLION years and that "Rapid reduction of fossil fuel emissions is required for humanity to succeed in preserving a planet resembling the one on which civilization developed."
The first obvious point is that with 6-plus billion humans on the planet, could it possibly "resemble" the planet before the 6-plus billion? No. 6-plus billion humans have altered the planet. If a planet resembling the Earth at the dawn of civilization is what is needed, we are F_ked.
From the abstract, Hansen and Sato would seem to be confident we are F_ked and the "ONLY" solution is to stop using fossil fuels now.
From the body of the paper, "We conclude that ocean cores provide a better measure of global temperature change than ice cores during those interglacial periods that were warmer than the pre-industrial Holocene." HS note a number of issues with ocean cores, but generally ocean cores do seem to provide a better measure of past climate than the surface temperatures recorded in ice cores and compared to a "global average surface temperature".
This chart of the "normalized" ocean core data versus the CO2 recorded in the Antarctic ice cores just compares the timing of events over the past 50 thousand years. The warming of the oceans recorded in the Lea et al. Galapagos reconstruction appears to precede the Antarctic CO2 increase and the higher northern latitude Tdo (temperature of the deep ocean) and Tsurf (temperature of the ocean surface) in the reconstruction by Bintanji and Van de Wal (2005); all of the data are available at the ncdc.noaa.gov/paleo website.
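Normalizing and eyeballing the lead is the quick version; if you want a number, a lagged correlation will do. This is only a sketch, and the file and column names are placeholders for however you store the downloads from the paleo archive:

```python
import numpy as np
import pandas as pd

def normalize(s):
    # zero mean, unit variance, so reconstructions with different units line up
    return (s - s.mean()) / s.std()

galapagos = normalize(pd.read_csv("lea_galapagos.csv", index_col="age_ka")["sst"])
co2 = normalize(pd.read_csv("epica_co2.csv", index_col="age_ka")["co2"])

# put both on a common 0.1 ka grid over the last 50 ka
grid = np.arange(0.0, 50.1, 0.1)
g = np.interp(grid, galapagos.index.values, galapagos.values)
c = np.interp(grid, co2.index.values, co2.values)

# scan a range of lags; where the correlation peaks hints at which series leads
for lag in range(-20, 21, 5):            # lags in 0.1 ka steps
    r = np.corrcoef(np.roll(g, lag)[20:-20], c[20:-20])[0, 1]
    print(f"lag {lag/10:+.1f} ka  r = {r:.2f}")
```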
Note that the Antarctic CO2 starts a rise circa 7.5 ka BP. That could indicate that man began having an impact on climate well before the discovery of the internal combustion engine, electricity and science, but around the time of the discovery of agriculture. One of the greater uncertainties is the impact that mankind's agricultural activities would have had on the land-based glacial area and persistent snow fields. Snow and ice are not conducive to agriculture, but the melt water is desirable. The burning used in slash-and-burn agriculture provides darker ash, which tends to speed up snow and ice melt. Wind erosion following slash-and-burn or more modern plowing provides darker dust and debris that would fall on snow and ice, enhancing the melt. Plus, spreading or broadcasting manure, peat dust and ash on spring snow enhances snow and ice melt. With man industriously using the simple tool of fire, he would have had a large impact on the area and amount of ice and snow that hampers or helps his agricultural pursuits. This would be land use change impacting climate, not fossil fuel emissions.
The conundrum that Hansen and Sato face is that the impact of that land use, predating the instruments used to build a "global surface temperature", is a significant unknown. In the paper HS mention the large non-linear impact of snow and ice albedo: "However, there is one feature in the surface albedo versus temperature scatter plots (Figs. 3e and 3f) that seems unrealistic: the tail at the warmest temperatures, where warming of 1°C produces no change of sea level or surface albedo." A little issue like responses that "seem unrealistic" might cause some to be less certain of their conclusions.
Over-emphasizing the radiant forcing without properly considering the less radiant internal thermodynamics could lead to questionable conclusions. Now that HS have "discovered" that ocean paleo appears to provide a more reliable picture of past climate, perhaps changes in the ocean currents that impact internal heat distribution need to be considered differently.
Toggweiler and Bjornsson (2000), using an ocean model, conclude, "The results here suggest that much of the full thermal effect of Drake Passage could have been realised well before the channel was very wide or very deep. This is because the mere presence of an open gap introduces an asymmetry into the system that is amplified by higher salinities in the north and lower salinities in the south. This kind of haline effect, and the possibility of increased Antarctic sea-ice and land-ice, lead us to conclude that the thermal response to the opening of Drake Passage could have been fairly abrupt and quite large, perhaps as large as the 4–5°C cooling seen in palaeoceanographic observations." The opening of the Drake Passage changed the way the oceans disperse and accumulate energy. Since radiant impacts must assume limited advection, changing the rate and pattern of advection of the primary source of energy that responds to atmospheric forcing would impact the efficiency of the atmosphere to retain energy. Since that surface, the oceans, provides the energy that the atmosphere retains, it would seem prudent to compare apples to apples.
This chart compares the HadSST2 versions of the northern and southern hemisphere sea surface temperatures. These "surface" temperatures are not the true ocean "surface" but typically sub-surface temperatures collected by ship engine raw water intakes, and by bucket samples during the sail era. The total warming is similar to the "global surface" warming since the sub-surface ocean data make up some 70% of the "global surface" record.
This chart uses the Berkeley Earth Surface Temperature (BEST) Tmin data for the northern and southern hemispheres with the same smoothing. Note the SH-NH difference in both charts. There is more variation in the SST than in the Tmin, but both show a shift beginning roughly in 1985. In the BEST data the shift appears to be extraordinary; in the SST data it does not.
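For anyone wanting to reproduce the SH-NH comparison, the important part is applying the same smoothing to both hemispheres before differencing. A sketch, with hypothetical file and column names and an assumed 60-month centered mean for the smoothing:

```python
import pandas as pd

nh = pd.read_csv("hadsst2_nh.csv", parse_dates=["date"], index_col="date")["anom"]
sh = pd.read_csv("hadsst2_sh.csv", parse_dates=["date"], index_col="date")["anom"]

def smooth(s):
    # same smoothing for both series so neither hemisphere gets an advantage
    return s.rolling(60, center=True).mean()

diff = smooth(sh) - smooth(nh)      # SH minus NH, as in the charts
print(diff["1980":"1990"])          # look for the shift around 1985
```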
There is a very interesting note in the Toggweiler and Bjornsson paper: "It is interesting to note in Fig. 10 that the overturning circulation initiated by an open Drake Passage has very little impact on the magnitude of tropical temperatures. This is because the same volume of cold deep water upwells through the thermocline in low latitudes whether Drake Passage is open or not."
According to the Herbert et al. 2010 tropical ocean temperature reconstruction, the Eastern Pacific, which has considerable impact on climate through the ENSO fluctuations, has cooled considerably over the past 4 million years. Globally, the opening of the Drake Passage may not have had much impact on tropical ocean temperatures, but the response of the Eastern Pacific would seem to indicate that the Drake Passage could have considerable impact on the ocean haline circulation.
As Toggweiler and Bjornsson note, "The results here, based on a coupled model run without restoring boundary conditions, suggest that the impact of an open Drake Passage is larger and more deeply ingrained in the climate system than previously supposed." The global ocean thermal asymmetry would appear to be a problem for a global radiant model that depends on symmetry to have full impact. With the opening of the Drake Passage having an impact of roughly 4 C, causing ~3 C of NH warming at the expense of ~3 C of SH cooling, Hansen and Sato's model of climate "seems unrealistic" because "global surface air temperature" is not the reference one should use on a planet covered with water.
The climate of the world has no doubt changed with 6 plus billion mouths to feed, but perhaps there is more to the story than fossil fuels.
Monday, January 7, 2013
The Real Paleo Challenge
The gauntlet has been tossed. The Franzke 2012 paper noted that individual Eurasian surface station data rarely show a significant trend and that the few stations that do are near Iceland and coastal Scandinavia. I say that is an indication that the heat capacity of the oceans is the "driver" of the trend. The Franzke paper is a "yawner" since it is well known that individual stations are not likely to show a trend, requiring more sophisticated methods to "tease" out a meaningful trend.
My simple redneck logic tells me that the harder you have to work to find a relationship between two variables, the more likely it is you are looking at the wrong variables. Tmin is a more relevant variable than Tave for GHG impact, vegetation growth, survival and nearly everything that has to do with "Global" conditions. So why would I try to force fit anything to an artificial average value that sucks?
Being a redneck, I prefer to work in extremes, also known as boundary conditions. That means I would look at Tmax and Tmin, then, if required for extra credit, toss in a Tave that is nearly meaningless by itself.
Since Cook et al. decided they had to trim the most recent portion of their Taymyr paleo reconstruction because it "diverged" from the instrumental record, showing that it does not significantly diverge from Tmin should link Franzke's "yawner" to the real "yawner": that Tave sucks. In fact, using Tave, which varies by several degrees, instead of Tmin, which varies by less than half as much, as a reference is not the brightest idea climate scientists had when they started this mess. Since the "SST" is more of a Tmin value, being taken below the surface, and the land "surface" temperature is actually measured 2 meters above the true surface, the entire "Global" Tave concept is meaningless.
The challenge is to expend the least energy possible to prove the obvious.
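In that spirit, the least-energy check of the Tmin versus Tave claim is a few lines of pandas. The station file and column names below are placeholders, and the comparison is of year-to-year spread, nothing fancier:

```python
import pandas as pd

# Compare the variability of annual-mean Tmin against annual-mean Tave.
df = pd.read_csv("best_station.csv", parse_dates=["date"], index_col="date")
df["tave"] = (df["tmin"] + df["tmax"]) / 2   # the usual (Tmin+Tmax)/2 average

annual = df.resample("A").mean()
print("std of annual Tmin:", round(annual["tmin"].std(), 2))
print("std of annual Tave:", round(annual["tave"].std(), 2))
```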
Friday, January 4, 2013
What Statistical Methods to Use?
Two new papers on statistical methods to be used in climate science were brought up for discussion this week: C. Franzke 2012, in "What is Signal and What is Noise" posted on realclimate, and Beenstock et al., the "AGU Bombshell" over at Wattsupwiththat. The two papers use "novel" methods to determine if statistically significant trends exist in the temperature records.
I am old school. If it takes "novel" statistical methods to tell if something of "significance" happened, I have either screwed up or need to hire statisticians, plural, to prove it. More than one statistician, because statistics is more of a black art than a science. You can find anything you look for with statistics.
Looking at CO2 and climate, I was able to find a "signature" of CO2 that I would consider statistically significant, but only over land and mainly over the higher-altitude land areas. I was not able to find a "signature" over the oceans. Using satellite data, I could find a "significant correlation" between CO2 and temperature, and between solar and sea level, and using "radiant layer estimates" with satellite data, a fair estimate of the ocean energy imbalance. All of those are just the basic gut checks required since you cannot trust anything, data, models or your eyeballs, in complex systems. That led me to "What is the Average Global Temperature?"
Since there is a "potential" error in the "average global surface temperature" of around 1.5 to 3.5 C degrees, there has been "land" warming of around 1 C degree over the past century, and the "noise" in the data is close to that "land" warming, I would think it is time to phone a statistical friend. So I blew off the surface as a reliable "metric" and shifted to deep ocean temperatures, which, while not all that accurate, are more stable and make more sense compared with the "more" reliable, IMO, satellite data. Because of that shift in my frame of reference, I am fairly confident that "sensitivity" to CO2 only, using a satellite era baseline, is roughly 0.8 +/- 0.2 C degrees and that "sensitivity" to the solar equivalent of a doubling of CO2 is roughly 1.6 +/- 0.4 C degrees. The solar "sensitivity" would be twice the CO2-only "sensitivity" because the atmosphere would radiantly amplify the ocean-absorbed energy.
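The arithmetic behind those numbers is not complicated, and it is worth writing out so nobody thinks there is a model hiding in there. The 3.7 Wm-2 per doubling figure is the one used earlier in these posts, and doubling the solar figure is my assumption about radiant amplification, nothing more:

```python
f2x = 3.7                     # W m^-2 per CO2 doubling, as used above
co2_only = 0.8                # C per doubling, my satellite-era estimate

print("implied response:", round(co2_only / f2x, 2), "C per W m^-2")
print("solar equivalent (x2):", 2 * co2_only, "+/-", 2 * 0.2, "C")
```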
Using the deep ocean as a reference, the entire game changes. Internal variability in the deep ocean heat capacity, and in the location of the bulk of that capacity, which changes with deep ocean currents and persistent weather patterns, is more than equal to the impact of CO2-only forcing. This just happens to agree with Toggweiler et al. and a few others who have noticed the rather odd methods being required to "defend" the IPCC consensus predictions.
So I believe there are two legitimate "theories", one based on faith more than fact and one based on observation, that have totally different end results. The question is which is which?
What statistical methods to use will be an important decision in that determination.
Thursday, January 3, 2013
GISS and the Millikelvin Kerfuffle
Discussion of the AGU poster by Dr. Vaughan Pratt on Climate Etc. continues into the New Year with more than the average amount of heat. Dr. Pratt produced a "fit", both of HadCRUT3 to AGW and in the "skeptic" community. I actually liked the concept, though I am a bit bummed that OpenOffice doesn't care for Dr. Pratt's Excel spreadsheet macros.
Dr. Pratt's SAW would represent natural variability and the AGW curve, of course, the impact of primarily CO2, since CO2 is the master of our current universe. The "fits" are impressive, but fitting an assumed function to an outdated "average" can be misleading.
The curve fit required a roughly 15-year shift, and due to filtering, the ends of the data are not reliable. There is also the issue of whether the "average" is the right value to fit.
The plot above is the hemispherical difference between the northern hemisphere extratropics and the tropics and between the southern hemisphere extratropics and the tropics. Since it was claimed in peer-reviewed literature that "internal variability" cannot be responsible for large changes in the "global" average temperature, it has to be so! That greatly simplifies the problem, since despite any evidence to the contrary, that naughty "internal variability" will average to zero over some convenient time frame. Looking at the plot above, someone not familiar with the assumed law of natural variability might think that the relationship between the northern and southern hemispheres is a tad more complex. On a "global" scale, internal variability appears to be causing only about 0.2 C of the fluctuation, but since the fluctuations by hemisphere are somewhat out of phase, I would think knowing whether that is the norm, or just happenstance caused by the period selected for the analysis, could be a little bit important.
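Building that hemispherical difference plot is straightforward once you have zonal-band anomalies; the point is only that both differences get identical treatment. File and column names are placeholders for whichever gridded product you pull the 20-90N, 20S-20N and 20-90S bands from:

```python
import pandas as pd

z = pd.read_csv("zonal_bands.csv", parse_dates=["date"], index_col="date")

nh_diff = z["nh_extratropics"] - z["tropics"]   # NH extratropics minus tropics
sh_diff = z["sh_extratropics"] - z["tropics"]   # SH extratropics minus tropics

def smooth(s):
    return s.rolling(60, center=True).mean()    # same smoothing for both

print((smooth(nh_diff) - smooth(sh_diff)).describe())
```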
CO2 forcing, especially over land surfaces, would have an impact on temperature. Adding CO2 would increase temperature, and adding energy would increase temperature, which CO2 would amplify. The plot above is consistent with the basic physics, but how would you determine what causes what?
Well, despite the assumed law of "natural internal variability", there are peer-reviewed papers that note:
This kind of haline effect, and the possibility of increased Antarctic sea-ice and land-ice, lead us to conclude that the thermal response to the opening of Drake Passage could have been fairly abrupt and quite large, perhaps as large as the 4–5°C cooling seen in palaeoceanographic observations.
Since the authors of that paper have also noted that "this kind of haline effect" can have irregular periods as long as and longer than the 161 years of data, of which ~21 years are not used due to the methodology (filter width) and 15 years are questionable due to the "shift" to fit, Dr. Pratt's poster is a fine example of curve fitting that has no meaning unless "global average" is the correct metric to use, the assumed "law of natural internal variability" being inconsequential is valid, and the data selection and length are a reasonable representation of the "norm". Other than that, Pratt's poster rocks.
What is interesting in the other "fit" is that "skeptics" assume that the poster represents some final nail in the coffin of CO2-dominated climate discussion and has to be retracted. It is not; it is just another tool in the toolbox to use for homing in on the degree of CO2 influence on climate. Pratt's fit fits nicely into the range of comfort one would expect if one assumes that natural internal variability is close to negligible. The question is, is it?
Wednesday, January 2, 2013
Skeptical Greenhouses
I was surprised to see two skeptic blogs describe the "Greenhouse Effect" in their first post of the new year. Dr. Roy Spencer started it with Misunderstood Basic Concepts and the Greenhouse Effect, which was commented on by Lubos Motl in Greenhouse Effect doesn't contradict any laws of physics.
One of the misunderstood concepts is that sunlight is required for the greenhouse effect. It is not: the "Greenhouse Effect" doesn't depend on the source of the heat, just that there is heat and the atmosphere is a barrier to that heat loss.
My add to this would be that the more uniform the source of the heat the more efficient the "greenhouse effect" would be.
Think about a round room versus a square room. If you place a single heat source in the center of each room, the round room would be more uniformly heated than the square room simply because of the geometry. If you placed the heat source against a wall in either, the round room would still be more uniformly heated. Since the square room has corners, placing the heat source in a corner would be the most inefficient location. If the insulation is very good, location would matter less.
To determine the temperature at a given place both the incoming and outgoing energy must be considered. Also true.
Consider the round and square rooms: if one section of either has less insulation value and the heat source is closer to that less well insulated section, the temperature of both rooms would be different than if they were equally well insulated.
Both points agree with the second law of thermodynamics. Of course.
Infrared absorption and infrared emission are almost always different from each other for a given radiant layer. Without a doubt; an ideal radiant surface is an idealized construct, a useful tool, nothing more. The thickness of a radiant surface would have to be "tuned" to the perfect optical depth for each frequency range to approach "ideal", and there could be no convection, advection or scattering, meaning both the volume above and the volume below the radiant layer would have to be isothermal. As a tool, it is only useful if you remember its limitations.
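A toy single-slab example shows why the absorption and emission of a radiant layer generally differ: the layer absorbs radiation characteristic of the warmer surface below but emits at its own colder temperature. The optical depth and both temperatures are assumed round numbers, and the slab is taken as isothermal, which is exactly the idealization the next point relaxes:

```python
import math

SIGMA = 5.67e-8                        # Stefan-Boltzmann constant
tau = 1.0                              # assumed optical depth of the layer
eps = 1 - math.exp(-tau)               # slab emissivity = absorptivity

T_below, T_layer = 288.0, 255.0        # assumed temperatures, K
absorbed = eps * SIGMA * T_below**4    # flux absorbed from the surface below
emitted = eps * SIGMA * T_layer**4     # flux emitted by the slab, each direction

print(round(absorbed, 1), "W m^-2 absorbed")
print(round(emitted, 1), "W m^-2 emitted each way (isothermal slab)")
```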
Radiation going up and radiation going down from a radiant layer aren't equal either. Because of the last condition, this has to be true.
The existence of the lapse rate itself requires the "Greenhouse Effect". Of course. An ideal black body and an ideal grey body do not exist, but they are useful "models" to compare to reality.
All of these are pretty much points I have made in the past. In fact, my personal estimate of the impact of a doubling of CO2 is lower than the "no feedback" sensitivity because it would require near-"ideal" conditions to produce the full impact of that estimate of sensitivity. There is always some degree of inefficiency that bites ya in the butt, based on Murphy's Law.
For the first two points I used a simple comparison of two rooms. If you consider that the "walls" of the rooms are moist air, you would understand more of the points I have attempted to make. Now that the material of the wall construction is known, what is the shape of the "room" and where is the heat source placed? Figure that out and you can use any dry radiant layer to finish solving the problem, if you can get the room to stop spinning.