Has global warming turned into global cooling? That depends on your choice of time frame. Once upon a time, a 15 year trend was enough to justify calling something a trend. Now 17 years is the new 15 years, so to be absolutely positive there is a trend, there needs to be a 17 year period with a slope large enough that you don't need a magnifying glass to see it. Technically, any time period shorter than 15 years is subject to being called "Cherry Picking".
Here is a quick and dirty tool for picking legitimate or controversial "Cherries". The zero line is no trend; anything positive is warming and anything negative is cooling. The shorter Kmart 17 year curves are every possible 17 year trend for the UAH troposphere, not lower, not middle, but the whole enchilada troposphere. The longer Kmart 11 year curves are ... you guessed it, all the 11 year trends for the whole troposphere.
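For anyone who wants to roll their own Kmart trend tool, here is a minimal sketch in Python. The file and column names are just placeholders, not the actual data files behind the charts; any monthly UAH or RSS anomaly series would do.

import numpy as np
import pandas as pd

def rolling_trend(series, months):
    """Slope (degrees C per year) of every possible window of `months` points."""
    def slope(window):
        years = np.arange(len(window)) / 12.0   # time axis in years
        return np.polyfit(years, window, 1)[0]  # linear fit, keep only the slope
    return series.rolling(months).apply(slope, raw=True)

# Hypothetical monthly anomaly file.
anoms = pd.read_csv("uah_troposphere.csv", index_col=0, parse_dates=True)["anomaly"]

trend_17yr = rolling_trend(anoms, 17 * 12)   # all possible 17 year trends
trend_11yr = rolling_trend(anoms, 11 * 12)   # all possible 11 year trends
# Anything above zero is warming, anything below zero is cooling.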
Some of you may notice that the 17 year curves do not indicate any cooling since the early 1980s. You might also notice that around 1988 the trends peaked, and since then they have not only declined but are converging on a common global trend. Neat huh?
The 11 year trends have gone negative and have also converged into a common global trend. It is almost like they are synchronized.
Here is another Redneck analytical tool that is fancier. Fancy enough for a French name, Target, pronounced Tar-zhay. This tool is sequential 131 month (11 year) standard deviations. Most of the more volatile regions are experiencing a reduction in deviation since a peak in the early 1990s. Some may notice the freaky yellow, orange and blue curves for the tropics, land and ocean. Weird huh?
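The Tar-zhay tool is the same idea with a rolling standard deviation instead of a rolling slope. A minimal sketch, again with placeholder file and column names:

import numpy as np
import pandas as pd

# Hypothetical monthly anomaly file; any of the regional series would work.
anoms = pd.read_csv("uah_troposphere.csv", index_col=0, parse_dates=True)["anomaly"]

# Remove the full-period linear trend so only the variability is left.
x = np.arange(len(anoms))
detrended = anoms - np.polyval(np.polyfit(x, anoms.values, 1), x)

# Sequential 131 month (11 year) standard deviations.
rolling_sigma = detrended.rolling(131).std()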
In case you are wondering, Plain Vanilla is the Whole Troposphere, not lower, not middle, the whole enchilada. Now if climate were the stock market, would you be investing in warming or hedging with a little cooling?
New Computer Fund
Friday, August 31, 2012
Thursday, August 30, 2012
Reduction in Variance - It's the Current Trend
Just for grins, I was playing with the Plain Vanilla UAH data. Plain Vanilla is just the average of the lower and middle troposphere products. Since there is a little issue over whether RSS or UAH has the better product due to some middle troposphere divergence, I averaged the two and hope that reduces any bias that UAH or RSS may have. It comes close, but RSS is still cooler than UAH for the past few years.
Since I have stated that the variance of the data sets is decreasing but noise makes it hard to prove, the chart above is a plot of 131 month standard deviations of the detrended data for all Plain Vanilla UAH regions, land and ocean. The chart is busy but neat looking. You can see not only that the standard deviation is lower at the end, a reduction in variance, but where the majority of the reduction is happening. The dip-plateau feature of the tropical data sets is pretty unusual looking.
The top light blue curve is the Northern Polar oceans. After this year's sea ice melt, the change in that data set will be interesting.
Wednesday, August 29, 2012
Baseline Impact
Temperatures are rising non-uniformly around the globe. This compares the extra-tropics and tropics using the GISS LOTI data from 1941 to 2011, with the full average as a baseline. Using a simple linear regression and extending the X-axis to the year 2080, both the tropics and the southern extent would be expected to warm to ~0.75 degrees above the baseline. The northern extent would be expected to warm more, to ~1.25 degrees above the baseline.
Using the same data with a 1980 to 2011 average for a baseline, the linear regression indicates about the same warming would be expected in the tropics and southern extra-tropics, with more warming expected in the northern extra-tropics, ~2.8 degrees of warming instead of 1.75 degrees.
Using the full data series with the 1880 to 2011 average as a baseline, the southern extra-tropics may be expected to warm by about the same 0.75 degrees, the tropics less, to ~0.55 degrees, and the northern extra-tropics to ~1.1 degrees.
There is obviously a larger oscillation in the northern extra-tropics, and it has a physical explanation: the northern extra-tropics have more land and less ocean than the rest of the world, so they have lower heat capacity or thermal mass. The northern extra-tropics would therefore be more sensitive to change than the rest of the world.
So will the region of the Earth with the least thermal mass drive the rest of the world to a much warmer temperature or will the more stable southern extra-tropics dampen the impact of the larger swings in temperature in the Northern extra-tropics? Which choice of baseline would be a better indication of the expected change?
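To see how much the baseline choice alone moves the projected numbers, here is a minimal sketch, assuming annual GISS LOTI zonal means are available in a file; the file and column names are placeholders.

import numpy as np
import pandas as pd

# Hypothetical annual zonal-mean file with columns like "tropics", "nh_ext", "sh_ext".
zones = pd.read_csv("giss_loti_zonal.csv", index_col="year")

def projected_warming(series, base_start, base_end, target_year=2080):
    """Rebaseline to the chosen period, fit a line, extend it to the target year."""
    anomaly = series - series.loc[base_start:base_end].mean()   # baseline choice
    slope, intercept = np.polyfit(anomaly.index, anomaly.values, 1)
    return slope * target_year + intercept

for band in ["tropics", "nh_ext", "sh_ext"]:
    full = projected_warming(zones[band], 1880, 2011)
    recent = projected_warming(zones[band], 1980, 2011)
    print(band, round(full, 2), round(recent, 2))
# Shifting the baseline shifts the intercept, not the slope, so the projected
# anomaly at 2080 changes even though the underlying trend does not.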
Tuesday, August 28, 2012
Barrel of Holes - Springer's Leaks
If you fill a barrel full of holes with water, all the holes below the water line would leak. With the barrel full of water, there would be more water pressure available near the bottom of the barrel than at the top. The rate of water flow out of the bottom holes would be greater than out of the upper holes for a given hole size. Plugging a hole at the bottom, with its higher rate of flow, would increase the flow through all the other holes below the water line more than plugging a hole at the top. This is a good analogy for CO2 forcing in the atmosphere.
If all of the holes were the same size and at the same level, the solution is simple. With 10 holes initially, plugging one hole would increase the rate of flow through the other holes by 10 percent. If you increase 90 percent by 10 percent the result would be 99 percent. There would be less total flow, but all the open holes would have greater flow than initially. If there were a constant flow into the top of the barrel with ten holes, closing one would increase the level in the barrel. If the barrel were full initially, then closing one hole would cause the barrel to overflow. If the barrel were not full initially, then the rising water level would increase the pressure on each open hole, increasing its flow. If the barrel is large enough, the flow through all the open holes would increase back to the original total rate of flow, with the barrel holding a new, higher level.
If you don't know how many holes there were, or how full the barrel was, or what the rate of flow into the barrel initially was, or how much the barrel can hold, then the problem is more complicated. Now put the barrel in the back of a pickup traveling down a bumpy dirt road. Welcome to climate change.
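The ten-hole arithmetic is easy to play with in a few lines of code. This is only a toy illustration of the analogy, using Torricelli's law for the outflow and made-up numbers for the hole size and water depth:

import math

G = 9.81            # gravity, m/s^2
HOLE_AREA = 1e-4    # made-up hole size, m^2
DEPTH = 0.8         # made-up water depth above the holes, m

def total_outflow(n_open_holes, depth=DEPTH):
    """Torricelli outflow (m^3/s) through n identical holes at the same level."""
    v = math.sqrt(2 * G * depth)             # outflow velocity at that depth
    return n_open_holes * HOLE_AREA * v

q10 = total_outflow(10)          # ten holes open
q9 = total_outflow(9)            # one hole plugged at the same water level
print(q9 / q10)                  # -> 0.9, i.e. less total flow

# With a constant inflow equal to q10, the level must rise until the nine
# remaining holes pass the original total flow again; each hole then carries
# 10/9 of its original flow.
new_depth = DEPTH * (10 / 9) ** 2            # Torricelli: flow scales with sqrt(depth)
print(total_outflow(9, new_depth) / q10)     # -> 1.0, original total flow restored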
Saturday, August 25, 2012
Degrees of Confusion-Another Modest Proposal
Updated:
One of my biggest questions when trying to determine how much warming the globe has experienced due to the activities of mankind is: what should be average? By default, 1951 to 1980 is considered "average" because it was selected as "average" and is used as the baseline for determining the impact of the Greenhouse effect.
Above is the gold standard of global surface temperature reconstructions, using the 1951 to 1980 baseline everyone is familiar with.
This is the same data with 1995 to 2010 used as a base line. There is no difference in the global mean curve, but look at the two hemispheres.
This base line provides a better view of the internal oscillations between the hemispheres. The northern hemisphere with less ocean volume and more land area has less thermal mass so would be more sensitive to changes in forcing.
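Rebaselining is nothing more than subtracting a different reference-period mean. A minimal sketch, assuming monthly hemispheric anomaly series in a pandas DataFrame; the file and column names are placeholders.

import pandas as pd

# Hypothetical DataFrame with a DatetimeIndex and "NH" / "SH" anomaly columns.
hemis = pd.read_csv("gistemp_hemispheres.csv", index_col=0, parse_dates=True)

def rebaseline(df, start, end):
    """Subtract the mean of the chosen reference period from every column."""
    return df - df.loc[start:end].mean()

classic = rebaseline(hemis, "1951", "1980")   # the familiar GISS-style baseline
recent = rebaseline(hemis, "1995", "2010")    # the alternative baseline used above
# Each curve only shifts by a constant, so the shape of every series, including
# the global mean, is unchanged; the later baseline simply makes the hemispheric
# oscillation easier to see against the zero line.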
Another minor issue is that land, by definition, sits above sea level. As elevation increases, the thermal mass of the air decreases, meaning the variation in temperature for a given unit of forcing would be greater at higher elevations than at sea level.
This may not be properly accounted for when comparing the satellite measurements with each other and with the surface station data, unless the weighting filters of each satellite temperature product perfectly match.
By using a Plain Vanilla Troposphere, the average of the lower and middle troposphere data products, the difference between the RSS and UAH data products is reduced, though there is some drift between the two products near the end of the data.
Working backwards from the more abundant and arguably more accurate data, the view of climate change, changes. This view should reduce some of the increasing uncertainty.
This is a detrended version showing the 1951 to 1980 choice of baseline for determining climate change. This just provides a different perspective on what changed what. The temperature in the tropics had nearly a step drop around 1940 that may have produced the longer term reduction in the northern extra-tropics. That is not indicative of man-made aerosols or other normally considered forcings, to my knowledge. It is a bit of a mystery.
This is the Plain Vanilla version of the UAH troposphere data, or the average of the lower and middle troposphere products. All of the series were detrended and zeroed to the same series mean. The 1979 to 1995 comparison above shows that the global mean in blue is higher than all the individual regional land and ocean series.
The same data from 1995 to 2012 shows that all the regional series warmed, nothing surprising there.
This plot focuses on the northern hemisphere. The global mean is in the back, kinda hard to see, with the NH land and oceans in the foreground. The NH oceans were curving upward starting around 1985 following the El Chichon eruption, then were depressed again in 1991 by the Pinatubo eruption. The NH land and oceans stabilized slightly above the global mean following the 1998 super El Nino. In 1979, the beginning of the satellite data, the NH was nearly 0.4 degrees C below the global mean. So did CO2 cause the warming or was the NH recovering from some other impact?
Tuesday, August 21, 2012
Comparing Logic
The heart of the climate change debate is pretty much what we can expect from the oceans. There is no proper way to estimate this. That pushes me towards comparing as many ways to estimate the ultimate impact of just CO2 as possible and basically averaging the results. 1C to 1.6C for a doubling is the average range for "no feedback climate sensitivity." Any other estimate requires some interaction of various climate mechanisms to amplify that impact. Amplify is generally considered to mean multiplying the impact by a factor of zero or above, though in climate science many consider the minimum amplification to be one. Negative amplification is not considered.
Since the range of past climate may be well below today's climate, the initial value of climate has to be correctly considered to allow the assumption of an amplification of 1 or above. Since CO2 is assumed to only increase forcing, or warm, then negative warming, cooling, is a pain in the ass as far as the definition of the sensitivity of climate to CO2 forcing is concerned. This causes the range of sensitivity estimates to be quite large. For example, if natural climate variability is 3C then climate sensitivity would have to be 3C plus 1 to 1.6C to meet the requirements of the definition. That results in a higher end estimate of 4.6C and a low end of 1C. That is not particularly useful information.
Below is an explanation of an estimate of the ocean heat uptake expected due to increased CO2 concentration, by WebHubbleTelescope.
“Luckily, you have all the answers so I don’t have to fret. Fret away. So far Webby is keeping that particular answer to himself.”
Not really. I have it documented elsewhere. Cap’n knows this, but not everyone does (in The Oil Conundrum).
What I will do is solve the heat equation with initial conditions and boundary conditions for a simple experiment. And then I will add two dimensions of Maximum Entropy priors.
The situation is measuring the temperature of a buried sensor situated at some distance below the surface after an impulse of thermal energy is applied. The physics solution to this problem is the heat kernel function which is the impulse response or Green’s function for that variation of the master equation. This is pure diffusion with no convection involved (heat is not sensitive to fields, gravity or electrical, so no convection).
However, the diffusion coefficient involved in the solution is not known to any degree of precision. The earthen material that the heat is diffusing through is heterogeneously disordered, and all we can really guess is that it has a mean value for the diffusion coefficient. By inferring through the maximum entropy principle, we can say that the diffusion coefficient has a PDF that is exponentially distributed with a mean value D.
We then work the original heat equation solution with this smeared version of D, and then the kernel simplifies to an exponential solution

$$ g(x,t) = \frac{1}{2\sqrt{Dt}}\, e^{-x/\sqrt{Dt}} $$
But we also don’t know the value of x that well and have uncertainty in its value. If we give a Maximum Entropy uncertainty in that value, then the solution simplifies to

$$ g(t) = \frac{1}{2\left(\sqrt{Dt} + x_0\right)} $$

where x0 is a smeared value for x.
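For the record, the two smearing steps described above work out as follows, assuming the standard one-dimensional heat kernel and exponential Maximum Entropy priors with means D and x0:

$$ \int_0^{\infty} \frac{1}{D} e^{-D'/D}\, \frac{1}{\sqrt{4\pi D' t}}\, e^{-x^2/(4 D' t)}\, dD' = \frac{1}{2\sqrt{D t}}\, e^{-x/\sqrt{D t}} $$

$$ \int_0^{\infty} \frac{1}{x_0} e^{-x/x_0}\, \frac{1}{2\sqrt{D t}}\, e^{-x/\sqrt{D t}}\, dx = \frac{1}{2\left(\sqrt{D t} + x_0\right)} $$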
This is a valid approximation to the solution of this particular problem and the following Figure 1 is a fit to experimental data. There are two parameters to the model, an asymptotic value that is used to extrapolate a steady state value based on the initial thermal impulse and the smearing value which generates the red line. The slightly noisy blue line is the data, and one can note the good agreement.
Figure 1: Fit of thermal dispersive diffusion model (red) to a heat impulse response (blue).
Notice the long tail on the model fit. The far field response in this case is the probability complement of the near field impulse response. In other words, what diffuses away from the source will show up at the adjacent target. By treating the system as two slabs in this way, we can give it an intuitive feel.
By changing an effective scaled diffusion coefficient from small to large, we can change the tail substantially, see Figure 2. We call it effective because the stochastic smearing on D and Length makes it scale-free and we can no longer tell if the mean in D or Length is greater. We could have a huge mean for D and a small mean for Length, or vice versa, but we could not distinguish between the cases, unless we have measurements at more locations.
Figure 2: Impulse response with increasing diffusion coefficient, top to bottom.
The term x represents time, not position.
In practice, we won’t have a heat impulse as a stimulus. A much more common situation involves a step input for heat. The unit step response is the integral of the scaled impulse response.
The integral shows how the heat sink target transiently draws heat from the source. If the effective diffusion coefficient is very small, an outlet for heat dispersal does not exist and the temperature will continue to rise. If the diffusion coefficient is zero, then the temperature will increase linearly with time, t (again this is without a radiative response to provide an outlet).
Figure 3: Unit step response of dispersed thermal diffusion. The smaller the effective thermal diffusion coefficient, the longer the heat can stay near the source.
Eventually the response will attain a square root growth law, indicative of a Fick’s law regime of what is often referred to as parabolic growth (somewhat of a misnomer). The larger the diffusion coefficient, the more that the response will diverge from the linear growth. All this means is that the heat is dispersively diffusing to the heat sink.
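Integrating the smeared impulse response gives a closed form that shows both regimes, a sketch that follows from the kernel above:

$$ \int_0^{t} \frac{dt'}{2\left(\sqrt{D t'} + x_0\right)} = \frac{1}{D}\left[\sqrt{D t} - x_0 \ln\!\left(1 + \frac{\sqrt{D t}}{x_0}\right)\right] $$

For small $\sqrt{Dt}/x_0$ the right side grows linearly with t, and for large $\sqrt{Dt}/x_0$ it approaches the $\sqrt{t}$ Fick's law regime.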
Application to AGW
This has implications for the “heat in the pipeline” scenario of increasing levels of greenhouse gases and the expected warming of the planet. Since the heat content of the oceans is about 1200 times that of the atmosphere, it is expected that a significant portion of the heat will enter the oceans, where the large volume of water will act as a heat sink. This heat becomes hard to detect because of the ocean’s large heat capacity, and it will take time for climate researchers to integrate the measurements before they can conclusively demonstrate that diffusion path.
In the meantime, the lower atmospheric temperature may not change as much as it could, because the GHG heat gets diverted to the oceans. The heat is therefore “in the pipeline”, with the ocean acting as a buffer, capturing the heat that would immediately appear in the atmosphere in the absence of such a large heat sink. The practical evidence for this is a slowing of the atmospheric temperature rise, in accordance with the slower sqrt(t) rise than the linear t. However, this can only go on so long, and when the ocean’s heat sink provides a smaller temperature difference than the atmosphere, the excess heat will cause a more immediate temperature rise nearer the source, instead of being spread around.
In terms of AGW, whenever the global temperature measurements start to show divergence from the model, it is likely due to the ocean’s heat capacity. Like the atmospheric CO2, the excess heat is not “missing” but merely spread around.
EDIT:
The contents of this post are discussed on The Missing Heat isn’t Missing at all.
I mentioned in comments that the analogy is very close to sizing a heat sink for your computer’s CPU. The heat sink works up to a point, then the fan takes over to dissipate that buffered heat via the fins. The problem is that the planet does not have a fan nor fins, but it does have an ocean as a sink. The excess heat then has nowhere left to go. Eventually the heat flow reaches a steady state, and the pipelining or buffering fails to dissipate the excess heat.
What’s fittingly apropos is the unification of the two “missing” cases of climate science.
1. The “missing” CO2. Skeptics often complain about the missing CO2 in atmospheric measurements relative to what is anticipated based on fossil fuel emissions. About 40% was missing by most accounts. This led to confusion between the ideas of residence times versus adjustment times of atmospheric CO2. As it turns out, a simple model of CO2 diffusing to sequestering sites accurately represented the long adjustment times and the diffusion tails account for the missing 40%. I derived this phenomenon using diffusion of trace molecules, while most climate scientists apply a range of time constants that approximate diffusion.
2. The “missing” heat. Concerns also arise about missing heat based on measurements of the average global temperature. When a TCR/ECS* ratio of 0.56 is asserted, 44% of the heat is missing. This leads to confusion about where the heat is in the pipeline. As it turns out, a simple model of thermal energy diffusing to deeper ocean sites may account for the missing 44%. In this post, I derived this using a master heat equation and uncertainty in the parameters. Isaac Held uses a different approach based on time constants.
So that is the basic idea behind modeling the missing quantities of CO2 and of heat — just apply a mechanism of dispersed diffusion. For CO2, this is the Fokker-Planck equation and for temperature, the heat equation. By applying diffusion principles, the solution arguably comes out much more cleanly and it will lead to better intuition as to the actual physics behind the observed behaviors.
I was alerted to this paper by Hansen et al (1985) which uses a box diffusion model. Hansen’s Figure 2 looks just like my Figure 3 above. This bends over just like Hansen’s does due to the diffusive square root of time dependence. When superimposed, it is not quite as strong a bend, as shown in Figure 4 below.
This missing heat is now clarified in my mind. In the paper Hansen calls it “unrealized warming”, which is heat entering into the ocean without raising the climate temperature substantially.
EDIT:
The following figure is a guide to the eye which explains the role of the ocean in short- and long-term thermal diffusion, i.e. transient climate response. The data from BEST illustrates the atmospheric-land temperatures, which are part of the fast response to the GHG forcing function, while the GISTEMP temperature data reflects more of the ocean’s slow response.
Figure 6: Hansen’s original projection of transient climate sensitivity plotted against the GISTEMP data, which factors in ocean surface temperatures.
*
TCR = Transient Climate Response
ECS = Equilibrium Climate Sensitivity
Added:
“Somewhere around 23 x 10^22 Joules of energy over the past 40 years has gone into the top 2000m of the ocean due to the Earth’s energy imbalance “
That is an amazing number. If one assumes an energy imbalance of 1 watt/m^2 and integrates this over 40 years and over the areal cross-section of the earth, that accounts for 16 x 10^22 joules.
The excess energy is going somewhere and it doesn’t always have to be reflected in an atmospheric temperature rise.
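The back-of-the-envelope arithmetic behind that 16 x 10^22 J figure is just the imbalance times the cross-sectional area times the elapsed time; a quick check, with the Earth radius as the only assumed input:

import math

IMBALANCE = 1.0                      # assumed energy imbalance, W/m^2
R_EARTH = 6.371e6                    # Earth radius, m
SECONDS = 40 * 365.25 * 24 * 3600    # 40 years in seconds

cross_section = math.pi * R_EARTH**2          # areal cross-section intercepting sunlight
joules = IMBALANCE * cross_section * SECONDS
print(f"{joules:.2e} J")                      # roughly 1.6e23 J, i.e. 16 x 10^22 joules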
To make an analogy consider the following scenario.
Lots of people understand how the heat sink attached to the CPU inside a PC works. What the sink does is combat the temperature rise caused by the electrical current being injected into the chip. That current multiplied by the supply voltage gives a power input specified in watts. Given a large enough attached heat sink, the power gets dissipated into a much larger volume before it gets a chance to translate quickly into a temperature rise inside the chip. Conceivably, with a large enough thermal conductance and a large enough mass for the heat sink, and an efficient way to transfer the heat from the chip to the sink, the process could defer the temperature rise to a great extent. That is an example of a transient thermal effect.
The same thing is happening to the earth, to an extent that we know must occur but with some uncertainty based on the exact geometry and thermal diffusivity of the ocean and the ocean/atmospheric interface. The ocean is the heat sink and the atmosphere is the chip. The difference is that much of the input power is going directly into the ocean, and it is getting diffused into the depths. The atmosphere doesn’t have to bear the brunt of the forcing function until the ocean starts to equilibrate with the atmosphere’s temperature. This of course will take a long time based on what we know about temporal thermal transients and the Fickian response of temperature due to a stimulus.
End of WebHubbleTelescope post. I don't want to mess up the link by changing font. BTW, some of the links are not active so I may revise the links in the future.
WebHubbleTelescope is obviously educated and has considerable math skills. But does his logic pass muster?
"For example, if natural climate variability is 3C then climate sensitivity would have to be 3C plus 1 to 1.6C to meet the requirement s of the definition. That results in a higher end estimate of 4.6C and a low end of 1 C degrees. That is not particularly useful information. " Quoting myself, I don't think so. Without allowing for the full range of natural variability, his analysis doesn't really inform. Since Earth's climate system is somewhat chaotic and appears to have regions of bi-stability, impact based on the assumption that any relatively short time period represents "average" is likely flawed. For that reason I spend most of my time attempting to find indications of what "average" could be, in order to determine what the impact of CO2 would be based on what we could otherwise expect from Earth's climate.
Using the past two to three hundred years as "average", Web's analysis is probably right on the mark. If the past two to three hundred years are 1C below what we could expect without CO2, his analysis would be high by approximately 1C. When a complex problem is as sensitive to the choice of initial values as climate, standard methods can produce meaningless results.
Sunday, August 19, 2012
Back from the Future
More Stuff at the End
The choice of base line is critical in climate science. By "cherry picking" a base line you can determine that something is "unprecedented" or not "unprecedented". Welcome to non-linear dynamics. Being I am a rebel without a scientific clue, I have "cherry picked" my base line as the AQUA quality era from 2002 to 2010. Quality because the Sea Surface channel took a dump about 2010. My hypothesis is that the Glacial cycles are primarily influenced by Wobble. I was told by the real scientific types I needed a theory, so that is my theory, though it does tend to blend a number of other theories. So it is not really all that original. Possibly the only original part is my estimate of the range of bi-stability. I haven't figured out how to make a convincing argument yet, but there appears to be an approximately 2C separation between the low normal and high normal bi-stable SST set points controlled by the fresh water and salt water freezing points. Based on Tierney et al. TEX86 Lake Tanganyika temperature reconstructions and the apparent peak-to-peak values of Neukum et al. Southern South American temperature reconstructions, the 2002 to 2010 base line appears to be close to the high normal.
There is considerable uncertainty in the paleo reconstructions, and the TEX86 method is prone to have more potential issues, so the high normal range is a touch iffy. Now the paleo fans will know that the Northern Hemisphere and Antarctic reconstructions of past temperature show a helluva lot more variation than this chart shows. Looking back at the first chart, there are regression lines for the RSS northern and southern hemispheres. The regression line for the southern hemisphere is a reasonably nice match to the slope of the GISS Land and Ocean Temperature Index (LOTI). That can be an indication of the lag of SST behind land temperature change. You can graph the RSS NH with the BEST land temperature product and find that it more closely matches land temperature change.
This is a close up of the instrumental temperature data with the BEST volcanic forcing estimate replacing GISS LOTI. Using the 2002 to 2010 "cherry picked" base line, there appears to be stronger sensitivity to forcing in the northern hemisphere than the southern hemisphere. Comparing the 1983 and 1991 volcanic pulses, the sensitivity to forcing appears to decrease, especially in the southern hemisphere. Of course, both of those "pulses" were NH volcanoes, but the 1963 "pulse" appears to have had a greater impact on SST in the pre-satellite era. Decreasing sensitivity to forcing "pulses" would be a characteristic of a system closer to a control set point, as would the overshoot and decay patterns that, if you squint hard enough, you can "eyeball" in the chart above.
Since it is likely that a good deal of the depression below the high normal set point would be natural, CO2 forcing would be most discernible at the upper or lower bi-stable set points. That pretty much explains why Dr. Vaughan Pratt notices a 14.5 year lag in the CO2 forcing that is only apparent in the past 30 years.
Of course, being scientifically clueless, there is a fairly high probability that I am wrong. After all, I am just a guy that has experience measuring temperatures and thermal capacities while adjusting control systems in non-linear systems with regions of instability. It is not like I was silly enough to design them.
More Stuff
I just added the GISS land only and the equations for the RSS trend lines. You can compare the mean value line for the GISS land and HADSST2.
Even more stuff
This is my K-Mart sequential linear regression analysis of the same time series combination. Starting at point A, HADSST2 leads the way, with the vertical line between A and B showing the lag. At point B there is a shift where HADSST2 decreases but both GISS series continue. Around 1988 there is a reorganization of sorts and all three converge in 1995, what I consider a regime change. Block C highlights the relationships of all series around the 2002-2010 base line period.
To highlight the relationship, this plot compares the ratio of GISS LOTI and LAND to HADSST2. Since 2002 to 2010 is the common base line, there would be a convergence, but the reduction in variation looks a little stronger than I would expect with the common base line. Changing the base line to 1980 to 1988 just offsets the two plots slightly. The change in variance could be due to instrumentation improvements, but I am more suspicious of the 1995 regime change since it is consistent in all data sets.
Friday, August 17, 2012
More Land Use Stuff
Updated below:
With an exponentially increasing population, just about everything increases exponentially. Increasing CO2 forcing will have a natural log impact, as would most everything else that changes heat capacity. With most everything having the potential to produce the same shaped impact curves, picking out what did what is not an easy job.
Dr. Vaughan Pratt, who is working on his own model for Human Caused Warming (HCW), turned me on to a discovery in his yet to be published paper. There is an approximate 14.5 year lag of temperature behind forcing change. There are lots of lags in a huge system with unbelievable amounts of heat capacity. So I mentioned that natural variability and land use can cause the same "look" in a plot. Above is a comparison of the HADSST2 sea surface temperature anomaly with the Land Use Carbon Flux from the Carbon Dioxide Information Analysis Center. The flux data were converted to an anomaly by first shifting to a 1951 to 1980 base line and then dividing by the average for that period. It is not a perfect attempt at determining forcing, just a way to roughly scale the data to the HADSST2 data. The yellow plot is a 15 year trailing moving average used to roughly simulate the 14.5 year lag. The carbon flux is not a perfect proxy for land use impact, but it is good enough for my purposes.
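A minimal sketch of that rough scaling and the trailing average, assuming annual CDIAC land-use flux and HADSST2 anomalies are already in files; the file and column names are placeholders.

import pandas as pd

# Hypothetical annual series: CDIAC land-use carbon flux and HADSST2 anomalies.
flux = pd.read_csv("cdiac_landuse_flux.csv", index_col="year")["flux"]
sst = pd.read_csv("hadsst2_annual.csv", index_col="year")["anomaly"]

base = flux.loc[1951:1980]
flux_anom = (flux - base.mean()) / base.mean()     # shift to 1951-1980, then scale by its mean

lagged = flux_anom.rolling(15).mean()              # 15 year trailing average, a rough
                                                   # stand-in for the ~14.5 year lag
comparison = pd.DataFrame({"HADSST2": sst, "LandUseFlux": lagged})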
One of the regions that appears to be slightly underestimated in the land use flux data is the former Soviet Union. The BEST plot for Kazakhstan above is for one of the regions most greatly impacted by Khrushchev's Virgin Lands Program, which started circa 1953. I have seen various estimates for the total land area converted to agricultural use during that campaign, with Wikipedia estimating close to 500,000 square kilometers. The region impacted totals about 6.5 million km^2 in Kazakhstan and Siberia. Before Khrushchev, Stalin and the Tsar also heavily promoted exploitation of Siberia and the 'Stans. The European Heritage Library has a delightfully titled article, The disastrous human and environmental effects of Soviet collectivization on Kazakhstan.
That is what is left of the Aral Sea, once the fourth largest freshwater lake in the world, now called the Aralkum, which pretty much qualifies as an ecological disaster. The Aral Sea is often used as a poster child for Anthropogenic Global Warming (AGW). The difference between AGW and HCW is that AGW is generally assumed to be caused by CO2. That disaster was caused by poor governmental planning which used most of the flow into the Aral Sea for irrigation of the hundreds of thousands of square kilometers of virgin lands converted to agriculture by the Soviets.
It is impossible to deny that warming is in some part caused by human activities, but assuming that CO2 played a starring role is a bit of a stretch when up to 6.5 million square kilometers in just one nation was converted from natural steppe and forest into corn fields and desert.
This doesn't in any way disprove the CO2 portion of HCW, but it does tend to temper the rationale that CO2 caused "most" of the warming. In any case, there is an experiment in progress that may help determine how much is caused by what "forcing".
The new stuff:
Here is a screen capture of GISS temp using 1950 to 1960 as a base line showing some serious spring warming in the general vicinity of the Virgin Lands Campaign. That could be a natural oscillation or a not so natural oscillation.
Wednesday, August 8, 2012
Plain Vanilla Troposphere
With all the confusion about which data set is accurate and which isn't, it looks like I will go with plain vanilla for a while. Plain Vanilla is the satellite troposphere temperature. Not the lower troposphere or the mid-troposphere, just the troposphere. Above is the comparison of the RSS and UAH USA "Plain Vanilla" troposphere. That would be lower troposphere plus mid troposphere divided by two. A simple average.
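A minimal sketch of the Plain Vanilla construction, assuming the lower and middle troposphere anomaly series for one product are in a DataFrame; the file and column names are placeholders.

import pandas as pd

# Hypothetical monthly UAH file with "TLT" (lower) and "TMT" (middle) columns.
uah = pd.read_csv("uah_usa.csv", index_col=0, parse_dates=True)

# Plain Vanilla troposphere: a simple average of the two layer products.
uah["plain_vanilla"] = (uah["TLT"] + uah["TMT"]) / 2.0

# The same construction for RSS lets the two products be differenced and their
# trends compared, as in the chart above.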
The yellow linear regression is 0.0014C per year, or about 0.01C per decade, difference in the trends of the two products. I can live with that. The blue linear regression is 0.0168C per year or 0.168C per decade. That is the rate the USA Plain Vanilla Troposphere has been warming since 1980.
That is a little higher than the 0.155C per decade that Anthony Watts gets with the new high quality surface stations. Again, it is close enough for government work in my opinion.
Those are Plain Vanilla global. The first starts in 1995 and shows how RSS and UAH are starting to diverge, which may have something to do with the AQUA satellite, which is not supposed to drift in orbit significantly.
More Cartoon Craziness
The drawing above is the older NASA Earth Energy Budget which was once on Wikipedia. There was a little controversy about this budget since it didn't show the "GREENHOUSE EFFECT". It wasn't intended to show the "GREENHOUSE EFFECT". It was just intended to show the estimated average annual energy flows in from the Sun and back out to space. I added a few numbers based on what is currently the best estimate of the incoming solar energy on an annual basis: 341.1 +/- 0.1 Wm-2 averaged over the entire 510,000,000 square kilometers of the Earth's surface. The large red arrow with 64%, to which I added 218 Wm-2, plus the smaller 6% red arrow, to which I added 20.5 Wm-2, total 238.5 Wm-2, which is approximately the total emission of energy from the Earth excluding reflected sunlight. The Earth absorbs approximately the same amount from the Sun.
I added a Blue Box for the atmosphere that absorbs energy both from the Sun and from the surface. Since NASA used percentages, the total absorbed is 19% plus 3% from the Sun, and 7%, 23% and 15% from the surface. The total percentage absorbed from the Sun is 22%, or 75 Wm-2, and from the surface is 153.5 Wm-2, giving a grand total of 228.5 Wm-2, which is 10.5 Wm-2 greater than the 218 Wm-2 I have typed into the Blue Box. That 10.5 Wm-2 is a main part of the controversy.
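The percentage bookkeeping is easy to verify; a quick check, using the 341.1 Wm-2 annual average solar input stated above:

SOLAR_IN = 341.1                       # annual average solar input, Wm-2

from_sun = (0.19 + 0.03) * SOLAR_IN    # 22% absorbed by the atmosphere from the Sun
from_surface = (0.07 + 0.23 + 0.15) * SOLAR_IN   # 45% absorbed from the surface
total = from_sun + from_surface

print(round(from_sun, 1), round(from_surface, 1), round(total, 1))
# -> roughly 75.0, 153.5 and 228.5 Wm-2, i.e. 10.5 Wm-2 more than the 218 Wm-2 in the Blue Box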
Kevin Trenberth, John Fasullo and Jeffrey Kiehl in 2009 published an Earth Energy Budget which agreed fairly well with the NASA budget, but listed the atmospheric window energy as 40 Wm-2 in order to "close" the energy budget. This "closing" is basically sticking something some place to make the books balance.
3 The analysis by Trenberth and Fasullo (2010b) suggested that the radiative imbalance at the top of the atmosphere measured by CERES has been increasing by as much as 1 W/m2-dec. Because of the limited heat capacity of the atmosphere such an imbalance would imply a large change in ocean enthalpy and/or surface ice amount, neither of which is observed. However, Trenberth and Fasullo (2010b) made use of preliminary CERES data for the time period 2005-2010, over which the striking trend in radiative imbalance was noted, and failed to account for uncertainty in their estimated trend.
That footnote is from Observing and Modeling Earth's Energy Flows, by Bjorn Stevens and Stephen E. Schwartz, which is a good analysis of the current state of the Earth's Energy Budget.
In my opinion, the controversy is evidence that the system is not being properly modeled. The reason it is not being properly modeled is that the system is a group of several systems. The oceans, being the largest and least understood of those systems, are likely the culprit causing the confusion.
The oceans are capable of dissipating energy as mass, ice or water, onto land and regaining that energy years, decades, even thousands of years later. Assuming that this is not a natural process with a longer term natural frequency is probably a foolish assumption. With little more than 30 years of satellite data, which is improving with each generation, that 10.5 Wm-2 difference in the NASA budget is an honest representation of the state of the science at the time.
That gives the Blue Box in the drawing a new significance. With ~218 to 238.5 Wm-2 absorbed by the atmosphere and ~238 Wm-2 emitted from the atmosphere, there is a 20 Wm-2 range of uncertainty that could be a range of control. Considering the average ocean surface temperature of 294.25 K with an effective radiant energy of 425 Wm-2, that could indicate a range of 415 Wm-2 to 435 Wm-2 for the average energy range of the oceans. That is roughly a 3.5C range of allowable average sea surface temperature.
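That temperature range follows from the Stefan-Boltzmann law; a quick check of the numbers quoted above:

SIGMA = 5.67e-8                       # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(temp_k):
    """Blackbody emission for a surface temperature in kelvin."""
    return SIGMA * temp_k**4

def temp(flux_wm2):
    """Inverse: temperature implied by a blackbody flux."""
    return (flux_wm2 / SIGMA) ** 0.25

print(round(flux(294.25), 1))                 # ~425 Wm-2 for the average ocean surface
print(round(temp(435) - temp(415), 2))        # ~3.5 K spread for the 415-435 Wm-2 range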
According to the NASA MODIS AIRS Comparison, there is a similar measured difference between the "bulk" SST used historically and the "effective" SST that the satellites see. What the satellites "see" would be what greenhouse gases would "see". This indicates what appears to be a rather complex climate control mechanism for the oceans.
That range is totally within reason if the above 60,000 year temperature reconstruction is accurate. I believe most of that 60,000 years could be considered "natural" variation.