New Computer Fund

Friday, December 27, 2013

Basic Climate Model Resolution for the New Year

New Year's resolutions are great!  No one, not even you, actually expects you to keep them, so you can start your year out on a nice warm and fuzzy rationalization note.  Nothing is quite as comforting as a rationalization.

Since I started the "Trash" Blog, which is just a place to post thoughts and crazy ideas, I have been pondering what the minimum requirements are for a reasonable climate model.  Now that I have a better idea of what that requires, it is as good a time as any to jot those thoughts down.

My minimum requires three meridional zones, 90S-42S, 42S-42N and 42N-90N, with the actual pole limit realistically closer to 75 to 85 N/S due to issues with past instrumentation.  For the atmosphere there need to be at least three vertical zones, Sea Level, Atmospheric Boundary Layer (~3000 meters) and Stratopause (~50 km), each with the three meridional zones.  These need to be matched with three ocean zones: the 42S-42N thermocline (~15 to 20 C degrees, or the 100m bulk layer), the second thermocline (~8 C, or the 100m-700m ocean layer) and the third thermocline (~4 C, or the 700m to 2000m ocean layer).

That would be a fairly complex model with 9 atmospheric boxes and 9 ocean boxes.  The biggest problem would be the ocean boxes, which have very poor coverage and a high margin of error.
These layers don't change very quickly, so that can possibly be worked around.
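
Just to keep the bookkeeping straight, here is a minimal Python sketch of how the 18 boxes might be enumerated.  The zone boundaries and layer labels come from the description above; the data structure itself is just one convenient way to lay it out.

    # Hypothetical bookkeeping for the 18-box layout described above.
    # Zone boundaries and layer labels follow the text; everything else
    # is just one convenient way to enumerate the boxes.
    MERIDIONAL_ZONES = ["90S-42S", "42S-42N", "42N-90N"]
    ATMOS_LAYERS = ["Sea Level", "Boundary Layer (~3000 m)", "Stratopause (~50 km)"]
    OCEAN_LAYERS = ["0-100 m (~15-20 C)", "100-700 m (~8 C)", "700-2000 m (~4 C)"]

    atmos_boxes = [(z, a) for z in MERIDIONAL_ZONES for a in ATMOS_LAYERS]
    ocean_boxes = [(z, o) for z in MERIDIONAL_ZONES for o in OCEAN_LAYERS]
    print(len(atmos_boxes) + len(ocean_boxes))  # 18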

The final model would be very similar to the last version of the Static Model that seems to have gotten close to predicting the impact of the stratospheric Brewer-Dobson circulation.  Now that I am more familiar with the "Climate" jargon stolen from weather guys and made up on the fly, I may even be able to communicate with some of those guys.

The biggest stumbling block has been latent and sensible heat transfer, which always has the option of heading to space, to land, or back to the oceans.  The choice of zones is partially based on that problem.  Since 42S-90S and 42N-90N have a huge difference in land/ocean ratio, teasing out some of the latent/sensible transfer destination might be simplified using SST and Tmin versus Tmax.  Tave appears to be close to useless without considering the maximum and minimum components.

This 18 box model will be far from the final version.  It will need to be expanded zonally, requiring 36 to 54 zones, which I think could be a great resolution for next year. 

Now that I have that out of the way, don't expect much because this is just a resolution which no one ever really keeps :)


Wednesday, December 25, 2013

Impact of Asymmetry on Forcing Correlations

Food for thought for the faithful who have written off solar forcing: asymmetry and huge thermal inertia can do weird things.  One is that volcanic impacts can appear to lead forcing, and another is that the same thing can happen with solar forcing.

When you compare the Svalgaard solar reconstruction to the Climate Research Unit diurnal temperature range anomaly differences between the hemispheres there is a hint of something, but not really much to write home about.  Even Leif Svalgaard notes that sunspot number derived Total Solar Irradiance (TSI) is weak and questionable.  Sunspot counts do vary and the actual cause is still a bit of a mystery.  There is some relationship between SSN and climate in most "Eyeball" comparisons, but when you try to quantify the impact, there is never much that is definitive. 

One of the biggest mysteries is the lead/lag relationships.  Earth climate obviously cannot cause solar sunspot variation, and there is no consistent internal lag that correlates well enough to impress linear minded analysts.  There is so much going on in the complex ocean energy transport mechanics that solar/climate links can be extremely frustrating.  Most of that is likely due to common orbital forcings that, because of differences in the internal heat transport of the Earth and the Sun, cause similar but different responses in each object.

Earth has common internal response times that are not fixed.  Variations in surface winds and polar sea ice extent vary the internal poleward heat transport.  There is a roughly 28 month settling time for meridional imbalance that leads to the Quasi-Biennial Oscillation, and a roughly 8.5 year bulk ocean mixing layer "charging" time constant for Earth, versus a 9.5 to 12 year solar pseudo-cycle associated with the Sun's changing magnetic field.  Tidal forcing, even very weak forcing, can alter the mixing efficiency of both the Earth and the Sun as they seek meridional "equilibrium", only to have the weak forcing change.  Both are constantly "hunting" for some happy medium they can never obtain, or maintain if they did.

Using the "normal" concept of "forcing" and "feedback" that emphasizes energy balance in purely radiant or thermal terms, the mechanical forces that actually move the thermal mass tend to be ignored or minimized when they have a much larger impact that deserves more respect.  Just because a correlation doesn't make "sense" doesn't mean it doesn't exist.  It just means the analysis has gaps that need to be filled.  The impact of asymmetry on the response to a common weak forcing is a much more interesting puzzle, and assuming it away will never solve it.

Perhaps this new year will lead to new approaches to learn instead of assume. 


Saturday, December 14, 2013

So What is so Hard About Polar Heat Transfer?

Everything.  The poles have what should be called dynamic insulation, the polar vortexes.  A stable polar vortex limits the amount of energy that is lost to space via the super cold high Arctic (Antarctic) temperatures.

An atmospheric vortex, like tornadoes, hurricanes, dust devils or a massive polar vortex, is sustained by energy flow.  Because of the Coriolis effect, outflowing energy produces a counterclockwise rotation in the Northern Hemisphere and clockwise flow in the Southern Hemisphere.  If the energy is flowing in, the rotation can reverse.  Since most vortexes maintain the "normal" spin orientation, nearly all are a result of energy loss from the surface to the atmosphere and beyond.

The Antarctic Polar Vortex (AnPV) is much more stable than the Arctic Polar Vortex (APV) thanks to the higher altitude land and ice at its base, while the APV sits on less stable sea ice at sea level.  Change or reduce the direction or rate of energy flow and the vortex will break down, generally into smaller and more numerous vortexes.  These breakdowns change, or are changed by, the heat flow characteristics in the region of the vortex.  A hurricane is stronger over fresh water than salt water because there is more energy available for transfer.  Tornadoes are stronger and more numerous in wet springs than dry springs because the water in the soil provides more energy for transfer.  The APV breaks down more often than the AnPV because the amount and direction of energy flow can change, mainly in late fall or early winter. 

I did an estimate of the impact of a major APV breakdown that lasted a little over two months, and the total energy lost was on the order of 1x10^22 Joules, or about as much as the "unprecedented" rate of annual ocean heat uptake.  When there is a breakdown, it is caused by or causes a Sudden Stratospheric Warming (SSW) event.  Since the root cause is the rate of energy transfer, it can be difficult to tell what caused what.  Arctic sea ice that is unstable or thin increases the odds that the inflow of energy at the base of the vortex can abruptly change.  Heat transfer from lower latitudes can impact the direction and rate of energy flow causing the breakdown.  Combine the two and you have an excellent puzzle.
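
For scale, here is the back-of-envelope arithmetic; the ~65 day duration and the Earth surface area are my round numbers, not part of the original estimate.

    # Back-of-envelope scale check for the ~1e22 J breakdown estimate.
    energy_j = 1.0e22            # estimated energy lost, Joules
    seconds = 65 * 86400         # "a little over two months", assumed ~65 days
    earth_area_m2 = 5.1e14       # Earth's surface area

    watts = energy_j / seconds
    print(watts, watts / earth_area_m2)
    # -> ~1.8e15 W, or roughly 3.5 W/m^2 averaged over the whole surface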

Vortexes are considered to be chaotic because they are difficult to predict in size, strength, location and just about any other metric you can think of.  Hurricane intensity models suck far more than hurricane track models because it is not easy to model a vortex.  Since vortexes have a huge impact on "global" climate, it stands to reason that "climate" prediction is going to be more difficult than some would expect.

The polar vortexes are just now being understood for being the prediction PITAs that they are.  Arctic Winter Warming (AWW) is considered by some to be a sign of "global" warming when it is more likely a sign of "global" cooling, at least on some time scale.  The polar regions can either be in-phase with "global" temperatures, a sign of warming or cooling, or out-of-phase, also a sign of warming or cooling, depending on the situation.  Mixed phase clouds in the Arctic increase the energy available to be lost to space if inside the vortex, or recycled to the moist atmosphere if outside the vortex.  Since the water under sea ice is warmer than the air above it, breaks or open areas in the sea ice increase the energy available for transfer.  The vortex can break down with too little, too much or the wrong direction of available energy flow. 

In the 1970s there were fears of "Global Cooling" because the northern polar jet stream shifted.  The stability of the APV is one of the factors that determines the shape of the jet stream, which is also a very large vortex.  When the jet streams are stable, as in confined to a narrower latitude band, there is generally warming.  When the jet streams become unstable and wander into lower latitudes there is generally cooling, as occurred in the 1970s.

Some think that warming in one location is offset by cooling in another location, resulting in a net zero energy loss/gain.  Nothing could be further from reality.  The energy per degree of temperature changes with the temperature, so warming in the highest latitudes is almost always a sign of net "global" energy loss.  Warming inside the dynamic boundaries created by the polar/jet stream vortexes results in an increase in energy retained.
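
One way to see it is with the Stefan-Boltzmann relation: the emission change per degree, 4*sigma*T^3, is roughly half as large at polar temperatures as at tropical ones, so a degree of polar warming does not balance a degree of tropical cooling.  A quick sketch:

    # Marginal emission per degree, dF/dT = 4*sigma*T^3.
    sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    for t_k in (240.0, 300.0):   # roughly polar winter vs. tropical surface
        print(t_k, 4.0 * sigma * t_k**3)
    # -> ~3.1 W/m^2/K at 240 K versus ~6.1 W/m^2/K at 300 K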

Think of the polar vortexes as Air Curtains or Air Doors that help retain energy. 


Monday, December 9, 2013

Why be Skeptical?

This is a quickie for curiousnc, obviously an alias, who wonders how to confront the more devout minions of the Great and Powerful Carbon.  First, there is nothing "anti" science about being skeptical.  There is actually more that is "anti" science about being a naive believer of everything "science" like it is some replacement for a lost divinity.  Scientists are people, and even very smart people screw up.

CO2 is a greenhouse gas because it has a C, Carbon, with two Os, Oxygen.  That three atom structure allows more motion or vibrational states which can store energy.  It doesn't store energy very long, but since it can store more energy it is more likely to absorb and emit photons of energy that "fit" the different energy states the configuration of the molecule allows.  More atoms, more energy states, generally.  H2O is a greenhouse gas as well as CH4, O3 and a number of other gases.  It is "estimated" that a doubling of CO2 and equivalent gases will produce approximately 3.7 Wm-2 of additional resistance to atmospheric heat loss.  As far as CO2 equivalent "dry" gases go, there is not much of an argument about the physics of the process.  There is a great deal of uncertainty and debate over how much "surface" temperature increase a doubling will cause.  The chart above shows dT(CO2) 1, or how much CO2 all by itself "should" impact climate, with a variety of "surface" temperatures.  The CO2 impact and all of the temperature series are adjusted to the same 2000 to 2012 baseline.  This allows you to back track what CO2 should have caused. 
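
For reference, the dT(CO2) curves use the standard logarithmic form, dT = S * ln(C/Cref)/ln(2), with S the assumed sensitivity per doubling.  A minimal sketch; the 280 ppm reference here is just illustrative, since the charts are adjusted to the 2000-2012 baseline:

    import math

    def dt_co2(c_ppm, c_ref=280.0, s=1.0):
        # S * ln(C/Cref) / ln(2): S degrees C per doubling of CO2 equivalent
        return s * math.log(c_ppm / c_ref) / math.log(2.0)

    for s in (1.0, 3.0, 4.5):    # the three sensitivities discussed here
        print(s, round(dt_co2(395.0, 280.0, s), 2))  # ~0.50, 1.49, 2.23 C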

You will notice that the dT(CO2) 1 C per doubling of CO2 equivalent concentration just hits the peaks of the various "surface" temperatures.  Looking at the impact this way, you would not be including volcanic, solar and other possible "cooling" influences that could bias your estimate.


This chart uses dT(CO2) 3 C per CO2 equivalent doubling, which used to be the mid-range impact expected due to a doubling of CO2 equivalent greenhouse gases.  This curve barely hits the minimums of the "surface" temperatures.

This is the same chart with dT(CO2) 4.5 C per doubling of CO2 equivalent gases, the former high range of climate sensitivity.  If there was no long term delayed impact of CO2 forcing, then this estimate indicates that the previous ice ages are the desired "normal" temperature for our planet Earth's "surfaces".  The minions of The Great and Powerful Carbon prefer to use the 3 and 4.5 C per doubling impacts even though the "generally" accepted range of estimated impact has decreased to 1.6 C per doubling and appears to be headed to 1 C per doubling or less, because the Human scientists overestimated the impact and really don't care to admit it. 

That is why some are skeptical. 

Saturday, December 7, 2013

Just for Fun - Sea surface Temperature Versus Lower Troposphere

If you were throwing billions and billions of tons of CO2 into the atmosphere, knowing that CO2 does have some impact on the energy containment of the atmosphere, you might suspect that you could measure the surface temperature and the temperature of the lower troposphere and "see" some impact that would let you know right off the bat how much damage you are doing.


Reynolds Optimally Interpolated SST is available on the KNMI Climate Explorer website along with the Remote Sensing Systems (RSS) Lower Troposphere data in actual temperatures, Reynolds in degrees C and RSS in degrees K, which should mean that you can actually estimate energy based on the temperatures.  Of course there are margins of error that should be considered, but just for grins let's ignore that for a moment.  Above I have plotted the difference in the two data sets, which is degrees C or K depending on your mood.  After using a 27 month three layer cascade filter instead of removing the seasonal anomaly, that is what I got.  From 1995 to present the average difference varies by 0.128 C degrees.  That is pretty impressive, but both are satellite products, one trying to measure the ocean surface skin layer and the other the "lower" troposphere, which based on the average or environmental lapse rate would be 21.2/6.5=3.26 kilometers, or just above the marine atmospheric boundary layer at 2.5 to 3 km depending on who you like to reference.  The average temperature of the RSS version of the LT is just below zero C at 272.4 K degrees and the 60S-60N SST average is 20.37 C degrees.  The SST is whatever mask Reynolds uses and the RSS is truly "global", including land and ocean areas according to KNMI.
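
For anyone curious, the cascade filter is nothing fancy, just repeated passes of a centered moving average.  A minimal sketch, assuming a 27 month window and three passes:

    import numpy as np

    def cascade_filter(x, window=27, passes=3):
        # Repeated centered moving averages; edge effects are ignored here.
        kernel = np.ones(window) / window
        y = np.asarray(x, dtype=float)
        for _ in range(passes):
            y = np.convolve(y, kernel, mode="same")
        return y

    # usage: smoothed = cascade_filter(sst_minus_lt, window=27, passes=3)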

Using the standard Stefan-Boltzmann equation, this chart shows the converted energy differential between the two series.  I started this in 1982, the first full year of data, to show the inverse response from 1982 to 1998.  These were years with volcanic perturbations which, based on the current radiant forcing models, should have caused cooling.  The lower troposphere and mid-troposphere should be warming faster than the surface provided CO2 and H2O feedbacks are causing the climate change as advertised.
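
The conversion is just F = sigma*T^4 applied to each series before differencing.  For the two averages quoted above, ignoring emissivity:

    # Ideal black-body conversion of the two quoted averages (no emissivity).
    sigma = 5.67e-8
    f_sst = sigma * (20.37 + 273.15)**4   # 60S-60N Reynolds SST average
    f_lt = sigma * 272.4**4               # RSS lower troposphere average
    print(f_sst, f_lt, f_sst - f_lt)
    # -> ~420.9 and ~312.2 W/m^2, a ~108.7 W/m^2 differential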

Just to make it clear that there is an inverse relationship early, which appears to become in phase after the 1998 El Nino, this is the plot.

The reason I dug out this data was for the atmospheric R-value which I have posted on in the past.  That R-value is about 0.192 and is pretty much limited by the specific heat capacity of the atmosphere and the force of gravity.  The atmosphere can't "store" energy in the normal sense.  It can "gain" energy if the oceans warm and the rates of deep convection and polar advection stay pretty close to the same, but it is not just going to "store" energy.  So the R-value I thought might be of some use until I noticed how much imbalance there is between the hemispheres.


So this is the atmospheric R-Value between the Sea Surface and just a little above the MABL which I used in my estimates.  It would be nice to have a much longer data set to play with, but using what we have I thought it would be fun to share. 


UPDATE: In case you were wondering,

Since the coverage at the poles is limited, these cover the 50 to 90 latitude bands, though obviously there is not actual SST all the way to the poles.  The southern region is rock solid stable and the northern region is trending downward.  R-value, in case you were wondering, is in K/Wm-2, which is the same as "sensitivity" since CO2 is basically just adding some resistance to an already resistive atmosphere.  If you invert the R-value you have a U-value, which is typically used for lower values.  1/0.237=4.2 Wm-2/K, so if I add 3.7 Wm-2 of resistance/insulation then 3.7/4.2=0.88, or the temperature differential should increase by 0.88 K degrees.  This however is for the actual surface to the approximate atmospheric boundary layer (ABL) at ~3000 meters.  That surface, the ABL, is a new reference layer for a less moist atmosphere, and the RSS data provides another reference temperature, ~0 C degrees; you just need another "surface" as a reference for the next atmospheric layer. 
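
Spelling out the R-value to U-value arithmetic:

    r_value = 0.237                 # K/(W/m^2), surface to ~3000 m ABL
    u_value = 1.0 / r_value         # ~4.2 W/m^2 per K
    added = 3.7                     # W/m^2 of added resistance/insulation
    print(u_value, added / u_value) # -> ~4.2 and ~0.88 K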

Shaken or Stirred

The first three "real" laws of Thermodynamics are KISS, FOR and ASSUME.  I am using a pair of Marquis by Waterford (TM) Martini pitchers with the classic glass stirrers for the KISS law to try yet again to explain the FOR law.

If both pitchers have the same volume of Vodka at the same temperature and both are stirred at the same rate, the "average" temperature of each pitcher will change at the same rate and approach the same temperature.  That temperature depends on the Surface of Reference and the thermodynamic conditions that apply to each potential cocktail. 

If the orientation of the Marquis by Waterford (TM) pitchers makes a difference, I should consider that difference.  In this case the potential cocktails might spill out, so unless there is a glass below, there will be a loss of potential beverage.  Stirring after the beverage is lost doesn't do much good. 

ASSUMING there is actually something in the pitchers, how well the stirrers are used determines some part of how efficiently and quickly the beverages are ready to serve.  In case you are a Bond fan, you can shake the pitchers instead of stirring. 

Using a common frame of reference (FOR) keeps the problem simple, stupid (KISS), so there is less chance of you making an ass out of you or me (ASSUME). 

Thursday, December 5, 2013

Once More into the Basic Non-Linear Dynamics Breach

Come on gang, it is not that complicated.  The Earth is not a rubber ball bouncing around in space.  It is a planet with various fluids at various densities, some with latent energy laced phase changes.  You get interesting internal pseudo-oscillations as the planet attempts to find a new happy place.  You can call that happy place anything you like, but it is not really "equilibrium" and it is not really a steady state.  Quasi-steady state works for some, conditional equilibrium for others, but for now let's just call it the happy place.

If I grossly over-smooth the various ocean basin sea surface temperatures after removing my estimate of combined atmospheric forcing and compare them to the Oppo et al. 2009 Indo-Pacific Warm Pool reconstruction, also detrended, you see what is called a weakly damped decay curve.  You whack the system hard and it takes time to re-discover its happy place.  The new happy place doesn't have to be the same as the old happy place.  It is a big planet; it will call the happy place shots.

If you ignore its old happy place, you are assuming you are smart enough to know where a 4.5 billion year old planet will find its new happy place and when.  I don't mean this in a bad way, but unless you have error bars as large as the past standard deviations in happy places you will most likely be wrong.  The "average" happy place deviation is +/- 1.25 C degrees for 70% of the planet's watery surfaces. 


Even though the water surfaces are likely the best indicators of future happy places, folks are fond of the "Global" mean surface temperatures so let's take a look.


If you assume that "climate" has a transient response (TCR) of 2.5 C per doubling of CO2 equivalent forcing, your "fit" looks something like the darker blue dT(CO2) 2.5 curve, and if you subtract GISS and HADCRUT4 from your TCR you get the two residuals shown.  Follow the dark blue curve to the left and you can see that means you are assuming that something less than about 0.7 C degrees below the 1951-2012 mean is "normal".  If you use 0.8 for your TCR, you are assuming that about 0.2 C, maybe a little lower, is "normal" relative to the 1955-2012 baseline. 

If you assume that when the mean of the residuals is zero you are close to the TCR, then you have a "normal" for a 1.88 C TCR of about 0.5 C less than the 1955-2012 baseline average.  Note that the 0.8 C curve still hits about -0.2 C.  0.8 C is on the low side of the mythical "no feedback" climate sensitivity based on the estimates of the actual "average" surface temperature, DWLR, the "average" temperature of the world's oceans, the "ideal" black body temperature of Earth, the latent heat of fusion of water and the approximate mean temperature of the marine atmospheric boundary layer, with the average "global" diurnal temperature range thrown in for good measure.  In other words, 4C - 334.5 Wm-2 is very likely one of those "strange attractors" the chaos math geeks yak about. 

There may be nothing at all special about 4C, however, to reach 5C will take on the order of 250 to 500 years which some think is more than enough time to resolve a few of the pesky details.  One of those details is just how much precision do you think you can get estimating what a planet is going to do looking for its new happy place?

Was that any better?


Wednesday, December 4, 2013

Simple Circuit Models

Simple RC circuits are great for modeling a bunch of processes.  I haven't touched on this much because I think you need to be careful about what you are actually trying to model before you start going nuts.  But an example of using a "pull up" resistor RC circuit was mentioned.  A "pull up" or "pull down" resistor is used in logic circuits to make sure that the output voltage is easily recognized as a "yes" or "no".  If you have a 3 volt circuit, a "no" could be less than 1 volt and a "yes" greater than 2 volts, with between 1 and 2 volts something that is not recognized.  That is just an example, but you need to make sure the signal is not confused due to power supply drift or changes in load which affect the voltage.

A "pull up" would not be my first modeling choice but it has some advantages.  Since R' and R are in series between V+ and V- (the right hand diagram), the current through R' and R would be equal once the capacitors charge.  Then the current through R' and R would be (V+ - V-)/(R' + R).  If you happen to know R', then the voltage across R' is V(R') = R'(V+ - V-)/(R' + R).  The drawback is that if the capacitors are not charged, you need more information.  This is the kind of circuit model you would use if you are sure that there is a change in V+ that is causing most everything.

If you assume that 3.7 Wm-2 of "current" will produce 1.5 C of additional "voltage", then 1.5/3.7 = 0.40, which is approximately (R' + R).  If you know or think you know the values of the capacitance, then you can work out some time constant for the entire circuit to settle out or reach steady state.

The voltage divider circuit on the left is more my idea of what should be used.  Vref is the temperature at some surface, with R'C' a portion of the atmosphere and RC a portion of the ocean.  You don't really know anything other than that if the system is in a steady state the upper V- will equal the lower V-.  Then you can assume a value for V-, like say 4 C degrees which I use in my static models.  Same principle, just with more resistors and capacitors.  Then with 4 C, which is 277.15 K degrees with an effective S-B energy of 334.5 Wm-2, I can vary Vref and find a range of values.  For example, if the "average" ocean surface temperature is 18.5 C (291.65 K @ 410.2 Wm-2) I would have R'=R=~0.19 K/Wm-2, which, if the average surface temperature of the ocean was actually 18.5 C and I can neglect that pesky ~0.926 factor in the S-B equation, would mean that 3.7 Wm-2 times 0.19 K/Wm-2 equals 0.70 C, which would be the "transient" sensitivity.  If nothing else changes, then the "forced" surface at 4 C, 334.5 Wm-2 would increase to 334.5+3.7=338.2 Wm-2, which would be 4.75 C, requiring the lower surface to respond over however long it takes to charge the lower capacitance to 4.75 C degrees, with the same caveat about the pesky ~0.926 in the S-B equation.
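
A sketch of that arithmetic, using the inverse Stefan-Boltzmann relation T = (F/sigma)^0.25 and neglecting the pesky ~0.926 factor:

    sigma = 5.67e-8

    def flux(t_c):                       # degrees C -> W/m^2, ideal S-B
        return sigma * (t_c + 273.15)**4

    def temp(f_wm2):                     # W/m^2 -> degrees C, ideal S-B
        return (f_wm2 / sigma)**0.25 - 273.15

    f_ref, f_sink = flux(18.5), flux(4.0)     # ~410.2 and ~334.5 W/m^2
    r = (18.5 - 4.0) / (f_ref - f_sink)       # ~0.19 K/(W/m^2)
    print(r, 3.7 * r)                         # "transient" response, ~0.7 C
    print(temp(f_sink + 3.7))                 # forced 4 C surface -> ~4.75 C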

18.5 C is a fair estimate for the "average" SST unless you try to include Sea Ice Surface (SIS) in your estimate.  Also, I noted in the static model that the SH ice free SST is closer to 17 C and the NH SST is closer to 20 C, meaning there are two lower capacitors to consider and the larger of the two will win.  There is a small difference in the R-value using the 17 C temperature as Vref, but the big difference is time.

My thoughts on this stay with the ice free oceans, which have a moist atmosphere with an average dew point temperature of approximately 4 C degrees, meaning that the marine atmosphere would be likely to have clouds, saturated to super-saturated water vapor and a higher moisture content in general.  With the 18.5 C temperature (~410 Wm-2) and the "sink" temperature at ~4 C (334.5 Wm-2), that difference is 75.5 Wm-2.  The oceans obviously are not absorbing 75.5 Wm-2 to any depth, but the latent energy released is in that ballpark.  This to me implies that simple models are great for ballparks, but with R-values so low and time constants likely extremely long, they are not going to prove much to anyone without some time to verify which approach is the better approach.  Still, the inverse of 0.19 K/Wm-2 is 5.26, which is close to the saturated moist adiabatic lapse rate and that magic 5.4 multiplier for the Arrhenius CO2 forcing equation.  The 0.4 is more than twice as much, pretty much like the climate models which seem to be over-estimating "sensitivity". 

Tuesday, December 3, 2013

Climate "Sensitivity" for the Instrumental Period

OOPS - Spreadsheet error on the BEST Tmax Tmin charts - see comment.  2000 years of Climate is a quick reverse splice to paleo using only one paleo reconstruction, for the Indo-Pacific Warm Pool.  Based on that quick reconstruction, "sensitivity" using ln(CO2/CO2ref) as a reference appears to be ~1.6 C degrees.  Only one paleo reconstruction, along with BEST and CET temperatures scaled to global mean sea level and the Indian Ocean SST, was used since the paleo and longer term temperature records just provide an estimate of the mean.  So let's look at some other instrumental data.

Using the dT(CO2) as a reference compared to the ERSSTv3b Northern Hemisphere SST, a "sensitivity" of 1.23 C per doubling produces a zero mean for the residual.
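
Since dT(CO2) is linear in the assumed sensitivity, zeroing the residual mean is just a ratio, S = mean(T)/mean(ln(C/Cref)/ln(2)).  A minimal sketch, with illustrative variable names:

    import numpy as np

    def fit_sensitivity(temp_anom, co2_ppm, co2_ref):
        # dT(CO2) = S * ln(C/Cref)/ln(2) is linear in S, so the S that
        # zeroes the mean residual is just a ratio of means.
        doublings = np.log(np.asarray(co2_ppm) / co2_ref) / np.log(2.0)
        return np.mean(temp_anom) / np.mean(doublings)

    # usage: s_nh = fit_sensitivity(nh_sst_anom, co2_monthly, co2_ref_ppm)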

The Southern Hemisphere SST requires a "sensitivity" of 1.6C per doubling to zero the residual. Now here is the fun part.

The main perturbation in the residuals bottoms out around 1910.  The less sensitive NH rebounds more quickly and synchronizes with the SH for a peak in 1941, then there is a weak oscillation. 

Comparing with volcanic forcing there is obviously a correlation, but the internal mixing is more than just a little complex.  An 8 year lag pretty well lines up the perturbation, but the different recovery timings can almost erase an impact and shift the lag. 

With the land data there is another issue related to what exactly Tave means.  To use the BEST and CET data as a bridge to paleo reconstructions I removed part of the "land" amplification by scaling the data to a common 1955-2011/12 baseline.  People relying on the "land" for a higher climate sensitivity will not like that.  So let's look at BEST Tmax versus Tmin.

Using the 1.23 "sensitivity" based on the northern hemisphere oceans I get this "fit".  The zero mean for Tmax is about 0.15 C below what should be the mean and the Tmin zero is about 0.8 C above what should be the mean.  Pretty obviously there is a difference between the Tmax and Tmin responses due to "other" things.


Smaller chart below replaced with above due to spreadsheet error.
Comparing the residuals after removing the ~1.23 C CO2 reference, there is a nice long recovery period that can be due to land based ice melt, land use changes or instrumentation issues, but likely not due to direct atmospheric forcing.  Tmin contributes 50% to Tave land, which contributes 30% to the global mean temperatures.  That is ~2 C degrees since 1950, or a 1 C impact on Tave and a 0.3 C impact on the global T mean.  That doesn't include the ~0.15 C that may be due to a longer term secular rise in ocean temperatures that may be attributed to the period formerly known as the Little Ice Age.  Not too surprisingly, 0.3 C is twice 0.15 C, which is consistent with "normal" land amplification based on specific heat capacity differences.  Unlike the comparison of the actual diurnal temperature range, this residual comparison doesn't have a trend reversal in ~1985; the reversal is much smaller, almost flat. 

For the higher sensitivity fans.  Note: the background Tmin is inverted in the following chart - corrected below.

Focusing on Tmax produces a sensitivity of 2.2 C.  For the natural variability/LIA fans.

Zeroing the Tmin residuals results in a negative 4.1 C sensitivity.  Why?  Because there was a spreadsheet error :)  It is actually a +4.1 sensitivity; the small chart is replaced with the larger above.

Because Tmin is amplified by something other than atmospheric radiant forcing.  By scaling Tmin by 0.5 there is nearly a perfect fit with the 45S-45N SST until 1900, when the reliability of temperature records degrades.  Comparing a Tave to SST where only the Tmax portion is related to atmospheric forcing produces a fruit salad.

So when comparing Instrumental period data to climate "sensitivity" there is more to be considered.

Update:  It took a while to beat OpenOffice into submission but this might be of interest to some of the Stadium Wave fans.

This is 10 degree latitude band SST using the ERSSTv3b Climate Explorer mask with the "average" sensitivity removed using the 1951-2012 CO2 reference.  I stopped at 45S and 45N since this is the main energy band.  The smoothing is an 11 year cascade to get rid of most of the ENSO type wiggles. 

Monday, December 2, 2013

2000 years of Climate - Correlations

This is one of those posts that will grow for a while.  There was a question about how well BEST and CET correlated in the bassakwards climate reconstruction in the 2000 years of Climate post.  The eye and the math can be fooled when it comes to quantifying a correlation.  One big issue is that CET and BEST "Global" have different smoothing due to the different areal coverage and the changing number of instruments included.  You shouldn't smooth any time series before estimating a correlation, but they already have been due to the nature of the beasts.  So I think you have to get creative, to a point.

One handy tool is CO2 response.  Since the common baseline period is 1955-2011 due to the inclusion of the Ocean Heat Content data, here I use the average CO2 concentration for the baseline period as a reference, then adjust the CO2 "gain" so that the means of the overall period for CET, BEST and dT(CO2) 1.6 are equal.  dT(CO2) is the natural log of the CO2 concentration divided by the reference concentration, that quantity divided by ln(2), times 1.6, which is 1.6 C per doubling of the reference concentration.  The correlation of dT(CO2) 1.6 with CET is 0.6 and with BEST 0.76, with the reminder that BEST includes process smoothing that CET doesn't.  Both of the time series were treated to a 27 month smooth just to see what is happening, plus they were scaled to the common 1955-2011 baseline in the reconstruction.  Lots of caveats, but the 1.6 C agrees well with most of the newer "global" climate sensitivity estimates.

There is another consideration needed when using CO2 as a reference.  This correlation uses CO2 concentration alone, while there are other greenhouse gases and other influences on atmospheric forcing included in the reference which have to be isolated somewhere, probably by someone a hell of a lot smarter than me.  However, until that day arrives, I think that directly comparing CET to BEST more than just by eyeballing is worse than useless.  A better way is to compare individual reconstructions and time series to what should be common influences.  This is quite a bit contrary to "normal" convention, since smoothing is used here as a tool to maximize correlation in order to determine the impact of smoothing by other processes, "global" versus regional averaging, paleo deposit times, differences between atmospheric and ocean response and likely a lot more.  The methods need to be tailored to the data, meaning that checks and balances have to be tailored to the method if you want to avoid rabbit holes.

This particular method appears to be a keeper, but time will tell.

This first update shows what happens when you include another common influence.

Using the same CO2 reference but including the Crowley and Unterman volcanic forcing estimate requires increasing the reference CO2 impact from 1.6 to 2.2 C per doubling.  At this point the CET correlation is the same, indicating that the CET region, thanks to ocean influences, has less volcanic impact and that BEST global land has a greater volcanic impact.  Perfectly logical because of the differences in heat capacity.  While my treatment of the C&U volcanic forcing deserves inspection, the 0.6 C difference in the reference CO2 impact implies that volcanic forcing had approximately a 0.6 C degree impact on "global" land temperatures.  Volcanic forcing has a very interesting impact on the oceans due to "memory".  In the North Atlantic, where the Thermohaline Circulation (THC) is mechanically forced into the narrowing ocean area, volcanic forcing can appear to lead or lag response and the initial response can be in phase or anti-phase with expectations.  Anet et al. have a paper in discussion at Climate of the Past on the impact of solar and volcanic influences on tropospheric temperatures, well worth the time for those interested.  It is in discussion though, and the whole subject is liable to be in discussion for some time.  For what I am doing though, knowing that volcanic forcing has different impacts on different time series is more than adequate. 

This second update is more for the confused about correlation crowd. 

This is the CET data in monthly form and with overdone cascade smoothing.  The green CET has a 0.19, which is the correlation with CO2 from the 1753 start to 1910.  The overall correlation is 0.20, so there is not much difference between the first and last parts of the data.  In blue, the 0.05 is the correlation up to 1910 and the 0.80 is the correlation after 1910.  80% correlation sounds great but really doesn't mean much.  In yellow is the residual difference between the dT(CO2) curve and the smoothed CET.  When the mean of the residual is zero, then the ln(CO2) curve should be about as close a fit as possible.  I could use the raw CO2 concentration numbers and get the same correlation, but not the zero mean "fit" without some scaling, which the ln(CO2) curve provides or a scaled anomaly of the CO2 concentration would provide.  To get a higher correlation I just need to increase the slope.  So if I cut off the "pause" at the end of CET, I would get a better correlation and a slightly higher "sensitivity" to CO2.  I could also change my correlation start date to a deeper valley like ~1890 and get a better correlation and higher "sensitivity". 
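
For those who want to reproduce the split, here is a sketch of correlating the pieces before and after 1910 separately; the variable names are illustrative, not my actual spreadsheet.

    import numpy as np

    def split_correlation(years, series, reference, split_year=1910):
        # Pearson correlation with the reference curve, before and after a split.
        years = np.asarray(years)
        series = np.asarray(series, dtype=float)
        reference = np.asarray(reference, dtype=float)
        early = years < split_year
        r_early = np.corrcoef(series[early], reference[early])[0, 1]
        r_late = np.corrcoef(series[~early], reference[~early])[0, 1]
        return r_early, r_late

    # usage: r1, r2 = split_correlation(cet_years, cet_smooth, dt_co2_curve)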

Using a shorter 30 year correlation period to compare CET to BEST, the average 30 year correlation is about 36%, with the highest correlations at perturbations.  That can give you an idea of how strong a perturbation was, within the limits of the data of course.  There is a relaxation after each perturbation which is more interesting to me than the correlation. 

Sunday, December 1, 2013

2000 years of Climate

With all the climate reconstructions available I thought I would try one with a little different approach.  There is only so much instrumental data available and most of that is in the northern hemisphere, on land of course, but there is a good bit of ocean data.  After comparing regions of the ocean, the Indian Ocean basin has about the best correlation to "global" surface temperature, "global" mean sea level and "global" ocean heat content.  So I started with the Indian Ocean basin, 60S-30N by 20E-147E, masked using the KNMI Climate Explorer ERSSTv3b SST data that starts in 1854.  That data is my reference.

The Church and White Global Mean Sea Level (GMSL) data is available at the University of Colorado sea level research website along with the current satellite data I used to extend the Church and White series to 2013.  The Levitus et al. Ocean Heat Content (OHC) data is available at the National Oceanic and Atmospheric Administration (NOAA) ocean data center, making up the main core of the reconstruction.

The OHC is only available from 1955 in quarterly format, which I interpolated to monthly just to have a common format.  Using the common baseline period from 1955 to 2011, I scaled the GMSL and OHC data to match the ERSST masked Indian Ocean trend for the common baseline.  Then, since the Berkeley Earth Surface Temperature (BEST) data is available at their website, I figured I would scale that in as well.
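
The scaling step is just matching linear trends over the common window.  A minimal sketch of that approach, not the actual spreadsheet:

    import numpy as np

    def scale_to_reference(series, reference, years, y0=1955, y1=2011):
        # Rescale so the series' linear trend over the baseline window
        # matches the reference trend over the same window.
        years = np.asarray(years)
        series = np.asarray(series, dtype=float)
        reference = np.asarray(reference, dtype=float)
        m = (years >= y0) & (years <= y1)
        slope_s = np.polyfit(years[m], series[m], 1)[0]
        slope_r = np.polyfit(years[m], reference[m], 1)[0]
        return series * (slope_r / slope_s)

    # usage: gmsl_scaled = scale_to_reference(gmsl, indian_ocean_sst, years)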

This is a chart of the baseline period and how the linear regressions line up. 

Once that is done, the full monthly combination back to 1743 looks like this.  I took the liberty of infilling some of the missing BEST data with simple interpolation just to get all the available data involved.  However, there is more instrumental data out there that extends even further back in time, the Central England Temperature (CET) series, which, since England is located near the termination of the Gulf Stream, should have some correlation with OHC and GMSL.  That data is available at the MET/Hadley Centre website.  Using the same baseline period, the CET was also scaled and allowed to join the fray.

As you can see, the CET data is quite noisy since it covers a smaller area, and as you go back in time the BEST data also becomes quite noisy.  So I used a simple 27 month moving average to remove some of the noise in this next chart.

The correlation between CET and BEST is not fantastic, but they do share some common features.  The ultimate goal of all this is to combine instrumental data with the Oppo et al. 2009 Indo-Pacific Warm Pool reconstruction available at the NOAA climate data center website.  Scaling all the instrumental to match a paleo reconstruction might be a bit bassakwards, but it is interesting.

The paleo fit is really not bad in my opinion, considering it has natural and a little anthropogenic smoothing involved.  Not perfect by any stretch, but not too bad.

The IPWP data is not scaled since it should relate to the Indian Ocean SST data, and in this Redneck version of the hockey stick the mean of the 1955 to 2011 instrumental period is about 0.18 C greater than the mean of the 2000 year IPWP reconstruction, which includes some indication of the period formerly known as the Little Ice Age.  There is considerable uncertainty that I am not even going to bother estimating at this time, since the error margins of the BEST and CET data are available for those wishing to check.  Paleo reconstructions can look a bit different working back from the "good" data to the questionable data.

Thursday, November 28, 2013

Stop Assuming so Much

When I mentioned that the AMO and PDO are defined oscillations best used for weather, not climate, I get the deer in the headlights look.  The AMO and PDO are effects.  Something causes them on longer time scales.  If all you are concerned with is weather patterns, they are fine, but if you are trying to predict climate, without even knowing what time scale is best for climate, you don't just assume things are fixed oscillations. 


Ocean heat transport is the "cause" of the oscillations.  The chart above is the percentage of ocean by latitude.  65N has the least ocean and most land, so it is an ocean heat transport "choke" point.  The thermohaline circulation and Coriolis effect, along with equator to pole temperature gradients, pump energy poleward.  The choke point limits that transport, amplifying the impact of ocean heat content in that region. 

This chart compares the 30N-70N ERSSTv3b ocean surface temperature with the BEST land only data for the same region.  The BEST data is scaled by a factor of 0.24 in order to match the trends of both data sets. 

Fans of the AMO will have noticed how similar the 30N-70N SST looks.  This compares the Kaplan AMO with the 30N-70N SST and the yellow curve is the difference between the two. 

Surprise, surprise, the difference bears a remarkable resemblance to the Pacific Decadal Oscillation which has to be scaled since it has been defined as a weather oscillation based on the Aleutian Low.

They are not perfect fits, but the AMO combined with the PDO, properly scaled, pretty much replicates the 30N-70N SST.  If you have the 30N-70N SST though, why do you need to replicate it with a couple of combined weather pseudo-oscillations? 

Since the AMO and PDO are the results of changes in ocean heat transport, not the causes of ocean heat transport, it is a little twisted to consider either one "causing" climate to do anything.  They do impact weather, which really should be considered a different subject. 

Tuesday, November 26, 2013

The Atlantic Multi-decadal Oscillation Misconceptions

Vaughan Pratt, one of the more qualified commentators on the Climate Etc. blog, made another one of those comments that just mystify me, that is, that the impact of the AMO is only +/- 0.1 C degrees.  The AMO is an oscillation by definition, not design, that is detrended for easy illustration more than anything else.  The +/- 0.1 C is a result of the usage, not the phenomenon.

Once you take an anomaly after detrending, your mean or average is locked in place; then, with a little extra smoothing like an annual anomaly, you pretty much have lost all of the reality of the "thing" being used to create the oscillation index.  This is a rough mask of the North Atlantic from 20N-70N and longitude 20E-90W.  The average anomaly in orange has all of the seasonal signal and the anomaly in blue has the average seasonal cycle for the full period from 1854 to 2012 removed.  The average of the full period is 16.2 C degrees and the median is about a half degree lower at 15.7 C degrees.  The comparison seems to indicate that the "AMO" signal drifts about +/- 0.5 C, which is about 5 times larger than Dr. Pratt's +/- 0.1 C degrees.  Dr. Pratt appears to have underestimated the impact of the "AMO" because of the smoothing methods he is trying to explain and the general confusion over what is and is not a climate "oscillation".
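
For reference, the usual detrend-and-anomaly recipe looks roughly like this sketch; it is not Kaplan's exact processing, but it shows how the mean gets locked in place:

    import numpy as np

    def amo_style_index(sst_monthly):
        # Remove the average seasonal cycle, then the linear trend; the
        # detrended anomaly locks the mean in place, which is how a
        # ~+/- 0.5 C wander in absolute SST shrinks to ~+/- 0.1 C.
        sst = np.asarray(sst_monthly, dtype=float)
        n_years = sst.size // 12
        sst = sst[:n_years * 12]
        clim = sst.reshape(n_years, 12).mean(axis=0)   # seasonal cycle
        anom = sst - np.tile(clim, n_years)            # seasonal anomaly
        t = np.arange(anom.size)
        return anom - np.polyval(np.polyfit(t, anom, 1), t)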

Since Rednecks aren't statisticians or logicians in the formal sense, all I can do is question the "common sense" in assuming something is "normal" and negligible when, from the looks of it, the formal statisticians and logicians seem to have underestimated the potential impact by a factor of 5.

Dr. Pratt does seem to be more impressed with the Pacific Decadal Oscillation (PDO) because it has a larger swing, so by his logic it can have more impact.  The PDO is based on how North Western Pacific fish stocks respond to climate oscillations, which vary more in the North Western Pacific than in the whole northern Pacific.  Same basin and range of fluctuation, just a different region picked out for different fish.  If he looked at the entire northern oceans from 20N to 70N he would find that the whole shebang fluctuates pseudo-cyclically.

I used both axes to highlight things with this one for the northern oceans.  It has been noted that the northern hemisphere, with a larger percentage of land, tends to amplify temperature changes, which some seem to think indicates that it's "worse than it looks", because they assume they know what "normal" is supposed to be, pretty much like they assume that defined oscillations mean more than they are supposed to mean. 

I have to admit though that thanks to Dr. Pratt and Greg Goodman I now know how to smooth the crap out of any time series. 

Sunday, November 24, 2013

A Guest Post on Volcanoes? I Don't think So.

If you take the "standard" volcanic forcing estimates and try to isolate the impact of volcanoes on climate, you will find that things just don't add up.  That is because the concept of a "standard" forcing is flawed.  Without going into a great deal of math, try to clear your mind and think of what you have sans all the theories.

You have an ocean with a sea surface temperature that is always greater than the average temperature of the oceans.  You have an average temperature of the oceans which is about equal to the land surface temperature.  You have a northern hemisphere which is about 3 degrees warmer than the southern hemisphere.  If you change how well the sea surface temperature mixes with the deeper oceans or divides itself between the hemispheres, you will change the "average" surface temperature.  There is no "forcing" required in the radiant physics sense, just changes in the mechanical mixing efficiency.  A small imbalance in "forcing" can have a greater impact than a larger "global" "forcing" change.  It doesn't matter if that forcing is positive, negative, due to volcanoes, the sun or unicorns; imbalances will always have a greater impact on shorter time scales than uniform forcing. 

Why?  Because a "uniform" forcing reduces the potential of imbalances, decreasing the mixing efficiency and actually slowing the rate of warming.  Since the Earth's land and oceans are not symmetrically distributed around either the equatorial or polar planes, there will always be some imbalances and temperature gradients; uniform forcing just serves to reduce the degree of imbalance. 

My writing a guest post on "Volcanic Direct and Indirect Effects on Climate" would be a complete waste of time, because the chosen radiant frame of reference doesn't allow the communication of the basics that are completely ignored by radiant physics based climate theory.  For "global" warming in the radiant sense, everything "globally" would warm at the same rate, slow as molasses.  One full overturning of the oceans takes on the order of 1700 years.  It would take an "average", based on the limited data available, of ~316 years for the "globe" to warm 0.8 C.  By using land based and surface skin measurements, warming can "appear" to be greater, but once you back out the internal pseudo-oscillations, it is about 316 years per 0.8 C degrees.

This chart shows the combined volcanic and solar impact on latitude bands of the oceans using the ERSSTv3b data downloaded from KNMI Climate Explorer.  You can see the complex recovery paths of each band, with the Northern Hemisphere having the fastest recovery, producing an overshoot of the mean, and the combined ocean regions "hunting" for a new "equilibrium" or quasi-steady state condition.  The only spot where the bands are even close to being in sync is during the 1910 period.  The most interesting of all the bands is the 65S to 55S band, which has the highest mixing efficiency.  The strongest responses are related to the lowest mixing efficiencies, with 35N-45N and 45N-55N, located at ocean heat transfer choke points, standing out.  That choke point causes the land surfaces in that region to amplify the impact of the reduced mixing efficiency, resulting in land temperatures being amplified by ~1.8 times the SST change.  Some portion of that amplification is likely due to "other" causes, but without the choke point, land warming would be more uniform. 

The impact of the change in mixing efficiency has been highlighted in a number of papers focusing on climate of the past.  In their paper, On the Relative Importance of Meridional and Zonal Sea Surface Temperature Gradients for the Onset of Ice Ages and on Pliocene-Pleistocene Climate Evolution, Brierley and Fedorov estimate respective impacts of 3.2 C and 0.6 C.  It is not like the information is not out there; it is that the radiant "uniform" forcing models are in conflict with reality.

To "explain" how imbalanced forcing can both warm or cool depending on region and timing requires an audience capable of listening, not an audience wedded to a failing theory. 

Saturday, November 23, 2013

Just for Fun - Battle of the Surface Temperature Reconstructions

The Berkeley Earth Surface Temperature (BEST) program supposedly has a "global" combined land and ocean temperature series that is ready but just getting some last minute tweaks and reviews.  I have been looking for it to hit the news but I keep getting tired of waiting.  The GISS land and ocean surface temperature appears to have a baseline/seasonal cycle selection issue, and I have been wanting to see how much it might impact trends, especially the "pause".

The difference should not be much, less than the stated error margin, which normally would be no big deal.  Since climate change is a political hot potato though, it seems every milliKelvin is a battleground.  So I built my own simple "global" surface temperature record using the full baseline and seasonal cycle for periods where both hemispheres actually had data. 

Tah dah!  As advertised there is not much difference, since GISS LOTI uses the same "global" ocean data, ERSSTv3b, and what difference there is is mainly near the end, where the long range interpolation used by GISS might tend to over-emphasize Arctic Winter Warming.  BEST uses kriging, which should be more reliable than simple interpolation provided you avoid using unicorns in the sky for a reference.

In a previous post I showed how the inclusion of the Antarctic data had added to the variance of the southern hemisphere.

I had also shown the difference in the northern hemisphere, which I suspected was due to Arctic Winter Warming.  So now I have just upped the accuracy a tiny bit by removing the seasonal cycle from both the land BEST and ocean ERSSTv3b data and baselining to the entire period, which is supposed to be the way it should be done.  This kind of sucks though, because every year the entire reconstruction would need to be adjusted to the newer, longer baseline. 
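
The blend itself is just an area-weighted average of the land and ocean series.  A minimal sketch, assuming a nominal ~0.29 land fraction:

    # Area-weighted land + ocean blend; the 0.29 land fraction is nominal,
    # and as noted below the ratio I actually used is pretty old.
    LAND_FRACTION = 0.29

    def blend(t_land, t_ocean, f_land=LAND_FRACTION):
        return f_land * t_land + (1.0 - f_land) * t_ocean

    # usage: t_global = blend(best_land_monthly, ersst_ocean_monthly)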

Since I used the actual temperatures instead of anomalies I also have a "global" land and ocean diurnal temperature range.

And there is the "Global" Tmax and Tmin with all its seasonal cycle glory. 

While I am pretty sure that my reconstruction is pretty close, it depends on the current actual land/ocean ratio and mine is pretty old, so it would be Best to wait for BEST before screaming that GISS might be off by 0.05 C.