New Computer Fund

Monday, March 23, 2015

New papers getting a look

Bjorn Stevens and crew have been busy.  I stumbled on a Sherwood et al. paper that includes Bjorn Stevens among the co-authors.  The paper concerns some issues with the "classic" radiant forcing versus surface temperature approach and the "adjustments" that should be considered.  It has been reviewed and is ready for publication but hasn't hit the presses yet.  A big item in the paper concerns the "fungibility" of dTs, which I have harped on quite a bit by invoking the zeroth law of thermodynamics.  "Surface" temperature, where the definition of the surface is a bit vague, isn't all that helpful.  Unfortunately, surface temperature is about all we have, so there needs to be some way to work around the issues.  Tropical SST is about the best workaround, but that really hasn't caught on.

In my post How Solvable is a Problem I went over some of the approximations and their limitations.  I am pretty sure the problem isn't as "solvable" as many would like, but it looks to be more solvable than I initially thought.

Since Dr. Stevens also has a recent paper on aerosol indirect effects, I thought I would review the solar TSI and lower stratosphere correlation.

I got pretty lazy with this chart, but it shows the rough correlation, which is about 37% if you lag solar by about a year.  The correlation is better when volcanic sulfates are present in the tropics, but I can safely say most of the stratospheric cooling is due to a combination of volcanic aerosols and solar variability.  The combinations, or non-linearly coupled relationships, are a large part of the limit to a "solution".  When you have three or more inter-relationships you get into the n-body type problems that are beyond my pay grade.  You can call it chaos, or cheat and use larger error margins.  I am in the cheat camp on this one.
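For what it's worth, the lag test itself is trivial to script.  Here is a minimal sketch in Python; the file and column names are placeholders, not the data behind the chart, and since "37%" could be read as either r or r-squared it prints both:

```python
# Sketch of the lag test described above: shift a TSI series by ~1 year and
# correlate it with lower-stratosphere temperature anomalies.
import pandas as pd

df = pd.read_csv("tsi_tls_monthly.csv", parse_dates=["date"])  # hypothetical file
lag_months = 12
shifted_tsi = df["tsi"].shift(lag_months)      # lag solar by about a year

r = shifted_tsi.corr(df["tls"])                # Pearson correlation, ignores NaN pairs
print(f"correlation at {lag_months}-month lag: r = {r:.2f}, r^2 = {r**2:.2f}")
```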

The cheat camp would post up charts like this.

We are on a simple linear regression path and about to intersect another "normal range", so surface temperatures in the tropical oceans should crab sideways in the "normal" range with an offset due to CO2 and other influences.  Not a very sexy prediction, but likely pretty accurate.  "Global" SST with light smoothing should vary by about +/- 0.3 C and with heavy smoothing possibly +/- 0.2 C.
Plus or minus 0.3 C is a lot better than +/- 1.25 C, but with one sigma as the error margin there is still roughly a one-in-three chance of falling outside the range.  So technically, I should change my handle to +/- 0.3 C to indicate an overall uncertainty instead of the +/- 0.2 C, CO2-only claim.  That would indicate that I "project" about 0.8 C per 3.7 Wm-2 from the satellite baseline with +/- 0.3 C of uncertainty.  The limit of course is water vapor and aerosols, which tend to regulate the upper end.  Global mean surface temperature still sucks, but with the oceans as a reference, it sucks less.  This hinges on a better understanding of the interacting solar and volcanic "adjustments", which is looking more likely.
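For the curious, a chart of that sort takes only a few lines.  A minimal sketch in Python, assuming a placeholder CSV of monthly tropical SST anomalies rather than my actual data:

```python
# Sketch of a "cheat camp" chart: a linear trend through SST anomalies with a
# +/- 0.3 C "normal range" band around it.  File/column names are placeholders.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("tropical_sst_monthly.csv", parse_dates=["date"])  # hypothetical
t = np.arange(len(df))                          # months since start of record
slope, intercept = np.polyfit(t, df["anomaly"], 1)
trend = slope * t + intercept

plt.plot(df["date"], df["anomaly"], lw=0.5, label="monthly anomaly")
plt.plot(df["date"], trend, label="linear trend")
plt.fill_between(df["date"], trend - 0.3, trend + 0.3, alpha=0.2,
                 label='+/- 0.3 C "normal" range')
plt.ylabel("SST anomaly (C)")
plt.legend()
plt.show()
```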

If there are more ocean papers, especially tropical ocean papers, that attempt to estimate "sensitivity" sans the flaky 30% of the surface covered by land temperatures, my estimate should start making more sense.  It is heartening to see clouds being viewed as a regulating feedback rather than a doomsday positive feedback, though that took a lot longer than I expected.  I am still a bit surprised it took so long for the dTs "fungibility" issue to be acknowledged, since that was about my first incoherent blog post topic.  "Energy is fungible; the work it does is not" was the bottom line of that post.  Since we don't have a "global" mean surface energy anomaly or a good way to create one, adjusting dTs is the next best route.  Then we may discover a better metric along the way.  Getting that accepted will be a challenge.

With more attention being paid to the sharper tacks (Stephens, Stevens, Schwartz, Webster, etc.), this could turn into the fun puzzle-solving effort I envisioned when I first started following this otherwise colossal waste of time.


Wednesday, March 18, 2015

How solvable is a problem?

If you happen upon this blog you will find a lot of posts that don't resemble theoretical physics.  There is a theoretical basis for most of my posts, but it isn't your "standard" physics approach.  There are hundreds of approaches that can be used, and you really never know which approach is best until you determine how solvable a problem is.

One of the first approaches I used with climate change was based on Kimoto's 2009 paper "On the confusion of Planck feedback parameters".  Kimoto used a derivation of the change in temperature with respect to "forcing", roughly dT = dF/4 (about 4 Wm-2 of flux change per degree, from dF/dT = 4F/T for a Stefan-Boltzmann emitter), which has some limits.  Since F is actual energy flux, not forcing, you have to consider that different types of energy flux have more or less impact on temperature.  Less impact would come from latent and convective "cooling", which is actual energy transfer to another layer of the problem, and from temperatures well above or below "normal".  The factor of 4 implies a temperature at which the flux changes by exactly 4 Wm-2 per degree.  Depending on your required accuracy, T needs to stay in a range where that factor of 4 doesn't create more inaccuracy than you can tolerate.  So the problem is "solvable" only within a certain temperature/energy range that depends on your required accuracy.  If you have a larger range you need to relax your uncertainty requirements or pick another method.
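To see how narrow that range is, here is a minimal sketch in Python using nothing but the Stefan-Boltzmann law; the temperatures are just sample points, not observations:

```python
# How good is the "4 Wm-2 per degree" shortcut?  For a Stefan-Boltzmann
# emitter dF/dT = 4*sigma*T^3, so the factor of 4 only holds near one
# particular temperature.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

for t_c in (-80, -18, 0, 4, 15, 30, 50):        # sample temperatures in Celsius
    t_k = t_c + 273.15
    flux = SIGMA * t_k**4                       # emitted flux, W m^-2
    dfdt = 4 * SIGMA * t_k**3                   # local slope, W m^-2 per K
    print(f"{t_c:>4} C: F = {flux:6.1f} W/m^2, dF/dT = {dfdt:4.2f} W/m^2/K")

# dF/dT is ~3.8 near -18 C (the effective radiating temperature) and ~5.4
# near 15 C (the nominal surface), so "4" is only exact around -13 C.
```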

You can modify the simple derivation to dT = (a*dF1 + b*dF2 + ... + n*dFn)/4, which is what Kimoto did to compare the state-of-the-science estimates of radiant, latent and sensible energy flux at the time.  You can do that because energy is fungible, but you would always have an unknown uncertainty factor because, while energy is fungible, the work that it does is not.  In a non-linear dissipative system, some of that work could be used to store energy that can reappear on a different time scale.  You need to determine the relevant time scales so you can meet the accuracy you require and have some measure of confidence in your results.
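A minimal sketch of that kind of weighted decomposition, using the flux partition quoted later in this post (396 radiant, 98 latent, 30 sensible) and an assumed surface temperature; the weights are placeholders for illustration, not Kimoto's values:

```python
# Kimoto-style decomposition sketch: apply a 4*F/T response to each surface
# flux component, with weights (a, b, c) for how strongly each component
# responds to temperature.  The weights here are assumptions for illustration.
T_S = 289.0                        # assumed mean surface temperature, K
FLUXES = {"radiant": 396.0,        # surface flux components, W m^-2
          "latent": 98.0,          # (the partition quoted later in the post)
          "sensible": 30.0}
WEIGHTS = {"radiant": 1.0, "latent": 0.5, "sensible": 0.5}  # assumed a, b, c

lam = sum(WEIGHTS[k] * 4.0 * f / T_S for k, f in FLUXES.items())  # W m^-2 per K
print(f"weighted response : {lam:.2f} W/m^2/K")
print(f"dT per 3.7 W/m^2  : {3.7 / lam:.2f} C")

# With all weights at 1 this is 4*(396+98+30)/289 ~ 7.3 W/m^2/K; radiative-only
# gives 4*396/289 ~ 5.5.  The assumed partition and weights dominate the
# answer, which is exactly the uncertainty being described above.
```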

Ironically, Kimoto's paper was criticized for making some of the same simplifying assumptions that climate science uses to begin with.  The assumptions are only valid for a small range, and you cannot be sure how small that range needs to be without determining the relevant time scales.

In reality there are no constants in the Kimoto equation.  Each assumed constant is a function of the other assumed constants.  You have a pretty wicked partial differential equation.  With three or more variables it becomes a version of the n-body problem, which should have a Nobel Prize attached to its solution.  I have absolutely no fantasies about solving such a problem, so I took the "how solvable is it" approach.


The zeroth law of thermodynamics and the definition of climate sensitivity come into conflict when you try to do that.  The range of temperatures in the lower atmosphere is so large in comparison to the range where the dT = dF/4 approximation holds that you automatically have +/- 0.35 C of irreducible uncertainty.  That means you can have a super accurate "surface" temperature, but the energy associated with that temperature can vary by more than one Wm-2.  If you use sea surface temperature, which has a small range, you can reduce that uncertainty, but then you have 30% of the Earth not being considered, resulting in about the same uncertainty margin.  If you would like to check this, pick some random temperatures in a range from -80 C to 50 C and convert them to energy flux using the Stefan-Boltzmann law.  Then average the temperatures and the fluxes separately and reconvert to compare.  Since -80 C has a S-B energy of 79 Wm-2 versus 618 Wm-2 for 50 C, neglecting any latent energy, you can have a large error.  In fact the very basic greenhouse gas effect is based on a 15 C (~390 Wm-2) surface temperature versus 240 Wm-2 (~-18 C) of effective outgoing radiant energy, along with the assumption that there is no significant error in this apples-to-pears comparison.  That comparison carries roughly +/- 2 C and 10 Wm-2 of uncertainty on its own.  That in no way implies there is no greenhouse effect, just that most of the simple explanations do little to highlight the actual complexity.
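A minimal sketch of that check in Python, using just the two endpoint temperatures quoted above:

```python
# Average two temperatures directly, then average their Stefan-Boltzmann
# fluxes and convert back, and compare the two results.
SIGMA = 5.67e-8                                  # W m^-2 K^-4

def flux(t_c):
    """Blackbody flux (W m^-2) for a temperature in Celsius, latent ignored."""
    return SIGMA * (t_c + 273.15) ** 4

def temp(f):
    """Blackbody temperature (Celsius) for a flux in W m^-2."""
    return (f / SIGMA) ** 0.25 - 273.15

cold, hot = -80.0, 50.0
mean_t = (cold + hot) / 2                        # simple temperature average
mean_f = (flux(cold) + flux(hot)) / 2            # flux average, ~(79 + 618)/2
print(f"average of temperatures: {mean_t:.1f} C -> {flux(mean_t):.0f} W/m^2")
print(f"average of fluxes      : {mean_f:.0f} W/m^2 -> {temp(mean_f):.1f} C")

# The two averages disagree by roughly 100 W/m^2 (about 20 C) for this extreme
# pair; with a realistic spread of temperatures the gap shrinks but never
# disappears, which is the zeroth law bookkeeping problem described above.
```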

Determining that the problem likely cannot be solved to better than +/- 0.35 C of accuracy using these methods is very valid theoretical physics and should have been a priority from the beginning.


If you look at the TOA imbalance you will see +/- 0.4 Wm-2, and due to the zeroth law issue that could just as easily be in C as well.  The surface imbalance uncertainty is larger, +/- 17 Wm-2, but that is due more to poor approaches than to physical limits.  The actual physical uncertainty should be closer to +/- 8 Wm-2, which is due to the range of water vapor phase change temperatures.  Lower cloud bases with more cloud condensation nuclei can have a lower freezing point.  Changing salinity changes freezing points.  When you consider both you have about +/- 8 Wm-2 of "normal" range.

Since that +/- 8 Wm-2 is "global", you can consider the combined surface flux: 396 radiant, 98 latent and 30 sensible, which total 524 Wm-2, about half of the incident solar energy available.  I used my own estimate of latent and sensible based on Chou et al. 2004, by the way.  If there had not been gross underestimations in the past, the Stephens et al. budget would reflect that.  This is part of the "scientific" inertia problem.  Old estimates don't go gracefully into the scientific good night.

On the relevant time scale you have solar to consider.  A very small reduction in solar TSI, about 1 Wm-2 for a long period of time, can result in an imbalance of 0.25 to 0.5 Wm-2 depending on how you approach the problem.  With an ocean approach, which has a long lag time, the imbalance would be closer to 0.5 Wm-2, and with an atmospheric approach with little lag it would be closer to 0.25 Wm-2.  In either case that is a significant portion of the 0.6 +/- 0.4 Wm-2, isn't it?

Ein = Eout is perfectly valid as an approximation, even in a non-equilibrium system, provided you have a reasonable time scale and some inkling of realistic uncertainty in mind.  That time scale could be 20,000 years, which makes a couple hundred years of observation a bit lacking.  If you use paleo to extend your observations you run into the same +/- 0.35 C minimum uncertainty, and if you use mainly land based proxies you can reach that +/- 8 Wm-2 uncertainty because trees benefit from the latent heat loss in the form of precipitation.  Let's face it, periods of prolonged drought do tend to be warmer.  Paleo though has its own cadre of oversimplifiers.  When you combine paleo reconstructions from areas that have a large range of temperatures, the zeroth law still has to be considered.  For this reason paleo reconstructions of ocean temperatures, where there is less variation in temperature, would tend to have an advantage, but most of the "unprecedented" reconstructions involve high latitude, higher altitude regions that have the greatest thermal noise and represent the smallest areas of the surface.  Tropical reconstructions, which represent the majority of the energy and at least half of the surface area of the Earth, paint an entirely different story.  Obviously, on a planet with glacial and interglacial periods the interglacial would be warmer, and if the general trend in glacial extent is downward, there would be warming.  The question though is how much warming and how much energy is required for that warming.

If this weren't a global climate problem, you could control conditions to reduce uncertainty and do some amazing stuff, like ultra-large-scale integrated circuits.  With a planet, though, you will most likely have a larger-than-you-like uncertainty range, and you have to be smart enough to accept that.  Then you can nibble away at some of the edges with combinations of different methods which have different causes of uncertainty.  Lots of simple models can be more productive than one complex model if they use different frames of reference.

One model so simple it hurts is "average" ocean energy versus "estimated" Downwelling Long Wave Radiation (DWLR).  The approximate average effective energy of the oceans is 334.5 Wm-2 at 4 C, and the average estimated DWLR is about 334.5 Wm-2.  If the oceans are sea ice free, the "global" impact of the average ocean energy is 0.71*334.5 = 237.5 Wm-2, or roughly the value of the effective radiant layer of the atmosphere.  There is a reason for the 4 C to be stable: the maximum density of fresh water occurs at 4 C.  Adding salt varies the depth of that 4 C temperature layer, but not its value, and that layer tends to regulate average energy on much longer time scales since the majority of the ocean is below the 4 C layer.  Sea ice extent varies and the depth of the 4 C layer changes, so there is a range of values you can expect, but 4 C provides a simple, reliable frame of reference.  Based on this reference, a 3.7 Wm-2 increase in DWLR should result in a 3.7 Wm-2 increase in the "average" energy of the oceans, which is about 0.7 C of temperature increase, "all things remaining equal".
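A minimal sketch of that reference in Python; the 0.71 ice-free ocean fraction and the 3.7 Wm-2 are the numbers used above, and everything else is just the Stefan-Boltzmann law:

```python
# The "so simple it hurts" ocean reference: effective energy of a 4 C layer,
# its "global" share, and the warming implied by an extra 3.7 W m^-2.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
OCEAN_FRACTION = 0.71      # ice-free ocean share of the surface
T_REF = 4.0 + 273.15       # 4 C reference layer, in K

f_ref = SIGMA * T_REF**4                         # ~334.5 W m^-2
print(f"effective energy at 4 C : {f_ref:.1f} W/m^2")
print(f"'global' impact (x0.71) : {OCEAN_FRACTION * f_ref:.1f} W/m^2")

# "All things remaining equal", add 3.7 W m^-2 and convert back to temperature.
t_new = ((f_ref + 3.7) / SIGMA) ** 0.25
print(f"warming per 3.7 W/m^2   : {t_new - T_REF:.2f} C")   # ~0.76 C, the "about 0.7" above
```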

Perhaps that is too simple or elegant to be considered theoretical physics?  I don't know, but most of the problem is setting up the problem so it can be solved to some useful uncertainty interval.  Using just the "all things remaining equal" estimates you have a range of 0.7 to 1.2 C per 3.7 Wm-2 increase in atmospheric resistance to heat loss.  The unequal part is the water vapor response, which, based on more recent and hopefully more accurate estimates, is close to the limit of positive feedback and in the upper end of its regulating feedback range.  This should make higher than 2.5 C "equilibrium" very unlikely and reduce the likely range to 1.2 to 2.0 C per 3.7 Wm-2 of "forcing".  Energy budget model estimates are converging on this lower range, and they still don't consider the longer time frames required for recovery from prolonged solar or volcanic "forcing".
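One way to reproduce those two endpoints, for illustration only; the pairing of the 4 C ocean reference with the commonly cited ~3.2 Wm-2 per K no-feedback Planck response is my reading of the range, not a quote:

```python
# Bracket the quoted 0.7 to 1.2 C per 3.7 W m^-2 range: low end from the 4 C
# ocean reference (lambda = 4*F/T), high end from the commonly cited
# no-feedback Planck response of ~3.2 W m^-2 per K.
F_OCEAN, T_OCEAN = 334.5, 277.15     # 4 C reference layer, W m^-2 and K
LAMBDA_PLANCK = 3.2                  # W m^-2 per K

low = 3.7 / (4 * F_OCEAN / T_OCEAN)  # ~0.77 C
high = 3.7 / LAMBDA_PLANCK           # ~1.16 C
print(f"range: {low:.2f} to {high:.2f} C per 3.7 W/m^2")
```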

If this were a "normal" problem it would be fun trying various methods to nibble at the uncertainty margins, but this is a "post-normal", as in abnormal, problem.  There is a great deal of fearful overconfidence involved that has turned into advocacy.  I have never been one to follow the panic-stricken, as it is generally the cooler heads that win the day, but I must be an exception.  We live in a glass-half-empty society that tends to focus on the negatives instead of appreciating the positives.  When the glass-half-empty crowd "solves" a problem that has never been properly posed, you end up where we are today.  If Climate Change is worse than they thought, there is nothing we can do about it.  If Climate Change is not as bad as they thought, then there are rational steps that should be taken.  The panic-stricken are typically not rational.










Tuesday, March 17, 2015

Latent Heat Flux

While the gang ponders the impact of Merchants of Doubt, the movie, I have drifted back to my questions about "global" latent heat flux.  There are considerable differences between the various latent heat flux products.  In the beginning, back in the day of the younger James Hansen, data was crude at best.  Kiehl and Trenberth produced a series of Earth Energy Budgets that attempted to get all the pertinent information down in an easy-to-read format.

If you have followed my ramblings you know that I discovered an error in these budgets, along with a number of others, but it was Stephens et al. that finally published a revised budget a couple of years ago.

This budget was discussed on Climate Etc. back in 2012.  Stephens et al. have latent at 88 +/- 10 Wm-2, with K&T likely being the minus 10 and Chou et al. likely the plus 10.  The other major difference is the atmospheric window, where Stephens et al. have about 20 +/- 4 Wm-2 and K&T, in their latest budget, are still using 40 Wm-2 with no indication of uncertainty.

This post was prompted by Monckton of Brenchley mentioning the Planck feedback parameter he derived using the K&T budget.  That was based on the paper, On the confusion of Planck feedback parameters, by Kyoji Kimoto.  At the time I mentioned that Monckton and Kimoto were off because they used the inaccurate K&T budget, but that was prior to a real scientist publishing a revised budget.

Climate science appears to be finally catching up, but various irreducibly simple models never finish connecting the dots.  Tropical water vapor, clouds and convection are the most likely candidates for the regulating feedback.  At around 98 Wm-2 of latent flux you peg the negative feedback portion of the regulating feedback, triggering deep convection.  If you initially assume that latent is 78 Wm-2, one of the initial estimates used by K&T, you have a large range of positive feedback from 78 to 98 Wm-2, or 20 Wm-2, which is about half of the atmospheric window assumed by K&T.  Since absolute temperature has been a wild-assed guess at best, along with latent and convective energy flux, the room for warming has always been overestimated.

Andy Lacis on Climate Etc. mentioned they project a 28% increase in atmospheric water vapor, which would be roughly equivalent to a 28% increase in latent heat flux.  A 28% increase from 78 Wm-2 would be 99.8 Wm-2, which is roughly where we are now if Chou et al. had an accurate estimate back in 2004.

The followers of the Merchants of Doubt drivel seem to believe that anytime someone mentions uncertainty they are clouding the issue, but when all uncertainties tend to lead to the highest possible estimate, that is a simple sign of bias.  You do not have to be a rocket or atmospheric scientist to be aware of sensitivity to error.  In climate science the old guard are doggedly defending estimates based on extremely poor data while ignoring prompting by peers that they are off.  With accurate latent estimates, Earth's energy budget is at the elusive Planck feedback limit.

That does not mean there is not going to be some residual warming at the poles, but most of that will likely be in winter, related to energy release mechanisms like Sudden Stratospheric Warming (SSW) and Arctic Winter Warming (AWW) that are driven by increased poleward "wall" energy flux.

The key to "proving", as in providing convincing evidence, is more accurate estimates of latent heat flux and convective response.  The newest Stephens et al. paper, also discussed on Climate Etc., appears to have part of that evidence.

It indicates that the northern hemisphere is releasing energy while more is being provided by the southern hemisphere via ocean currents.  Unfortunately, they may not have the estimate of northern hemisphere heat loss to space associated with SSW and AWW events correct.  That may require a more creative non-linear approach, likely fluctuation-dissipation analysis, but it really could be estimated with surface energy anomalies and high northern latitude ocean heat uptake.  Basically, northern high latitude SST has increased while ocean heat uptake has flatlined above 50 N.  That difference, after considering uncertainty, should be heat loss that is not easily measured by satellite at the pole.  That loss can be on the order of 10^22 Joules in a single season.
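For scale, a back-of-envelope conversion of that number (my own arithmetic, not a figure from the paper):

```python
# What does 1e22 J released in a single season look like as a global-mean flux?
ENERGY_J = 1e22                  # seasonal heat loss quoted above
SEASON_S = 90 * 24 * 3600        # ~90 days, in seconds
YEAR_S = 365 * 24 * 3600         # one year, in seconds
EARTH_AREA_M2 = 5.1e14           # surface area of the Earth, m^2

seasonal = ENERGY_J / (SEASON_S * EARTH_AREA_M2)   # ~2.5 W/m^2 over the season
annualized = ENERGY_J / (YEAR_S * EARTH_AREA_M2)   # ~0.6 W/m^2 over a year
print(f"~{seasonal:.1f} W/m^2 over the season, ~{annualized:.2f} W/m^2 annualized")
# The annualized figure is the same order as the 0.6 +/- 0.4 W/m^2 TOA imbalance
# discussed in the previous post, which is why a loss this size matters.
```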

The northern hemisphere heat loss may or may not trigger a negative AMO phase, which would make the shift obvious, or it may just continue destabilizing the north polar jet/vortex until some glacial increase begins to store the energy.  Greenland is beginning to show signs of accumulation, but that may take a decade to be "noticed" given the current state of climate science.  Then again, that loss could trigger an ice mass loss event; such is long-term climate.

In any case, perhaps the current buzz will inspire some to revisit the much-maligned On the Confusion of Planck Feedback Parameters with more up-to-date observations.