New Computer Fund


Monday, May 7, 2012

De-Trending UAH

The University of Alabama, Huntsville Microwave Sounding Unit (UAH MSU) atmospheric temperature data is supposedly the gold standard for global mean temperature. Since I am playing with Open Office trying to get it to do some time series stuff, I detrended the Northern Extent, Tropics and Southern Extent data sets.
The method I used is kludgy, but appears to work fairly well. I add the maximum trend, or slope, of the individual data series at the start of the series and incrementally decrease the addition per data point toward the end of the series. In this case there are 399 data points, so the first point has the slope times 399/399 added and the final point has the slope times 0/399 added. This increases the mean of the series by the slope, which I subtract to return the plot to zero. Using the linear regression and mean value functions in the charting program, the regression after de-trending equals the mean, which equals zero, as a check. Obviously, the complexity of the system dynamics would not allow perfect removal, but a good portion of the internal variability should be reduced by subtracting the "average" from each series.
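For anyone who wants to try the same ramp-style de-trend outside of Open Office, here is a minimal Python sketch of my reading of the method described above. The synthetic 399-point series is just a placeholder for a real UAH regional anomaly column, and the file handling is left out entirely.

import numpy as np

def ramp_detrend(series):
    # Remove the linear trend by adding a declining ramp, then re-zero the mean.
    series = np.asarray(series, dtype=float)
    n = len(series)
    x = np.arange(n)
    slope, intercept = np.polyfit(x, series, 1)   # least-squares trend per data point
    total_rise = slope * (n - 1)                  # trend over the whole record
    ramp = total_rise * (n - 1 - x) / (n - 1)     # full addition at the start, zero at the end
    detrended = series + ramp
    return detrended - detrended.mean()           # regression ~ mean ~ 0 as a check

# Synthetic stand-in for a UAH regional series: a small positive trend plus noise
anoms = 0.0007 * np.arange(399) + np.random.normal(0, 0.1, 399)
flat = ramp_detrend(anoms)
print(np.polyfit(np.arange(399), flat, 1)[0], flat.mean())   # both should be near zero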
Here are the results of subtracting the "average" from each series and removing the de-trending. The final slope of the Tropics and Southern Extent is 0.007 degrees C per year or 0.07 degrees C per decade. This should be the slope of the response to continuous forcing change. The slope of the Northern Extent in this case is greater, 0.026 C per year or 0.26 degrees C per decade. This should be due to water vapor enhancement in the high northern latitudes, amplification from the oceans' thermohaline current and land use change amplification of CO2 or other continuous forcing.
In this plot I change the start of the comparison to 1995. There is a minor change in slope that is unlikely to be statistically significant. This is not proof of the method, but it tends to lend some credibility, since there is a much larger change in slope using the raw data with the internal variability included. Since there is some speculation that cooling started circa 2000, here is that plot.
In this case there is a statistically significant change in the Tropics and Southern Extent with no significant change in the Northern extent. The length of the series is only 11 years, so there cannot be a great deal of confidence in the significance, but it is slightly over 50% likely to be a shift in the climate, if the method is valid. Solar forcing is the only likely cause, so there may be more solar impact than generally thought in the global oceans which make up the majority of the Tropical and Southern extent surface.
Just to be complete, this is from 2002. This is much too short to be of any significance, but it is comforting to see there are no drastic changes from the 2000 start as far as the methodology is concerned. I am not going to attempt to derive some validation of the method at this time. In the future I may use the same procedure for other regions, comparing land to oceans, which may help either discredit the method or help determine the degree of land use impact. What will happen, I don't know, but it should be a reasonable way to remove most of the internal variability without trying to specifically target the individual causes. Then again, it could be a waste of time. Initially though, it appears to agree rather well with the Douglass and Christy "Limits on CO2 Climate Forcing from Recent Temperature Data of Earth" results published in 2009, with the addition of a little hint of solar impact.

Note: I am sure this method has probably been used before, but I just developed what I am doing here on my own. If anyone has a link to the original development, if it is indeed a valid method, let me know.

UPDATE: http://phys.org/news/2012-05-satellite-global-climate-closer.html There will be some adjustments to UAH which will be interesting since, while they should increase the overall slope, they should also increase the lower slope from 2000. I will revise this when the new data is available.

UPDATE 5/9: Since the death of UAH MSU seems to have been announced in the climate world, I thought I would check if the reports are accurate.
Using the same method on RSS, the results are seriously different. Since I had the spreadsheet for the UAH detrend minus common signal, I plugged in the RSS tropics and extents using the UAH common trend. This should highlight where the two data sets differ.
There is a swoop to the curve, so the differences are near the start and the end. At the start, early in the satellite program, it is not all that exciting to see less than stellar performance. It's a learning curve thing, ya know. The end, though, is a bit unusual; by now things should have been improving. So what's up?

Notice that the RSS detrended has a very pronounced change in the orange, tropics plot. The blue, northern extent plot is subdued. Since these are plotted with the average of the three detrended series, the most subdued produced the strongest common signal. So the Northern extent would be driving climate according to the RSS data. While that may sound nifty, the ocean heat content is no piker in the climate game. The tropics trend is negative for the RSS with the detrended variation removed, which goes counter to what I would expect. So I may have screwed up, or there may be some unintentional bias in the RSS data. I will go back to check how badly I screwed up, but I really suspect the RSS calibration is not consistent with the satellite changes. UAH is also likely to have issues, but it looks like any errors they may have made were consistent, which would mean the data is still useful.

Found an issue in the spreadsheet, so the chart above is updated. The Tropics are mainly the issue.
This chart shows the RSS tropics in green with the average of the tropics and extents, detrended, removed. There is no change to the slope, but there may be some interesting things.
That is the same plot using the UAH data. In the UAH there is a difference in the slope between the raw data and the raw data minus the detrended average. There is virtually no difference in the RSS tropics data. Which one makes more sense? Hard to say, but the northern extent and southern extent have larger changes in RSS. As a note, the GISS surface data is very similar to the RSS when I use the same procedure. So similar, that RSS may have fudged their calibration a touch to more closely match GISS. Now that would generate a little buzz in the remote sensing community :) Still, UAH may be high early but RSS looks low late. Time will tell.

Monday, February 27, 2012

Volcanoes, Ice Ages and Average Temperatures?


Michael Mann has a post on Real Climate on his new paper about the Little Ice Age and volcanoes. The image above is from that post with part of the text to explain the different plots.


This is a plot I made of GISS temperatures I downloaded from the NASA GISTEMP site. Since the data for the Antarctic starts around 1902, I averaged all the latitude bands for the period 1902 to 2011 and plotted the global temperature average with and without the poles. With those is a plot of the average of the two poles alone. You can see the familiar shape of the global temperature average in the average of the poles much more clearly than in the global averages as this chart is scaled. For simplicity, the data is plotted "as is" from the GISTEMP site in hundredths of a degree, so 200 would actually be two degrees.

While looking into the Siberian agricultural impact on northern hemisphere temperature I noticed that regional volcanoes had a strong impact on temperatures. Kamchatka, the Kuril Islands, Iceland, the Aleutian Islands and Alaska mainly, though Washington State and Japanese volcanoes also have some impact.


Here the Arctic and Antarctic are plotted separately with the global average excluding the poles. The Antarctic data is not all that great because of conditions, and it is pretty obvious that the fluctuations in measurements start decreasing as we approach the 1960s. What is particularly interesting is that the 1960 to present period shows a large increase in temperature that is not evident in any of the satellite records.


Here I have plotted the global average using the 1902 to 2011 base period without the Antarctic. The Antarctic is still on the plot, so the pre-1960s noise really stands out. Polar amplification of the Greenhouse Effect is projected to be a big deal. Other than the surface temperature records, there has been no measurable warming in the Antarctic, and the surface stations in the Antarctic are notorious for issues with being covered by snow drifts, which can cause higher than average readings compared to when they are not covered, and which could give the impression of variability that does not exist. Such is life, but how much impact could errors have on the global temperature average?


First Attribution:
Southern South America Multiproxy 1100 Year Temperature Reconstructions
World Data Center for Paleoclimatology, Boulder, and NOAA Paleoclimatology Program

LAST UPDATE: 3/2010 (Original receipt by WDC Paleo)

CONTRIBUTORS: Neukom, R., J. Luterbacher, R. Villalba, M. Küttel, D. Frank, P.D. Jones, M. Grosjean, H. Wanner, J.-C. Aravena, D.E. Black, D.A. Christie, R. D'Arrigo, A. Lara, M. Morales, C. Soliz-Gamboa, A. Srur, R. Urrutia, and L. von Gunten.

IGBP PAGES/WDCA CONTRIBUTION SERIES NUMBER: 2010-031

WDC PALEO CONTRIBUTION SERIES CITATION: Neukom, R., et al. 2010. Southern South America Multiproxy 1100 Year Temperature Reconstructions. IGBP PAGES/World Data Center for Paleoclimatology Data Contribution Series # 2010-031. NOAA/NCDC Paleoclimatology Program, Boulder CO, USA.

ORIGINAL REFERENCE: Neukom, R., J. Luterbacher, R. Villalba, M. Küttel, D. Frank, P.D. Jones, M. Grosjean, H. Wanner, J.-C. Aravena, D.E. Black, D.A. Christie, R. D'Arrigo, A. Lara, M. Morales, C. Soliz-Gamboa, A. Srur, R. Urrutia, and L. von Gunten. 2010. Multiproxy summer and winter surface air temperature field reconstructions for southern South America covering the past centuries. Climate Dynamics, Online First March 28, 2010, DOI: 10.1007/s00382-010-0793-3.

I hope that covers everyone :) The comparison of the GISS Antarctic region versus the temperature reconstruction by all those guys does not look to me to be all that great of a match. Polar amplification due to greenhouse gas forcing can have a large impact on global temperature. Polar amplification due to poor instrumentation can also have a large impact on global temperature. Which is which in this case seems to come down to the poor instrumentation part of the puzzle. Their reconstruction uses tree rings, which are not thermometers, so one would be more likely to trust the instrumentation; that is not always the best choice, though. Trust nothing - verify everything.

This post is just on some of the questions I have on what data should have more weight in determining average global conditions. The long term tree ring proxies do not provide a good range of temperatures, but they should provide a fair indication of what "average" conditions should be.

Saturday, February 25, 2012

The Tropopause and the 4C Ocean Boundary Layers

I have a nasty habit of comparing the Tropopause and the 4C ocean thermal boundary layer in a way that is not very clear. This is mainly due to my looking at the situation more as a puzzle than as a serious fluid dynamics problem. As I mentioned in a previous post, I am looking for a simple back of the envelope method of proving the limits of CO2 radiant forcing to a reasonable level of accuracy.

The main similarity is that both are thermal boundaries with sufficiently large sink capacity to buffer changes in radiant forcing. Their mechanisms are different but the impacts are very similar.

The ocean 4C boundary is a combination of thermal and density mechanisms that result in interesting thermal properties. Warming the 4C boundary from above results in upward convection, which tends to reduce the impact of the warming. The heat loss from the 4C layer has to be from warmer (4C) to colder, but it also has to allow for constant density. If not, there would be turbulent mixing and there would be no 4C boundary layer.

So cooling, or actually maintenance, of the 4C boundary occurs mainly in the Antarctic region, where the air temperature is cold enough to cause the formation of sea ice. This also occurs in the Arctic, but seasonal melting produces less dense fresh water that has to mix, with turbulence, with the denser saltwater. If there were no turbulent mixing, there would be a lens of fresh water constantly present in the Arctic summer. In the Antarctic, much more of the sea ice survives the summer months, so there is continuous replenishment of the 4C maximum density salt water slowly sinking at the southern pole that creates the deep ocean currents. Turbulent warming of the 4C layer in or near the tropics causes rising convection from the 4C boundary layer, which impacts the rate of replenishment from both poles. It is a very elegant thermostat for the deep oceans, laminar replenishment versus turbulent withdrawal.

The Tropopause is similar but different. Non-condensing greenhouse gas radiant forcing balances the conductive, convective and latent cooling response. The Antarctic winter conditions are controlled primarily by the non-condensible radiant effect, which results in a maximum low temperature equal to the amount of non-condensible radiant forcing for that temperature range.

The lowest temperature ever recorded in the Antarctic is about -90C, and that would be the lowest temperature in the Tropopause if it were not for non-radiant energy flux. The average temperature of the Tropopause is closer to -60C, which indicates that the average impact of non-radiant energy flux is on the order of 30C in the Tropopause. That is a fairly large buffer range. In addition to that range, the Tropopause altitude can vary, so for short term perturbations the temperature can drop to -100C, possibly a little lower. The Tropopause temperature cannot decrease much below that because of stratospheric warming due to ultraviolet solar radiation interacting with oxygen in the dry region above the Tropopause.

Non-interactive outgoing longwave radiation, the atmospheric window portion of the spectrum, is less in the Antarctic due to the Stefan-Boltzmann relationship, and should be on the order of 9 Wm-2 at -90C, implying that the actual CO2 portion of the Tropopause limit is on the order of 55 to 60 Wm-2. As more CO2 forcing is applied, the percentage of non-interactive OLR would increase, offsetting approximately 15% of the impact.

As surface temperature increases, the non-interactive response would continue to offset approximately 15% of the non-condensible GHG forcing, and the conductive/convective and latent fluxes would offset more with the changes in temperature, gas mix and pressure. Convection, which is a function of temperature, density and conductive properties, is the non-linear part of the puzzle that causes the uncertainty in a pure energy perspective, while albedo change, surface and atmospheric, just adds a new layer of complexity.


In both the 4C and Tropopause boundary layers, virtually immeasurable changes can have a significant impact on the heat sink capacity of each, and each has an extremely different time constant. More complexity, making this an outstanding puzzle!

So this post hopefully will explain why I compare these two thermodynamic layers as I do, though the mechanisms are very different.

Tuesday, February 7, 2012

The Ideal Gray Body?

Not being able to figure out what the exact value of radiant forcing is in an atmosphere is ridiculous. I mean really! We have pretty good approximations of black bodies, so why so much grief over a gray body, a body with a radiantly active atmosphere? Sounds kind of stupid for so many smart guys working on a simple problem.

Well, since we don't know, I am going to define an ideal gray body. An ideal gray body has a surface emissivity of 1, a perfect black body, and an atmospheric emissivity of 0.5, meaning exactly half of the radiant energy leaving the surface is converted to heat on the way out. Cool huh?

So if the Earth were an ideal gray body, its emissivity would be 0.5. Looking from the surface, if the surface emission was 400Wm-2, then the average radiant layer would be 200Wm-2. Since the radiant energy has to cool above as much as it warms below, the TOA would be 100Wm-2 plus half the energy from the average radiant layer, another 100Wm-2, for a TOA measurable emission of 200Wm-2. Remember, half of the 200 is required to create the 100Wm-2, with half passing through. I went through this with the multi-disc model, so I will add that link later.

So why this crazy brainstorm? Well, since the TOA emissivity is about 0.61, the Earth is neither an ideal black nor an ideal gray body. Since it misses perfection by about 0.11 out of 0.5, about 22% of the energy transfer is not playing the ideal game. If the surface is 400Wm-2, then 200Wm-2 would be the ideal emission to space. If the TOA emission is 240Wm-2, the ideal surface emission would be 480Wm-2. 22% of these would be 44 or 52.8 Wm-2, depending on which is the better baseline value.
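To make the arithmetic in the last two paragraphs easy to check, here is the same back-of-the-envelope calculation in Python. The 400, 240 and 0.61 figures are the round numbers used above, not measured fluxes.

surface = 400.0                              # Wm-2, assumed surface emission
ideal_toa = 0.5 * surface                    # 200 Wm-2 if the Earth were an ideal gray body
toa_emissivity = 0.61                        # approximate observed TOA emissivity
departure = (toa_emissivity - 0.5) / 0.5     # ~0.22, the fraction not playing the ideal game

window_from_surface = departure * ideal_toa  # ~44 Wm-2, surface-to-space window estimate
toa_emission = 240.0
window_from_clouds = departure * toa_emission  # ~52.8 Wm-2, cloud-level-to-space estimate
print(round(departure, 2), window_from_surface, round(window_from_clouds, 1))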

Those both make sense. The 44Wm-2 is approximately the surface to space energy through the atmospheric window and the 52.8 is approximately the cloud level to space energy through the atmospheric window. So the Earth is about 78% gray body and 22% black body.

78% of the surface flux would be 312Wm-2, and since a perfect gray body emits 50% of its surface energy, 156Wm-2 would be the apparent emission of the ideal gray body Earth viewed from space. 156Wm-2 would also be the radiant portion of the greenhouse effect.

The surface is warmer than that though. More energy would have to be transferred to the atmosphere. For the 52.8Wm-2 lost through the atmospheric window, at least half, 26.4Wm-2, would need to be transferred to the atmosphere by....? I would think conduction.

Wednesday, February 1, 2012

Trying to Piece Together Russian Agriculture in Siberia

Thanks to revolts, revolutions and world wars, the history of Russian agriculture in the Siberian region is a puzzle. So this is just a deposit of what I happen upon.

Wikipedia, not a great but easy resource, mentions that over 10 million Russians migrated to the Siberian region following the completion of the trans-Siberian railroad. On average, each was given 16.5 hectares to move into the region. 16.5 million hectares is 0.165 million kilometers squared. This migration took place between 1890 and 1910. This impact does not show up in the Taymyr reconstruction as an increase in optimum growing conditions. There is a small increase in a generally downward trend, but nothing that stands out as significant.

http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2011&month_last=12&sat=4&sst=1&type=anoms&mean_gen=12&year1=1880&year2=2011&base1=1900&base2=1930&radius=1200&pol=reg

There are some interesting points though. While the whole series does not match, the blip does have some backup with local temperatures around Omsk. Also, the Russians were producing a great deal of wheat, inefficiently by world standards, to pay off debt. With the new Siberian railroad, there was probably a good deal of timber harvested for export as well. I will need to check other regional commodities reports for export information.


Unfortunately, the Taymyr tree ring reconstruction stopped in 1970 because, according to Jacoby, they stopped responding to temperature. Well duh! So I need to find the source of the raw data to extend the reconstruction to at least 2000, to see if the 1994 climate shift may show in the tree rings.


That is hard to read, so I will have to break it down or find a better program than paint to convert the chart. Anyway, the stratospheric shift is easy to see, both the mid troposphere and lower troposphere also shifted in a similar ratio. Not too exciting, but I think it is a much longer period shift that I may be able to match with the Taymyr and CET. Pretty iffy, but interesting.

Monday, January 30, 2012

Data Leap Frogging


Remote Sensing Systems (RSS) is one of the groups that use the Microwave Sounding Units (MSU) on board satellites to develop atmospheric temperature products. While the MSU data has its issues, in general it is the best source of atmospheric temperature information we have. In order to build their data sets, they have to weight layers of the atmosphere using filters on the data. The chart above shows the weighting for the four different products.

Starting at the surface, I will call the products A, B, C and D. As you can see, there is considerable overlap between A and B, B and C, and C and D, which would tend to suppress the information when the adjacent layers are compared. Leap frogging would compare A to C, B to D and A to D, to reduce the signal suppression. Not a very complicated thing to do, right?

Then B could be compared to A-C, A-D and B-D to determine the best approximation for a direct comparison to A or C. While a little complicated, it would improve the confidence in the values for each layer.
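Here is a toy Python sketch of the leap frogging idea, differencing non-adjacent layer products so the overlapping weighting functions suppress less of the signal. The four arrays are random placeholders standing in for the real MSU products, not RSS or UAH data.

import numpy as np

months = 120
A = np.random.normal(0.0, 0.2, months)              # lower troposphere stand-in
B = 0.8 * A + np.random.normal(0.0, 0.1, months)    # overlapping layer above it
C = 0.8 * B + np.random.normal(0.0, 0.1, months)
D = 0.8 * C + np.random.normal(0.0, 0.1, months)    # lower stratosphere stand-in

# Adjacent difference versus the leap-frogged ones
pairs = {"A-B": A - B, "A-C": A - C, "B-D": B - D, "A-D": A - D}
for name, diff in pairs.items():
    print(name, round(diff.std(), 3))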

To take advantage of this leap frogging, I have recommended that a Bucky ball shaped model of concentric spheres be used. Again, a little complicated, but by using the center of the Earth as a distance vector and Bucky-shaped areas, much like the sections of a soccer ball, as target areas, a three dimensional model of the common thermodynamic boundary layers of the Earth climate system can be made to better determine the energy flows between layers and sections.

Using the modified Kimoto equation adapted for what I call learning mode, the fungible characteristic of energy flux can be used to more accurately track the various energy flows and energy lost to heat in transit from any point in the model.

Constructed in this manner, the model would be comparable to the Relativistic Heat Equations, as the relative velocity of energy flow could be roughly determined between adjacent thermodynamic boundary layers. A simple concept, not so simple to develop fully, but it should be capable of continuous modification as more of the relationships between boundary layers is learned. Kinda like complex modeling for dummies.

Friday, January 27, 2012

History of Modern Agriculture and Climate Change

I have been messing around with the physics side of the Global warming issue for a while and things don't add up the way I would think. CO2 has a radiant and conductive impact on the physics of the atmosphere, but neither can manufacture energy on their own. So I started looking into land use changes. Land use could allow more energy to be absorbed and less reflected.


The chart above is of the tree ring reconstruction by Jacoby et al. 2006 from samples taken from the Taymir or Taymyr Peninsula in Siberia. Jacoby et al. consider this to be a temperature reconstruction. Temperature plays some role in tree growth, but generally tree rings are an indication of growing conditions. Jacoby et al. mention that after 1970 the tree rings stopped indicating temperature, so the series ends in 1970. This is the divergence problem, which is pretty well known. I would have preferred that they left the final years of data in place, but I am too lazy to go tracking that down.

I plotted only the raw reconstruction data, no smoothing. As I have been known to say, most of the data is in the exceptions or anomalies, not in the smoothed data. Here you can see it in its noisy glory.

I will try to get a cleaner version to show that beginning in approximately 1820, the slope of the noise increases in the positive direction considerably more than the overall slope which is slightly positive. The reason I point this out is the invention of the steel plow.

The 19th century was the dawn of the agricultural revolution. The steel plow, the wheat harvester, the cotton gin were all invented in the late 18th and early 19th centuries. The agricultural revolution kicked off the industrial revolution.

The Taymyr tree ring reconstruction may indicate that agricultural expansion started the warming we now notice with much better instrumentation. Quite a bit of uncertainty, but enough correlation to be interesting. The biggest indication in the tree ring reconstruction is the decrease in the duration of extremes. Temperature will fluctuate, but cleared farmland is valuable and farmers will find a way to get as much in production as early as possible. So in spite of all of the noise in the data, climate appears to be growing more stable. That is an interesting contradiction of the Global warming theory, if true.

This is a very busy chart of the Taymyr data:



There are three regressions added. The light blue, hard to see since there is little change, is the pre-agricultural linear regression. The red is the agricultural regression, starting at approximately 1814. The green is a power regression of the nasty looking green noise.

The nasty looking green noise is double the annual change. It is doubled to make it stand out. All it is, is the year plus 1, minus the year, doubled. The slope decreases slightly over time. I will try to locate the signal processing brain cells I misplaced during the late 70s and play with some nifty filters. For now there is a 25 year average buried in the busyness of the chart.
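For anyone who wants to reproduce that green series, it is just a first difference doubled; a couple of lines of Python cover it. The ring_width array is a stand-in for the Taymyr reconstruction column.

import numpy as np

ring_width = np.random.normal(1.0, 0.15, 900)          # placeholder annual values
annual_change_x2 = 2.0 * np.diff(ring_width)           # the year plus 1, minus the year, doubled
running_25yr = np.convolve(ring_width, np.ones(25) / 25, mode="valid")   # the buried 25 year average
print(annual_change_x2[:5], running_25yr[:3])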

I may take Tonyb's CET and attempt a splice of instrumental to Taymyr tree rings :)

Sunday, January 22, 2012

More on What the Heck is Down Welling Long Wave Radiation

Probably the most misunderstood perception of mine and many others in the Climate Change debate is the concept of Down Welling radiation or back radiation caused by the greenhouse effect. Per the second law of thermodynamics, heat flows from warm to cold. In truth, NET heat flows from warm to cold, so a colder body cannot physically warm a warmer body. The colder body can reduce the rate of cooling of the warmer body so that it would be warmer than if the colder body were not there.

Some people tend to get carried away trying to tweak the second law by stating that a photon traveling in a random direction can be absorbed by a warmer body after being emitted by a colder body. Quite true, but in the process, the colder body would absorb more photons from the warmer body, because the warmer body is emitting more directionally random photons. Net flow will be from warmer to colder, period.

On the surface of the Earth, photons from the colder sky do impact the surface on rare occasion. More frequently they impact molecules closer to their physical location and temperature. Since the sky has a temperature, it emits photons and a large percentage of those photons travel toward the warmer surface. That direction of travel is called Down Welling Long Wave Radiation (DWLR), but where that DWLR impacts changes with atmospheric conditions.

Why many disagree or misunderstand my perception of DWLR, is the result of my choice of a thermodynamic frame of reference. I live on the surface, the surface is my frame of reference. So how can this be controversial?

The base line for determining the magnitude of Greenhouse Effect (GHE) is an Earth with no Greenhouse Gases (GHGs). That thought experiment Earth still has an atmosphere and still has an albedo, or reflection of incoming solar energy. My visualization of that Earth is a semi-solid sphere surrounded by a more fluid atmosphere. Since there is no GHE, both the surface and the atmosphere would emit radiation, only less of the surface radiation would be absorbed by the atmosphere. The atmosphere, which would have a high viscosity since its rate of radiant cooling would be lower than the surface, would be nearly isothermal or approximately the same temperature at every level due to conductive heat transfer from the surface to the atmosphere.

Standing on the surface of the no GHG Earth, you would measure the same temperature in all directions. That temperature would be approximately 255K degrees, which would emit approximately 240Wm-2 in all directions. That is the no GHE zero DWLR value. If you prefer, the 240Wm-2 would be the background radiation value. That is unique to my frame of reference. A Top of the Atmosphere (TOA) reference would make assumptions that the thickness of the no GHG Earth atmosphere is negligibly small. I disagree.

With GHGs, the surface temperature on average is about 288K degrees with an energy flux of approximately 390Wm-2. The surface impact of the GHE would be 390Wm-2 minus 240Wm-2 or 160Wm-2. Since the source of that DWLR is not the surface, but some point in the atmosphere, the source value of DWLR would be greater than 160Wm-2, if it is indeed a true source of reflected energy averaged over the entire atmosphere. Depending on the altitude of that source of DWLR, the value would vary.

Since the source is likely in the lower atmosphere, near the average mass of the atmosphere, I estimate the average DWLR magnitude as approximately 220Wm-2. That value is based on my estimate of the average emissivity between the surface and that point, 160/220 equals 0.73 or the average emissivity from the DWLR source to the surface. There is no perfect energy transfer, so there will be energy lost in the transfer, if DWLR is a true source of energy.

Since we are comparing a real world to a thought experiment world, the real world values would have to accurately compare to the thought experiment values, or apples and oranges are being mixed.

Many choose a TOA frame of reference. From that perspective, the emissivity is approximately 0.61, which would require 160/0.61 = 260Wm-2 of GHG produced DWLR, or as some seem to think, a nifty power series manipulation to arrive at approximately 330Wm-2 at some point near the TOA.
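The two frames of reference reduce to a few lines of arithmetic. In this Python sketch the 220 Wm-2 source value and the 160 Wm-2 surface impact are the estimates used above, taken as given rather than derived.

sigma = 5.670e-8
no_ghe_flux = sigma * 255 ** 4        # ~240 Wm-2, the zero-DWLR baseline
surface_flux = sigma * 288 ** 4       # ~390 Wm-2 with greenhouse gases
surface_ghe = 160.0                   # Wm-2, surface impact of the GHE as used above

surface_frame_emissivity = surface_ghe / 220.0   # ~0.73 in the surface frame of reference
toa_frame_dwlr = surface_ghe / 0.61              # ~262 Wm-2, rounded to ~260 in the TOA frame
print(round(no_ghe_flux), round(surface_flux),
      round(surface_frame_emissivity, 2), round(toa_frame_dwlr))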

As long as the demands of the choice of frame of reference are carefully met, either can produce accurate results. Mine, IMHO, is easier to maintain, more flexible and provides more information for the actual surface.

Part of that information is the tropopause temperature. In order to warm the surface, the GHGs would have to cool the tropopause. Increasing the surface temperature by 33C from 255K to 288K would decrease the tropopause from 255K to 222K, which would be approximately -51 C degrees, about the range of the average tropopause temperature after allowing for all the approximations. In order to meet the requirements of conservation of energy, 240Wm-2 of DWLR would be the maximum limit of the GHE. That would be the equivalent of perfect insulation by all GHGs. In order to exceed that limit, the tropopause would have to start warming the stratosphere. While that is possible, the amount of additional GHGs available appears to limit that possibility. That is what I am working on: what is the realistic limit?
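The bookkeeping in that last paragraph is a mirror image, so a short sketch makes the limit explicit. The 240 Wm-2 ceiling is the perfect-insulation case described above.

baseline_T = 255.0                    # K, no-GHE surface and tropopause temperature
surface_T = 288.0                     # K, observed average surface temperature
warming = surface_T - baseline_T      # 33 K added to the surface
tropopause_T = baseline_T - warming   # 222 K taken from the tropopause, about -51 C
max_ghe_dwlr = 240.0                  # Wm-2, conservation-of-energy limit of the GHE
print(warming, tropopause_T, round(tropopause_T - 273.15, 1), max_ghe_dwlr)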

Saturday, January 21, 2012

The Speed of Second Sound

While climate change junkies quibble over which theory is the most fun to bash or support, I keep thinking no one has proposed a method that can even come close to ending the controversies. This has led me to what is either outside-the-box creative problem solving or the nut house. Maybe a little of both. Relativistic Heat Conduction, RHC, is the part most people think is nuts.

One of the controversies of RHC is the speed of second sound, the limit on the rate of phonon flow in a medium. The phonon is a hybrid thermal quantum, stuck between being a photon and an electron, only a little on the slow side. So RHC modified the heat equations to include a C squared term, just like the big boy relativity equation with the little c squared term, where the little c is the speed of light.

Photons and phonons are different in that photons are accepted as being real, though theoretically difficult to describe, while phonons are just plain theoretical, though their existence would simplify things.

Real or not, a quantum of thermal energy is something that can be useful. The speed limit for that quantum in a media would be nice to know. The momentum of that theoretical quantum limited by some characteristic of its media would also be nice to know. With some standard for the phonon flow characteristics, derived from basic thermodynamics, we have something that could be directly compared to the real photons.

Real photons leaving the Earth would love to travel at the speed of light, but they can't, even if they are not absorbed by gas, liquid or solid molecules. When they are absorbed, that is a rather dramatic change in velocity. Since a photon has no mass, only momentum in its particle disguise, there is an assumption that absorption energy lost to the molecule is very small with respect to the energy of the photon. With high energy photons, reflection is assumed to be perfectly elastic. It is after all, very small with respect to the energy we can measure.

Low energy photons, are more likely absorbed than reflected. Since energy and mass are related, the massless low energy photon has less masslessness than a high energy photon. The low energy photon's velocity from the surface to space is reduced by a fraction of a second because of the refraction, absorption, emission and collisional transfer of its changing energy and degree of masslessness.

At some point, the theoretical masslessness of the real photon is likely to approach the finite speed of second sound limit of the theoretical quantum called the phonon.

Now here is the fun part, the finite speed of second sound is controlled by the density of the media which is influenced by gravity that has its own theoretical quantum called a graviton. Now wouldn't it be a pip if the low energy photon, phonon and graviton all converge on an energy and degree of masslessness that was finite?

That might even explain some of the other weirdness in the universe. Like solar wind particles getting a boost from a theoretical nearly massless quantum of energy which would produce a higher velocity with one relative impact and a lower but still high enough for escape velocity from another relative impact.

All of this is of course just musings. I do though think it would be fun to build a model based on the Kimoto equation and RHC to see where it might lead. After all, we have a crap load of data, why not have fun with it :)

Tuesday, January 10, 2012

Non Equilibrium Thermodynamics and Climate

Dr. Judith Curry has an interesting post on Non Equilibrium Thermodynamics. This is right up my alley since I have been trying to explain why the potential warming caused by CO2 is half of what is estimated. Maximum/minimum entropy is the controlling range of the atmospheric effect. Perfect insulation would be minimum entropy and maximum cooling would be maximum entropy. So this should at least get a few people on the same page.

The main reason I am considered a whack job is the conductivity impact of CO2 on the atmosphere. As I have mentioned before, the thermal coefficient of conductivity for CO2 is non-linear, peaking at -20 C degrees. The potential impact that has on climate is obvious to me, but not so obvious to others for some reason.

The paper does address conductivity somewhat but only as mass transfer and mixing. CO2 has some impact there, but the main conductive impact is at thermal boundary layers, mainly the ocean/atmosphere but also at the latent/radiant layer as well.

CO2 enhances conductivity at the ocean/atmosphere boundary layer in really two ways. First, actual conductivity, or collision with warmer molecules; CO2 has twice the sensible heat capacity of standard air at -20C and about 5 times at 20C degrees. Second, photon absorption and collisional transfer to other gas molecules.

Of course, CO2 is just another molecule between boundary layers, although it can transport more heat than the other molecules, save water vapor with its latent heat.

The tougher part to explain is the latent/radiant boundary layer enhancement. Here CO2 can only direct half of its absorbed energy generally down via emission, but all of its absorbed energy via collision just as at the surface boundary layer. Both transfers would heat the surrounding gases increasing convection. Should that heat involve a water phase change, there is the major enhancement. This is where I need better Maximum Local Emissivity Variance information.

With the Non Equilibrium Thermodynamics principles in mind, perhaps some may notice that CO2 does have a significant impact on both maximum and minimum entropy.

I am still at a loss on how to explain the 65Wm-2 sink, which I have not yet found a paper that addresses. Should that get resolved, then I may be able to get the modified Kimoto equation somewhat accepted, since it explains pretty much all this crap.

Tuesday, January 3, 2012

Doubling of Carbon Dioxide Does What?

Viewing the typical conversations on the climate science blogs I was struck by the humorous logic used by the doomsayers. Since RealClimate is the ultimate source for all things blog climate science let's see what their logic implies.

Since CO2 lags warming in the Vostok ice cores, realclimate stated that about half of the warming from the glacial to interglacial periods is due to CO2 increasing in concentration. Realclimate are fans of the Arrhenius equation for global warming, where the increase in CO2 has a natural log relationship with temperature. So some value times ln(c1/co) is the increase in temperature due to CO2.

The glacial periods had an average concentration of 190PPM CO2 that increased to about 280PPM at the pre-industrial part of the interglacial. ln(280/190) equals 0.39, so 0.39 times some value would be about half of the warming from glacial to interglacial due to CO2. Since the pre-industrial period, CO2 has increased to about 390PPM. ln(390/280) equals 0.33, which has caused a maximum of 0.8 degrees C of warming. Let's be generous and call that one whole degree and forget about the little ice age. So if 0.33 caused one degree, the multiplier needed for determining the impact of CO2 concentration change using Arrhenius' formula would be 1/0.33 or 3.0, allowing generous uncertainty.

So if we doubled from our present concentration to 780PPM, 3.0ln(780/390) equals 2.07, so warming would increase by about 2.07 degrees. Obviously, CO2 would require a lot of help to generate more than 3 degrees warming from the pre-industrial conditions.

From the glacial to now, CO2 got a lot of help since realclimate says that only about half of the warming is due to CO2. From the glacial until now, 3.0ln(390/190) equals 2.15 degrees of warming. Now we have a few options:

If I used half of the actual increase of about 0.7 degrees instead of 1 full degree for the pre-industrial to now, that would make the impact of CO2 pretty small. So let's use the full observational increase for the industrial age warming of 0.7 degrees. The glacial to interglacial was about half due to CO2, so let's say that during the glacial period the Earth was twice 2.15, or 4.3 degrees, cooler. Thanks to man screwing things up, now is 4.3 plus the 0.7, or 5 degrees, warmer than it was during the last glacial period. So from then to now the factor would be 5 and not 3 for the Arrhenius equation. 5ln(780/190) equals 7.06 degrees warmer at 780ppm than it was at 190 during the last glacial period, including a lot of help of at least half. So with a lot of help, at 780ppm the Earth would be 2.06 degrees warmer than it is today.

So if I use the Arrhenius equation and the estimate by realclimate from the last glacial, there is only 2.06 degrees more warming "in the pipeline" if CO2 peaks at 780PPM. Let's say that man is completely stupid and we increase CO2 to 1000ppm. Then 5.0ln(1000/190) equals 8.30, or 3.3 degrees more possible warming, including the same help that climate received from the last glacial until now. Since climate didn't get that much help from 280 until now, even assuming that the little ice age was the average temperature, do ya think that more than 3 degrees for an increase to 560ppm might be a little bit overestimated?
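For anyone who wants to fiddle with the numbers, the whole back-of-the-envelope above fits in a few lines of Python. The inputs are the round figures from the post, and the factors of 3 and 5 are the post's own, not fitted sensitivities.

import math

def arrhenius_warming(k, c_new, c_old):
    # Temperature change as k * ln(C_new / C_old)
    return k * math.log(c_new / c_old)

k_industrial = 1.0 / math.log(390 / 280)                     # ~3.0, implied by ~1 C for 280 -> 390 ppm
print(round(arrhenius_warming(k_industrial, 780, 390), 2))   # ~2.1 C for a doubling from today

k_with_help = 5.0                                            # factor including the glacial "help"
print(round(arrhenius_warming(k_with_help, 780, 190), 2))    # ~7.1 C above the last glacial
print(round(arrhenius_warming(k_with_help, 1000, 190), 2))   # ~8.3 C above the last glacial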

That is the controversy. Not if CO2 may warm the Earth but how much.

Tuesday, December 27, 2011

Conductivity, Polar Orientation and the 4C Boundary

The coming ice age, whenever that happens, is linked to the increased conductivity of the atmosphere and ocean due to increased CO2 and the orientation of the poles, both with respect to solar irradiation and geomagnetic forcing. Big, bad and bold statement.

The polar oceans are the deep ocean thermostat. The only way that the deep oceans can gain or lose heat is by conduction. The only place on Earth for the deep oceans to lose heat is in the polar oceans. So my bold statement is not out of school, it has to be.

From a CO2 concentration of approximately 190 parts per million to roughly 400 parts per million, the conductivity of the atmosphere increases by about 0.1 percent. Small potatoes on a century scale or less, but over thousands of years it adds up.

Polar orientation varies the rate of cooling of the deep ocean at the 4 C boundary because the Antarctic is a continent. When the geographical and geomagnetic poles are centered over the Antarctic land mass, less cooling is possible. Off-centered orientation increases the area of the 4C boundary to atmosphere interface.

Without geomagnetic forcing, it is obvious that conductivity, polar orientation and the area of the 4C boundary of the ocean, the source of the down welling current that drives global climate, are the millennial scale drivers of glacial cooling. The question still is how much impact geomagnetic variability has on the mechanism.

Friday, December 23, 2011

Now the Complicated Part

The change in CO2 concentration is pretty easy to see. The obvious always gets the blame. To solve puzzles you have to decide if the answer is a simple joke hidden in a complex picture or if the picture is really complex. The Earth climate system is pretty complex.

In Let's Concentrate on Concentrations I tried to show that there is a small change in concentration of CO2 that has a rapid initial radiant impact that decreases with time and a slow conductive impact that increases with time. The solar output has daily rapid changes due to rotation, annual changes due to orbit and obliquity of the axial tilt of the Earth, precessional changes due to wobble around the axial tilt and finally, changes in geomagnetic intensity that vary with the solar cycle and internal dynamics of the molten core. There are layers upon layers of small changes with differing rates of change. That is a pretty complex system. Missing from the debate is the relative rotation of the complex layers, where the Bucky ball shaped model is helpful.

Imagine standing on the surface and looking up to the tropopause. The surface has a radius R and the tropopause has a radius of 2R. The surface is turning at velocity S and the tropopause is turning at a rate of S/2 for simplicity. Whatever energy the surface gets from the sun is 1/2 of what the tropopause gets because of the difference in radius and area, and the tropopause relative to your observation point gets twice as much because it is moving half as fast. All else being equal, our climate is based on these two layers accumulating energy at different rates, different times and at different relative positions. No big deal, right?

Now let's change the sun by a watt. That has a 1/4 Watt impact on the surface and a 1 Watt impact on the tropopause. Because of the relative velocities of these two layers, there is a larger potential difference between the layers than a small change in solar would indicate at first blush.

As with the ink in the aquarium experiment, CO2 has more impact on the change near the surface, which approaches saturation more quickly than the tropopause. We have yet another impact that differs due to the different relative properties of the two layers.

Now I have to go fishing, but which do you think might be impacted more by a change in geomagnetism and solar magnetism, the surface or the tropopause?

The tropopause may indeed respond to fluctuations in the geomagnetic field. Plus the upper troposphere total energy relative to the surface, which would include the velocity of the jet streams, produces a complex dynamic relationship influenced by natural cycles including the solar wind and magnetic orientation as well as TSI fluctuation. This would tend to reinforce Milankovic cycle theory.

The correlation of climate with solar, including geomagnetic, has been done to death. So the next step is figuring out the magnitude of the geomagnetic impact and proving the limits of CO2 forcing yet again.

The impact of the change in atmospheric conductivity with CO2 increase appears to be a good clue for the scale of change required. Improved thermal conductivity with increased CO2 is small, but much more linear than the radiant impact. Since it mainly affects the ocean surface to atmosphere interface in the southern oceans, millennial scale changes in the thermal conductivity approach a balance with other forcing. This would require multidecadal or century scale reductions in solar forcing to increase snow/ice cover to the point where albedo change can continue a cooling trend, allowing more absorption and sequestering of carbon dioxide.

The last little ice age, a century scale event, should be typical of an off-major-cycle cooling response. Major cycles are linked to the Milankovic cycles, which would produce millennial or multi-millennial cooling/warming events.

The best estimate I have to date for solar minimum impact is 0.25 Wm-2 at the surface. That would require 4 to 8 times the length of a solar half cycle, approximately 5 years, to trigger a new little ice age. Now I just have to fine tune that estimate to get a rough estimate of how much geomagnetic change may be required, which is not easy for a variety of reasons. Fun, fun, fun.

Enjoy your holidays!

Tuesday, December 20, 2011

More Let's Concentrate on Concentrations

Continuing on how a small change in a trace gas can have big impacts I want to consider what an approximate 0.03% increase in the overall concentration of CO2 in the atmosphere might do.

The lowest approximate concentration of CO2 estimated for the atmosphere in the last few hundred thousand years is about 190 parts per million (ppm), or 0.000190 CO2 molecules for every one molecule in the atmosphere. Doubling that would be 0.000380 CO2 molecules, which is where we are now. Since the last major ice age, CO2 has doubled.

Arrhenius in his 1896 paper was attempting to prove that CO2 triggered the ice age to warm period transitions. As I have noted before, his final table showed the temperature response over land and water for a change in CO2 from 0.67 to 1.5 times the value of his day, over 100 years ago. Based on his calculations, we are near the peak value of the Holocene. Arguably, we are near the peak value of the Holocene, about the last 12,000 years.

One thing Arrhenius did not address in his paper was how the Earth entered the ice ages. His thoughts were that CO2 was the driver, but what reduced the concentration of CO2 if it was the climate driver?

Callendar, in the 1930s, also pondered the role of CO2 in climate. He determined that the impact of CO2 approached two degrees as the concentration of CO2 approached a doubling, and that the impact leveled off at that point. From his day to now, CO2 is approaching a doubling and the temperature response is arguably approaching 2 degrees.

After Arrhenius' paper, Angstrom commented that CO2 was approaching saturation, meaning that CO2 could not produce the range of temperatures that Arrhenius had predicted. Based on Arrhenius' final table and his unpublished retraction of the range of temperatures he predicted in his 1896 paper, his estimated range of 1.6 (2.1 with water vapor) is in agreement with Callendar, which is in agreement with the saturation Angstrom mentioned, which is in agreement with the current temperature data; all seem to indicate that 2 degrees is roughly the maximum impact of CO2.

Ramanathan, discussed in this post on the Science of Doom, also seems to agree if you carefully consider his work. In the block diagram of the CO2 warming process, he lists the direct impact of CO2 on the surface for a doubling to be 1.2 degrees. Any further warming would involve interaction with water vapor. Since his initial concentration of CO2 was greater than 190ppm, approximately 280 ppm, his results are in general agreement with Arrhenius, Callendar and Angstrom.

For some reason though, current science still disagrees with the idea that CO2 has an impact on climate but that the impact is limited. So how does a guy or gal at home figure out for themselves who to believe?

Why not do an experiment at home? For a simple experiment, find an aquarium that is not currently occupied by living fish. Fill the aquarium with clean water. Tape a piece of newspaper on one side and on the back of the aquarium and read the print. The print should be about the same size on the side and back.

Drop one drop of black ink in the water. After the ink diffuses in the water, read the print again through the aquarium, from the side and from the front.

Now put one more drop of ink in the aquarium and read again.

How much harder was the paper to read after one drop? How much harder was it to read after two drops?

If you want to make the experiment really scientific, measure 999,998 drops of clear water into the aquarium. Once you add the two drops of ink, you have 2 parts per 1,000,000. Now add 188 more drops and you have 190 parts per million roughly.

If you have a photographer in the house, you can use a light meter to measure the amount of light passing through the aquarium. As long as you have a relatively good light source and light meter, you can measure the change per drop or tens of drops and record the results for graphing both from the ends and from the front of the aquarium.

To make the experiment even more scientific, you can use red ink with a red light and green ink with a green light. Then compare one or both of those to the reading of a white light. The measured reduction in intensity for the red and green lights will be greater than for the white light. You have just modeled the atmospheric response to a change in opacity.

To really kick the experiment up to quality scientific standards, repeat the experiment with red and green inks. Add both the red and green at the same times and in the same amounts until you have data to about 400 drops for each ink.

Have fun! After you have all the data you want, plot the data and fit a curve to the changes. What does the curve look like from one end to the other? What does the curve look like from front to back? So you can read from front to back a little better than from end to end?

The end to end represents the surface looking up to space. The front to back represents some point in the sky looking up to space. Think about it.
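If you would rather simulate the aquarium than fill one, a Beer-Lambert style toy model in Python gives the same qualitative curves. The absorption coefficient and path lengths here are invented purely to make the shape visible; they are not calibrated to any real ink.

import math

k = 0.004             # made-up absorption per (drop * relative path length)
end_to_end = 3.0      # long path, like reading through the length of the tank
front_to_back = 1.0   # short path, like reading through the width

for drops in (0, 1, 2, 190, 380, 400):
    t_long = math.exp(-k * drops * end_to_end)
    t_short = math.exp(-k * drops * front_to_back)
    print(drops, round(t_long, 3), round(t_short, 3))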

I will work on an experiment for change in conductivity with more CO2. It will not be as much fun because the plot is boringly straight.

Oh, should you see someone dramatically demonstrate that one drop of ink in pure water makes a big difference, think about how much difference that one drop makes if it happens to be drop number 401.

This brings me to the fun part of the impact of the change in concentrations. Optically, the increase is reaching a plateau. Conductively, it is just a slow boring increase. Each impact has effects on different feedbacks with different time constants. Radiant forcing is quick; in just a decade or two there can be some indication. Conductivity takes a long time to have an impact, thousands of years, maybe tens of thousands of years. Nothing like that in the climate records though :)

Sunday, December 18, 2011

Let's Concentrate on the Concentrations

I can compare apples to oranges. Apples make better crisps and pies, oranges make better cocktails and poultry glazes. A little orange zest goes a long way in an apple popover to give it a little zing with coffee in the morning. Small amounts make things happen in life. Carbon dioxide is a small thing that can have a big impact or not depending on the recipe.

I recently recommended that a couple of bloggers join up and do a post on carbon dioxide. Both are bright and knowledgeable with opinions on climate change. DeWitt Payne has been experimenting with a greenhouse effect experiment. He is meticulous in acquiring data to prove that CO2 enhances the performance of a greenhouse. He will find that it does because it does. That is what he is looking for and he will find it.

I mentioned to him before he started that real greenhouses might be a pretty good place to get ideas. He built his experiment around a box with very well insulated sides and bottom, avoiding cardboard and wood which would have some moisture content that might outgas when heated, complicating his experiment. Along with his direct measurements, he has also included a little work on the radiant physics of CO2 by determining the change in the mean free path of photons absorbed and emitted by CO2. He stated that the change in the mean free path is dependent only on the change in concentration of CO2, which is why I want him to clean his work up and publish it online. Carbon dioxide's radiant properties depending only on the concentration are his apples. My oranges are that the change in concentration of CO2 also changes the thermal conductivity of the air.

The apples and oranges thing makes a big difference in what would be a proper evaluation of the greenhouse effect. There are lots of stumbling blocks that can lead to gotcha moments, the first being what is the initial concentration.

If we were only concerned with the radiant properties, any old concentration would do. But since the end result is to compare the results of the box to the Earth, another initial concentration would be in order.

Ice cores in the Antarctic have the longest record of temperature and CO2 change that we have. Those ice cores indicate that about 190 parts per million (ppm) is the lowest concentration that has been in the atmosphere for at least the past 400,000 years. So I am of the opinion that about 190 ppm is the place to start and the Antarctic is the benchmark for the comparison. Inferring that some other place on the planet does something based on what happened in the Antarctic, without knowing what really happened in the Antarctic, is not all that smart in my opinion. Let's just stick to the apples and oranges before making ambrosia.

So the concentration in the Antarctic has changed naturally from about 190ppm to about 280ppm. Currently the concentration in the Antarctic is about 370ppm due to man doing manly things (womanly would work but they don't like taking credit for screwing things up, at least in my household).

So there are a few things I would like to consider in this experiment: temperatures, conductivity, radiant interaction, and nocturnal performance. The nocturnal performance is something I think is pretty important.

The greenhouse effect is not so much about how hot things can get but about how cold they could get. With about six months of no sunlight and about six months of not very intense sunlight, it just seems logical to me to concentrate on the reason we are not freezing our asses off before we figure out how bad being warmer might be. So with an average surface temperature of about -50C or 223K, how much benefit of the greenhouse effect is the Antarctic getting?

Since this post is going to become a little complicated, bear with me while I take a break to do some cipherin' and try to get rid of most of my worst typos and unintentional misspellings.

Antarctic References?

The Antarctic is a pretty brutal environment for any kind of surveying. Good thermodynamic practice requires starting from a solid frame of reference and not making assumptions without seriously thinking about potential consequences. Averages can obscure important signals, but sometimes they are all you have to work with. When using averages, it is not a bad idea to triple check before announcing you have discovered cold fusion or catastrophic global cooling. With the polar vortex, ozone holes and everything else, about the only constant reference temperature is space, the final frontier, at approximately 2.7 K degrees.

Atmospheric R values, that silly basic reference where temperature and energy flow have a linear relationship, might be useful. With an average surface temperature of 223K versus space, the R value would be 1.57 using the perfect black body emission of 140 Wm-2. This is just to be used as a reference for what should be happening between one point on the surface and space. Things will get complicated, so don't freak out.

If I used the new average surface temperature of 289.1 K in the latest Trenberth and Kiehl cartoon, the R value to space would be 0.723, down from the 288 K value of 0.731 in their older cartoon. Some might think that decrease in R value indicates an increase in conductivity. That of course would be an assumption, which can make an ass out of people. It is something to keep in the back of your mind though. To compare, the older K&T used 235/390 Wm-2 for a 0.602 TOA emissivity and the newer 2009 cartoon uses 239/396 Wm-2 for a 0.604 TOA emissivity. Both are based on global averages, so both should be considered carefully before leaping to conclusions.

There is a difference between emissivity and the R value: emissivity only considers radiant flux, while the R value considers all energy flux. That is the main reason I use R values as a reference, imperfect as it may be.
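For anyone who wants to check my cipherin', here is a minimal sketch of that R value bookkeeping. The assumptions are mine and match what I said above: R is just the temperature difference to the 2.7 K of space divided by the perfect black body flux at the surface temperature, and the TOA emissivity is just a cartoon's TOA flux over its surface flux.

```python
# Minimal sketch of the R value and emissivity bookkeeping used above.
# Assumption (mine): R value = (T_surface - T_space) / black body flux.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, Wm-2K-4
T_SPACE = 2.7     # cosmic background reference, K

def blackbody_flux(temp_k):
    """Perfect black body emission in Wm-2."""
    return SIGMA * temp_k ** 4

def r_value(temp_k):
    """Temperature difference to space per unit of surface flux."""
    return (temp_k - T_SPACE) / blackbody_flux(temp_k)

for temp in (223.0, 288.0, 289.1):
    print(f"T = {temp:6.1f} K  flux = {blackbody_flux(temp):6.1f} Wm-2  R = {r_value(temp):5.3f}")
# comes out near 140/1.57, 390/0.731 and 396/0.723, matching the numbers above

# TOA emissivity from the two cartoons: a radiant-only ratio, not an R value
print("older K&T:", 235 / 390)   # ~0.602
print("newer K&T:", 239 / 396)   # ~0.604
```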

Now back to concentrations.

190/1,000,000 is my choice for the base concentration of CO2. For the radiant impact we can assume that nitrogen and oxygen have little impact on emissivity. Not a bad assumption for the NOCTURNAL condition I recommended earlier. For conductivity, we would assume a base of 0.024 Wm-1K-1 at STP for N2 and O2, which make up basically the rest of the atmosphere. CO2 has a non-linear impact on conductivity. At temperatures colder than STP, its conductivity increases to a peak value at -20 C or 253 K. For the change in emissivity due to CO2, DeWitt assumes that only the change in concentration matters. I disagree, but we have to start somewhere, so for now that is the assumption.

The Properties of Carbon Dioxide list the thermal conductivity of CO2 as 0.086 Wm-1K-1 at -50 C or 223 K, roughly the same as at 293 K (20 C), so for the initial comparison we can assume that conductivity is also only dependent on concentration. At 190 ppm, you have to go to five decimal places to see the CO2 impact of 0.02401, and at 280 ppm that changes to a whopping 0.02402, which for most purposes would be negligible.
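The arithmetic behind those five decimal places is nothing fancier than a straight mole-fraction blend, which is itself an assumption on my part since real gas mixtures call for fancier mixing rules. A sketch using the 0.024 and 0.086 values quoted above:

```python
# Straight mole-fraction blend of the background air and CO2 conductivities.
# The linear mixing is an assumption; it is only meant to show the size of the effect.
K_AIR = 0.024   # N2/O2 background at STP, Wm-1K-1 (value quoted above)
K_CO2 = 0.086   # CO2 at -50 C, Wm-1K-1 (value quoted above)

def mix_conductivity(ppm_co2):
    """Bulk conductivity of air with the stated CO2 concentration."""
    x = ppm_co2 / 1.0e6
    return (1.0 - x) * K_AIR + x * K_CO2

for ppm in (190, 280, 370):
    print(f"{ppm} ppm -> {mix_conductivity(ppm):.6f} Wm-1K-1")
# roughly 0.024012, 0.024017 and 0.024023 -- the change shows up in the fifth decimal place
```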

Obviously there is no reason to consider a change in conductivity, since using the Arrhenius relationship, dF = 5.35ln(280/190) = 2.1 Wm-2 of additional CO2 forcing on the 140 Wm-2 emission from the surface. Assuming twice the forcing impact at the surface, 4.2 Wm-2, for a surface increase in flux to 144.2 Wm-2, the surface temperature would increase to approximately 224.5 K. That new temperature and flux would warm the -50 C surface to about -48.5 C, and the R value would be 1.54 instead of 1.57, a 1.9% decrease which should be noted.

Assuming the current concentration of 370 ppm is indicative of the change from the maximum 280 ppm in the ice cores, conductivity would only increase to 0.024017 at 223 K, while the change in forcing, dF = 5.35ln(370/190) = 3.56 Wm-2, with the same assumptions as before, would produce an average surface flux of 147.1 Wm-2 with an approximate average surface temperature of 225.7 K. So if the Antarctic were its own little planet, nearly doubling the CO2 concentration would produce 2.7 degrees of warming. The new R value would be 1.534, a 2.3% decrease from the 190 ppm value. The conductivity at this point would have increased to a whopping 0.024023 Wm-1K-1, which is an increase of only 0.05 percent.
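Here is the whole back-of-envelope chain in one place, so the assumptions are out in the open: the Arrhenius-style forcing, the assumed doubling of that forcing at the surface, and plain Stefan-Boltzmann for the new temperature and R value. Any small differences from the numbers above come down to rounding and whether the 2.7 K of space gets subtracted.

```python
# Sketch of the estimate chain: Arrhenius forcing -> surface flux -> temperature -> R value.
# The "twice the forcing at the surface" step is the assumption stated in the text.
from math import log

SIGMA = 5.67e-8   # Wm-2K-4
T_SPACE = 2.7     # K
C_BASE = 190.0    # baseline CO2, ppm
F_BASE = 140.0    # black body flux at 223 K, Wm-2

def surface_response(c_ppm):
    forcing = 5.35 * log(c_ppm / C_BASE)      # Arrhenius-style forcing, Wm-2
    flux = F_BASE + 2.0 * forcing             # assume 2x the forcing shows up at the surface
    temp = (flux / SIGMA) ** 0.25             # new surface temperature, K
    r_val = (temp - T_SPACE) / flux           # new R value to space
    return forcing, flux, temp, r_val

for c in (280.0, 370.0):
    forcing, flux, temp, r_val = surface_response(c)
    print(f"{c:.0f} ppm: dF = {forcing:.2f}  flux = {flux:.1f}  T = {temp:.1f} K  R = {r_val:.3f}")
# 280 ppm comes out near 224.5 K, 370 ppm near 225.7 K
```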

This was all estimated assuming that the base conductivity was 0.024 and that only concentration mattered. The Antarctic would have warmed somewhere in the ballpark of 2.7 degrees with the rise in CO2 from 190 ppm to 370 ppm. The Vostok ice cores indicate about 8 degrees of temperature change from about 190 ppm to 280 ppm.

Vostok, where the ice core was drilled, has a range of temperatures from -21 C to -89 C. That is a fairly wide range of temperatures. In kelvins, that range is from 252 K to 184 K, with a black body flux range from 228 Wm-2 to 65 Wm-2. For 7.1 Wm-2 of additional forcing (twice the 3.56 Wm-2 above) to produce about 8 degrees C of change, the temperature would have to be lower than the -50 C used in the estimates above. At the lower temperature, 184 K @ 65 Wm-2, an increase to 192 K @ 77 Wm-2 would be a 12 Wm-2 increase for an 8 degree increase in temperature.
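A quick check of those Vostok end points, using nothing but the Stefan-Boltzmann law (any rounding differences are mine). It also shows how much more flux the same 8 K would take at -50 C:

```python
# Black body flux at the Vostok temperature extremes, and the flux needed for an 8 K rise.
SIGMA = 5.67e-8

def flux(temp_k):
    return SIGMA * temp_k ** 4

print(flux(252.0))                 # ~229 Wm-2, the warm end (-21 C)
print(flux(184.0))                 # ~65 Wm-2, the cold end (-89 C)
print(flux(192.0) - flux(184.0))   # ~12 Wm-2 for an 8 K rise at the cold end
print(flux(231.0) - flux(223.0))   # ~21 Wm-2 for the same 8 K rise at -50 C
```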

This brings us to the question: what are the proper assumptions?

The thermal conductivity of CO2 increases from 0.086 @ -50 C to 0.115 @ -20 C. The impact of a concentration-only change in CO2 forcing increases as temperature decreases. Oxygen at low temperatures exhibits magnetic properties, and Vostok is near the southern geomagnetic pole. At 197.5 K and 1 atmosphere, carbon dioxide can jump from gas to solid and back at will. Last but not least, at 184 K the relationship between thermal conductivity and electrical conductivity may be blurred in a magnetic field.

It would be nice if a simple change in concentration of a trace gas could answer all the questions. It doesn't quite look like it explains whether the Vostok ice cores indicate global climate change or geomagnetic changes in the past, which may be partial drivers of climate change.

Since I was cruising the internet, I ran across a post on Watts Up With That. I take most posts with a grain of salt, but this one did have an interesting graph on the Last Glacial Maximum (LGM), which I will see if I can post here as a link or photo.


Cool. The post is CO2 Sensitivity is Multi-Modal - All Bets are Off, by Ira Glickstein. I haven't checked out the post completely, but the general premise agrees with my line of thinking. I do note that the Arctic part of the plot is not consistent with what I would expect, which would be closer to -6 C on average with a great deal of regional fluctuation because of the Gulf Stream.

The interesting part is the southern temperate LGM temperature change, which I think agrees with the expected temperature and CO2 relationship in the Vostok ice cores, not the Antarctic relationship which I pointed out above. O18 concentrations in the Antarctic are unlikely to be produced locally; they are transported from the southern temperate zone and the southern tropical convective zone. How the ratios are amplified is the question, which I still think is due to the southern magnetic field fluctuations.

But before I wander too far astray, how about More on Let's Concentrate on Concentrations?

Saturday, December 17, 2011

Greenhouse Effect Building Block Experiment

Lots of people have set up experiments to prove or disprove the greenhouse effect. The fact is there is a greenhouse effect or atmospheric effect on surface temperature. I recommended an experiment a while back that no one has taken me up on.

It is really pretty simple. Stackable plastic cylinders with IR transparent faces and football type valves for adjusting gas composition and inserting thermal probes. If you want to spend big money, Pete's Plugs are sized for larger diameter probes and have convenient caps to reduce leakage.

The concept is simple, build a number of test cylinders and test away. You can use three or more cylinders with various concentrations and temperature differentials. Change the order and retest. Amaze your friends! Prove to the world that the "greenhouse effect" does exist.

Actually, this is really not a bad test assembly. You can use a water bath as a source and an ice bath as a sink to test real world temperature ranges. Use LED light sources to adjust simulated solar visual spectrum forcing. Since all of the clear faces would have the same R values and refractive index, just changing the sequence of cylinders would correct for minor variations. Test with insulation and without. Be careful though, some layers might respond differently than you think :)

Oh, put your results online like a real climate scientist should.

Blog Science is Sneaking Up on the Answer

With all the confusion on the internet, with too much, too biased and too inaccurate information, it is nice to see some bloggers and commentators gradually moving toward the solutions of complex problems. DeWitt Payne and Joel Shore are two of the denizens of the climate blog world that have taken on the task of proving instead of accepting theory.

In this comment on Dr Judith Curry's blog, DeWitt is highlighting the exact issue I have with the science. He determines the emissivity of carbon dioxide from the surface to the top of the atmosphere. While that is useful information, the impact of carbon dioxide on the surface is centered in the lower atmosphere. Where carbon dioxide's impact is greatest is also where conductive and convective energy transfer interact to amplify, positively or negatively, that impact.

In the lower atmosphere, the thermal conductivity of CO2 and its mixed gas environment is on the same order of magnitude as its radiant properties. This relationship is non-linear in that conductive and radiant properties vary differently with temperature, and as temperature decreases the two move in opposite directions.

Hopefully, these guys can combine their work and start a technical post so we can get away from all the political BS.

Note: The emissivity of the surface of the Earth is approximately 0.996 due to water being nearly a perfect black body. The effective emissivity of the surface is approximately 0.825 based on the "benchmark" greenhouse effect. That effective emissivity should be on the order of 0.857. That difference appears to indicate the magnitude of conductive/radiant interaction at the surface, which I have not seen considered in the debate thus far.

This emissivity change requires us to Concentrate on Concentrations.

Friday, December 16, 2011

String Theory Gives Me a Headache

String theory is one approach to connect everything to everything. Nothing wrong with that, but I have trouble thinking in four dimensions much less ten. If you think string theorists are warped, you would be right. They pretty much have to be as a job requirement. They are the geek's geek.

In another post I mentioned disc models for wave propagation. Discs were used by Planck, Stefan, Boltzmann, Maxwell and most everyone in the day. That started because of a scientific challenge. I forgot the guy's name, but he challenged everyone to tell what was coming out of a hole in a furnace. A hole is pretty much a disc, so electromagnetic models are based on discs.

I don't have a problem with discs. Like a coin though, they have two sides. So a disc based radiant model should consider both sides of the coin. Once you take the disc out of the furnace wall, the infinite radiation source, you can apply conservation of energy to the disc model. That gives the flip side of the disc some meaning.

With the simple disc model, allowing for the flip side energy required, you get the classic factor of 2 for the maximum interaction between discs. I went through that in Fun with Radiant Disc Models, which, if anyone cares to decipher my scatterbrained logic, kinda proves that the maximum impact of the atmospheric effect is two times the difference between the surface energy flux and the top of the atmosphere (TOA) energy flux. With roughly 390 Wm-2 at the surface and 240 Wm-2 at the TOA, 150 Wm-2 is the difference, so 300 Wm-2 would be the maximum.
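So the 2 doesn't look like it was pulled out of a hat, here is the textbook single-slab version of the two-disc balance. This is a simplification of mine, not the exact difference-based statement above: one disc is the surface, the other a gray atmospheric layer, and accounting for both sides of that layer is where the factor of 2 comes from.

```python
# Single gray layer (two-disc) balance sketch -- my simplification, not the full
# multi-disc model. The layer absorbs a fraction "a" of the surface emission and
# radiates equally from both sides, so half of what it absorbs comes back down.
# Assuming the sunlight absorbed at the surface equals the OLR, the surface balance
# gives: surface_flux = OLR / (1 - a/2) = 2*OLR / (2 - a).
def surface_flux(olr, layer_absorptivity=1.0):
    """Surface emission for a single gray layer with the given absorptivity."""
    return 2.0 * olr / (2.0 - layer_absorptivity)

olr = 240.0                        # TOA flux, Wm-2
print(surface_flux(olr, 1.0))      # 480 Wm-2 -> the hard ceiling, exactly 2 x 240
print(surface_flux(olr, 0.78))     # ~393 Wm-2, close to the 390 quoted above (0.78 is illustrative)
```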

I also mentioned that value is not necessarily a surface impact, since the atmosphere absorbs and reflects a portion of the energy. This should be obvious in my opinion, that the ratio of atmospheric to surface absorption matters and that there is a physical limit to the atmospheric effect. The Kiehl and Trenberth energy budgets screwed that up, so there is a great deal of confusion about what the atmosphere can and cannot do.

To get everyone on the same page, K&T has to go. Face it, it is just a cartoon, and while funny in a geeky kinda way, not very helpful. They are comparing combined flux impacts to just radiant energy theory and double counting some things and omitting others. That is a CF if you know what I mean. FUBAR for the guys with military experience.

So for all the wattabe redneck theoretical physicists out there, think about multi-disc radiant models and the 2 times thingy. Twice, half, about half, most (as in maybe greater than 50%), and multiples of any of those are all possible evidence in non-linear dynamic systems of changes in response to some variable. You can call that variable a forcing, a feedback or a thing-a-ma-jig.

For the title of the post, an infinitely long multi-disc model is like a string. In one dimension it would be a dot, in two a disc, in three a cylinder and in four a rope or string. Like I said, more than four gives me a headache, so I am going to try understanding the first three, then add the fourth. If there is enough Excedrin around, then I may ponder another. Right now, four should be enough to get a grip on the climate puzzle.

Thursday, December 15, 2011

Climate Change and Fishing Summit - Solar and Sensitivity

Dynamic systems are interesting critters. From personal experience in HVAC, systems can be described by system performance curves. Under certain conditions you can expect certain performance. A fan curve is a very simple representation of a dynamic system.

Ideally, a fan follows the rules so that more RPM means more CFM, which produces a squared increase in static pressure, requiring a cubed increase in brake horsepower. Those are the fan laws. Centrifugal airfoil fans are high efficiency fans that use airfoil shaped blades to take advantage of the same physics that allow airplanes to fly.
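For anyone who has not lived with the fan laws, here is a sketch; the baseline numbers are made up purely for illustration.

```python
# Fan affinity laws: flow scales with RPM, static pressure with RPM^2,
# brake horsepower with RPM^3.  Baseline values below are hypothetical.
def scale_fan(cfm, static_pressure, bhp, rpm_ratio):
    """Return (CFM, static pressure, BHP) after a fan speed change."""
    return (cfm * rpm_ratio,
            static_pressure * rpm_ratio ** 2,
            bhp * rpm_ratio ** 3)

# Example: a 10 percent speed increase on a made-up 10,000 CFM fan
cfm, sp, bhp = scale_fan(10_000, 2.0, 5.0, 1.10)
print(f"CFM = {cfm:.0f}, SP = {sp:.2f} in. w.g., BHP = {bhp:.2f}")
# CFM = 11000, SP = 2.42, BHP = 6.66
```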

Not to pick on this fan design, they work great, but they tend to have unstable operating regions. Unstable operating regions are areas where very small changes in static pressure produce a much larger change in delivered airflow. In a simple installation, the fan speed or air distribution system can be adjusted for the correct flows and energy efficiency. In a more complex system design with controlled air distribution devices, Variable Air Volume and Controlled Constant Volume devices adjust to maintain their desired settings, changing the static pressure or the performance curve of the fan system. This can cause unwanted feedback, creating strange and awe inspiring oscillations in the system where ceilings start falling, roofs blow off, and doors and windows slam shut and open. Generally, the customer is not particularly happy when this kind of stuff happens, but it can be entertaining. The point to take from that little real world example is that non-linear dynamics can turn small changes into big deals in a heartbeat.

Our climate system is a nonlinear dynamic system. As such, under certain situations, small changes, like solar variation, can make big changes.

The correlation of solar intensity to climate conditions is an indication of the impact of solar variation on climate. It is not a perfect correlation and it should not be in a non-linear dynamic system. There are numerous feedbacks that at different times would have different impacts. When a few of these feedbacks get together or synchronize, the impact is much greater than some would expect.

A good rule of thumb in a non-linear dynamic system is that any feedback can have twice its normal impact. Two feedbacks synchronized would also have twice their expected impact.

Climate Scientists seem to think that there is A climate sensitivity, which basically proves that they have no clue what they are doing. This is a sad reality: the guys in charge of saving the world from ourselves are clueless if they do not recognize the role of non-linear dynamics. Which is all too obvious, because increased CO2 changes the dynamics of the system, producing a shift in the range of sensitivity, not a specific new sensitivity. That is why the Antarctic is not warming and the Arctic is warming like a bitch.

Some call non-linear dynamic systems chaotic. Most of the more clueless climate scientists don't like the term chaotic, because it implies they placed their bets on the wrong theoretical horse. They did, if they are expecting any linearity in a climate system.

Wednesday, December 14, 2011

What the Heck is Actually Happening?

The Antarctic puzzle is a good one. Obviously, due to much lower surface temperatures, ice and near freezing sea water, radiant and conductive heat loss would be less than at any other location on Earth. The Antarctic should gain heat from the tropics as the tropics warm. The increase in conductivity, thanks to the non-linear properties of carbon dioxide, would explain part of the lack of warming by allowing more heat transfer from the surface without transferring as much heat to the lower atmosphere. At the point where radiant transfer of energy becomes more significant than conductive transfer, the atmosphere should warm more significantly. That appears to be the case even with the rather poor data. But what the heck is happening with the CO2 concentration of the atmosphere?

There can be increased absorption of CO2 by the Antarctic ocean, but it would seem that would be more variable. There is a hint that the geomagnetic field plays some role in the stabilization of both temperature and CO2 concentration, which brings me back to a chemical impact, likely CO2 and O3 getting together either on their own or in combination with some other molecule to do some magnetic/electric stimulated reaction.

In plasmas, CO2 and CH4 enhance conductivity. The same basic interaction could be happening under Antarctic conditions, but in more of a law of large numbers kinda way. So how would I approach determining the magnitude of a weak magnetic field enhancement of a chemical reaction in an environment with crap for quality data?

I hate time series analysis, but maybe the ozone concentration versus mid troposphere temperature may offer some clue for an approach? I still doubt the energy of the chemical reaction will be significant, but it may link to a better indication of thermal/non thermal flux interaction. Weird stuff is happening.