New Computer Fund

Tuesday, November 29, 2011

Thar be Monsters!

Web got his analysis posted on Dr. Curry's blog. Good job, BTW, with the energy well.

He hits the nail on the head with the latent heat of fusion boundary. The latent heat of evaporation should come along soon. Basically, it all boils down to available energy. The 390Wm-2 at 288K is the stumbling block. With 79Wm-2 average latent, the effective global average radiant energy is 311Wm-2, not 390Wm-2, which appears not to be properly considered. More CO2 forcing would increase the latent flux, reducing the available energy by shifting the effective average surface radiant layer upward. Pretty simple really.
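For anyone who wants to check that arithmetic, here is a quick sketch assuming unit emissivity and the flux values quoted above:

```python
# Sketch of the "available energy" point above, unit emissivity assumed.
SIGMA = 5.670373e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(T):
    """Ideal blackbody flux (W m^-2) at temperature T (K)."""
    return SIGMA * T ** 4

def temp(F):
    """Equivalent blackbody temperature (K) for flux F (W m^-2)."""
    return (F / SIGMA) ** 0.25

surface = flux(288.0)            # ~390 W m^-2 at the 288 K surface
latent = 79.0                    # average latent heat flux from the post
effective = surface - latent     # ~311 W m^-2 of "available" radiant energy
print(round(surface, 1), round(effective, 1), round(temp(effective), 1))
```

The 311Wm-2 of available radiant energy works out to an effective radiant temperature of roughly 272K, a fair distance below 288K.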

Oh, for the guys at RealClimate, it is not genius, it is common sense. I always thought there was a difference, maybe not.

Fishing for Confirmation, Among Other Things

Very nice day fishing yesterday. Jim and family from Michigan experienced a day in my back yard fishing for Spanish Mackerel mainly, with enough Snapper volunteering for dinner to make a good day. The one that got away remains unidentified, but Cobia is suspected.

Webhubtelescope, the pseudonym of one of the Curry blog denizens, is fishing for climate sensitivity and natural variability. He appears to be a good angler for those tasks. With the Antarctic ice cores he found a range for natural variability of ~0.2C. When I asked him if 0.4C was possible, he stated it was, but only at a 5% level of confidence :) He has a Schmittner analysis here to check the estimated range of climate sensitivity.

He is now using the Greenland ice cores to do the same thing, but has yet to finish. When he is finished, he should find the range is at least twice the one he found for the Antarctic. If he had access to equatorial alpine cores, he would find a range lower than the Antarctic. If I am right, the root mean square of the three changes would be the approximate global average, placing natural variability in the range of 0.3 to 0.4C globally, or half, maybe a little more, of the modern warming. But equatorial ice cores are pretty rare.
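A toy version of that root-mean-square combination, with the Greenland and equatorial ranges as purely hypothetical placeholders (only the ~0.2C Antarctic figure comes from Web's result):

```python
# Hypothetical illustration of the RMS idea above. The "greenland" and
# "equatorial" values are assumed placeholders, not Web's actual results.
ranges_C = {"antarctic": 0.2, "greenland": 0.4, "equatorial": 0.1}

rms = (sum(r ** 2 for r in ranges_C.values()) / len(ranges_C)) ** 0.5
print(round(rms, 2))
```

With those placeholder numbers the RMS lands around 0.26C, near the low end of the 0.3 to 0.4C guess; the real answer obviously depends on what the Greenland and equatorial cores actually show.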

So Web will probably determine that the range of natural variability is 0.x and there is only a 5% chance that that range will be exceeded. Five percent of 100,000 is 5,000, and 5% of that is 250. So with great confidence, natural variability could only exceed the estimates determined by ice cores for about 250 years. Very useful information :) Compare that to the typical frequencies of a non-ergodic system and what would you get? So if there is only a 5% chance or so that we could have a climate like we are experiencing now, what are the alternatives?

Sunday, November 27, 2011

The Surface-Tropopause Connection

Previously I did a three disk model with 2T as the source and T as the sink, then inserted another disk. I was going to add another disk representing CO2, but my spreadsheet to calculate the CO2 spectrum at lower temperatures is losing the fight with my urge to nap frequently following typical seasonal gluttony.

Since I forced myself to stay awake for a football game that deserves a nap, I figured I would lay out a slightly different model using the average tropopause versus surface.

So T this time is 288K @ 390Wm-2 and the tropopause will be approximately 213K, or -60C, at 117Wm-2; that works out to 0.74T @ 0.3F (213/288 = 0.74 and 117/390 = 0.3). As before, this is a night-time model, so we don't have to deal with solar absorption in the atmosphere.

The surface disc in this case is massive compared to the Tropopause disc and the CO2 disc. If I insert the CO2 disc at some temperature between the source and sink, it will stabilize at a temperature and flux that should represent the atmospheric effect of CO2 to some degree. Note that the Tropopause that does exist is stabilized at 0.3F.

When I insert the CO2 disc, it absorbs from the source and returns to both source and sink equally. If the CO2 disc absorbs 117Wm-2 plus 3.7Wm-2, then its initial effective temperature would be 214.8K @ 120.7Wm-2. It would return half, or 60.3Wm-2, which, since the source is massive, would be returned in the same amount. However, since the CO2 disc can absorb at most 120.7/390 or 30.9% of the broad source emission, of the returned flux it will absorb 18.8Wm-2, returning 9.4Wm-2 the second time around. So, 120.7 + 60.3 + 18.8 + 9.4..., we will say it approaches 210Wm-2 for a round number. The flux from the source not absorbed by the CO2 spectrum would pass through to the Tropopause, so there may be some slight change to the Tropopause temperature. The Tropopause is fairly stable, so just for grins we will assume its change is negligible for the moment.
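Summing the terms actually derived in that paragraph (120.7 absorbed, half of it, 60.3, returned, then 18.8 re-absorbed and 9.4 returned) shows how quickly the series closes in on the round number:

```python
# Summing the absorbed-and-returned terms described above for the CO2 disc;
# later terms shrink fast, so four terms get close to the 210 W m^-2 figure.
terms = [120.7, 60.3, 18.8, 9.4]
total = sum(terms)
print(round(total, 1))
```

Four terms give about 209.2Wm-2, and the remaining tail of the series is small, so 210Wm-2 is a reasonable round number.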

If this happens to be the stable temperature of the CO2 disc, then it is at an equivalent temperature of 246.9K @ 210Wm-2, which is -26C. So the addition of 3.7Wm-2 worth of CO2 forcing would shift the effective radiant temperature of some layer of the atmosphere to approximately 246.9K.

This is where the monster be! It is assumed that the 3.7Wm-2 additional forcing would produce approximately 1 to 1.2 degrees of surface warming. If this layer of the atmosphere were initially at 206.3Wm-2 (245.6K), then the added 3.7Wm-2 produces a 1.3K increase in the temperature of this layer. Since the source initially saw a sink at 117Wm-2 with a layer at 206.3Wm-2, and now sees that intermediate layer shifted to 210Wm-2, it is seeing 3.7Wm-2, which it politely returns in full from a different location. The location shifted from 245.6K to 246.9K, so what would really be the impact on the source? That would totally depend on the layers between this new CO2 effective radiant layer and the source.
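Converting those flux values back to equivalent temperatures with full Stefan-Boltzmann (unit emissivity assumed) confirms the general picture, with some small rounding differences from the figures quoted above:

```python
# Equivalent temperatures for the 206.3 -> 210 W m^-2 shift discussed above.
SIGMA = 5.670373e-8  # W m^-2 K^-4

def temp(F):
    """Equivalent blackbody temperature (K) for flux F (W m^-2)."""
    return (F / SIGMA) ** 0.25

before = temp(206.3)   # ~245.6 K
after = temp(210.0)    # ~246.7 K
print(round(before, 1), round(after, 1), round(after - before, 2))
```

The exact inversion gives roughly a 1.1K shift, in the neighborhood of the 1.3K quoted; the difference is just rounding in the temperatures.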

The reason I use source is because CO2 can only absorb in its spectrum based on its temperature. The higher the temperature of this layer the greater the potential broadening of the spectrum of the CO2 layer. Layers below this temperature with spectra compatible with CO2, as in other CO2, overlapping water vapor and water in liquid or solid form, can return radiation from the new CO2 layer without as much impact on surface temperature.

So we end up with a variety of local sensitivities to a doubling of CO2 dependent on the radiant spectra of the atmospheric layers between the CO2 and the ultimate source of energy for the CO2 layer to return.

In the Antarctic, where there is little water of any phase in the air, CO2 is the main source of the energy to be returned. With the average temperature of both the source and the CO2 layer well below -26C, the energy available cannot produce the full 3.7Wm-2 anywhere near the surface. Also, with CO2 being the primary source and return, the available spectrum is effectively filtered to the CO2 spectrum for any radiant interaction. This effect is noted by the lack of Antarctic temperature increase with increased CO2.

In the Tropics with the preponderance of water in all phases, the average altitude and therefore temperature of the radiant layers would be more greatly separated from the initial source at the surface. The additional CO2 would interact with clouds and water vapor near saturation producing upper level convection or acceleration of the rate of convection, which I believe has been noted.

Finally, in the Arctic region where surface energy is available, the increase in CO2 can interact with water and water vapor closer to the surface to have a noticeable impact on surface temperatures as advertised. Since available energy is required for CO2 enhanced return, changes in snow and ice cover greatly impact the CO2 enhancement of the "Greenhouse Effect". Which I believe is evident in the ice core and other paleo data in the northern hemisphere. This impact is highly dependent on internal variability of energy and moisture associated with natural weather circulations, increasing the amplitude of the temperature variability in these Arctic and near Arctic regions.

It should be noted that the much greater land use changes made in the past 500 to 1000 years may have enhanced the atmospheric effect sans additional fossil fuel CO2, by reducing the expanse of snow fields which tend to sequester biological carbon. Draining swamps and marshes, building large urban areas, and in general doing the things civilized man does would also impact the available energy and radiant gases that enhance the "Greenhouse Effect".

There is a wealth of sources that deserve mentioning for information gleaned to write this little post. Any and everyone that ever published or thought of publishing anything that may have contributed can request mention, complain that they were not mentioned, or attempt to sue the crap out of me. I will try to eventually get around to providing sources. For now, I think I will just pester someone on some blog somewhere :)

For Lucia

Since it looks like fishing will get postponed until tomorrow, a little condensed version for Lucia may be in order.

The Kimoto Equation, dF/dT = 4F/T, is more elegant than you may think. The 4 is equivalent to using 255K as a reference radiant layer. You can check by using the full S-B: 5.67e-8(255)^4 - 5.67e-8(254)^4 yields 3.74F per T. Exact? No, it is an approximation; for more exact, use 259K to 260K. In either case, it provides a reference flux-to-temperature relationship. Heavy emphasis on Reference.

From any other temperature, a variable needs to be included to balance the equation to that reference. Example: 390Wm-2 - 384.7Wm-2 = 5.31Wm-2, the change in flux from 287K to 288K. 4/5.31 = 0.753, the approximate ideal emissivity between a point at 287K and a point at 259K. So 0.753F/T = 0.753(390)/288 = 1.02, meaning the flux at the 259K point would increase by 2%. 1.02*255.14 = 260.2Wm-2, which has an equivalent temperature of 260.25K. So a one degree change in surface temperature beginning at 288K would produce a 2% change in emission, resulting in a 1.2 degree change at a radiant layer initially at 259K. 287K to 260K would of course produce slightly different results. It is an approximation after all.
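A quick check of that reference relationship, using the same rounded S-B constant as above:

```python
# Checking dF/dT = 4F/T against full Stefan-Boltzmann differences.
SIGMA = 5.67e-8  # the post's rounded constant, W m^-2 K^-4

def flux(T):
    """Ideal blackbody flux (W m^-2) at temperature T (K)."""
    return SIGMA * T ** 4

d255 = flux(255) - flux(254)   # ~3.74 W m^-2 per K, the post's check
d260 = flux(260) - flux(259)   # ~3.96, closer to the "4" in 4F/T
print(round(d255, 2), round(d260, 2))
```

The per-degree flux change only reaches ~4 Wm-2 per K around 260K, which is why 259K to 260K is the "more exact" reference range.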

This would appear to be the classic relationship used to predict that mid tropospheric temperatures would increase by 1.2 degrees per 1 degree change in surface temperature. If the average temperature of the radiant layer in the mid troposphere were 259K, that would be true. Unfortunately, -14.15C is limited globally as a radiant layer temperature, implying the Arrhenius relationship, which appears to be based on 255K, is only valid for average radiant layer temperatures in the range of 255K to 260K. If you refer to Arrhenius' 1896 paper, you will note that his estimates by latitude diverge from reality. Please note that in his final table, 0.67 and 1.5 are not temperatures but changes in CO2 concentration. Assuming 280ppm was the average concentration at the time of his research, 0.67 would be approximately 187ppm and 1.5 would be approximately 420ppm. A quick test of the validity of the Arrhenius equation, since current CO2 levels are approximately 390ppm, would be to compare today's observed temperatures to Arrhenius' predicted temperatures by latitude.

Lucia, I find that the modified Kimoto equation can be a very valuable tool, but only if one realizes that Arrhenius' equation is erroneous since it does not include the required temperature dependence for the average effective radiant layer of the atmosphere in the real world. I believe Angstrom mentioned this to Arrhenius while the Galileo of Global Warming was pondering his role in the master race :)

Just in case Lucia survives the holidays and the skunk invasion.

Saturday, November 26, 2011

Fun with Physics - Toying with RealClimate

I love messing around with people. The worse their sense of humor, the more fun they are to pick on. Really, you need a sense of humor, and inside jokes are often the best :)

The guys at RealClimate have very little in the way of a normal sense of humor. So I love picking on their fallen scientific icon Arrhenius. He was much like they are: humorless, incorrect and overconfident.

So when I do my posts, I prefer to use things that humorless, incorrect and overconfident individuals will blow off because of some preconceived notion. Like the thermodynamic frame of reference.

The thermodynamic frame of reference is the most basic of basics in thermo. That is because it simplifies understanding of the problem by allowing a constant baseline on which other baselines can be formed. These new baselines are boundary layers that may or may not be physical boundaries. Anomalies between the base reference and the selected boundary are information, providing clues to unknown phenomena or calculation brain farts. Brain farts, errors in theory or application, are often interestingly highlighted early in analysis if the proper frame of reference is selected. With an incorrectly selected frame of reference, novel thermodynamic relationships can result in Alice in Wonderland adventures which, while entertaining, are usually incorrect. Perpetual motion or mystery energies are typical Alice adventures.

The neat thing about the selected baselines is that they can be unitless or non-physical as long as they are consistent with the initial frame of reference. Atmospheric R values which imply a delta T/delta F are unitless until defined. I can use degrees K and F=Wm-2, but I could invent ThermZits per phonon as long as I am consistent. Then if in one state I have more ThermZits per phonon than another, there is a reason or clue to be explored. Since my ThermZits have some ambiguous unit, I can define them at the end of discovery or at any time I wish.

Arrhenius, since his sense of humor was suspect, decided he needed a definitive answer. Being a know-it-all who assumed he was part of the master race, ambiguity was not part of his realization of the physical world. He had binary thinking, yes and no; the real world includes maybe. He assumed that since he was such a genius, what happened in his world, Scandinavia, had to happen in the same way in all worlds. So to him there is A climate sensitivity to CO2 due to RADIANT responses based on his world's performance. This is why a real scientist should not make an Arrhenius out of himself :)

Much like Arrhenius, the RealClimate crew are dedicated to their theory, which was yes and no, but now has more maybes than expected. The maybes are clues for the curious but "within the confidence interval of the models" for those that have the utmost respect for the Arrheniuses of the world.

So by noting that Arrhenius may not have been the sharpest tack in the scientific box, one might infer that the followers of his theory may not be as sharp as they consider themselves to be :)

So I am considering ThermZits per Furlong-grain as the new unit for the Atmospheric R values. Which will be based on reflected and unabsorbed photons instead of absorbed long wave photons.

Thursday, November 24, 2011

Weird Relationship?

While setting up a spreadsheet to compare the solar irradiance and temperatures for a few of our neighbors I hit one of those odd things that just drive you nuts.

What I was trying to do was compare the actual temperatures after adjusting for the weird 65Wm-2 oddity I noticed in the Antarctic. Since the rotation of the planets varies considerably, I was just going to use the lit face TSI minus the 65Wm-2 for the unlit face. So for the Earth, 1367.7Wm-2 minus the 65 would be 1302.7Wm-2.

After plugging in the formula to convert surface flux to temperature, things looked pretty normal. Only I noticed that I had used albedo, the reflected energy instead of the absorbed.

1302.7*0.306 = 398.6Wm-2, which I plugged into the S-B, (398.6/5.67e-8)^.25, and got 289.6K as my surface temperature. Obviously, I had not expected the values to be so close to the actual. I had mistakenly used just the reflected portion of the TSI, not adjusted for geometry and not even adjusted for day/night. I was just setting up a baseline value for maximum surface temperature to compare with the other planets, but forgot to use (1-albedo) for the absorbed TSI. Not my first dumb mistake throwing a spreadsheet together. The numbers came out so close though, I thought I might use it sometime just to mess with somebody.
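Redoing that accidental "reflected-only" estimate with exact arithmetic, using the TSI, 65Wm-2 offset and 0.306 albedo from above:

```python
# Recomputing the accidental reflected-only estimate described above.
SIGMA = 5.67e-8              # W m^-2 K^-4, the post's rounded constant
tsi = 1367.7 - 65.0          # lit-face TSI minus the odd 65 W m^-2
reflected = tsi * 0.306      # albedo fraction, used by mistake
T = (reflected / SIGMA) ** 0.25
print(round(reflected, 1), round(T, 1))
```

It lands around 289.6K, within a couple of degrees of the ~288K surface average, which is exactly the coincidence that caught my eye.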

Now if I had not subtracted the odd 65Wm-2, the numbers would have been 418.5Wm-2 and 293.1K. Because of the geometry of a sphere, the 1367.7 would have been divided by 4, yielding 341.9Wm-2 TOA; then I could go through the standard PITA estimates, use the calculated average surface temperature and come up with nearly the same values.

Odd coincidence? I did the same thing with Mars and came up with a surface temperature of about nine K warmer than the blackbody temperature. If the oddity Flux on Mars were 145, then the temperatures would have matched using Mars albedo of 0.25. For Mercury, it doesn't work. With the moon I get an average surface temperature of 224K.

It is probably just a weird situation, but there are a few questions about just how elastic photon reflection might be. If nothing else, it may be fun for doing quick estimates of surface temperature with back of the envelope calculations.

I may go back to the Postma paper to see what he did though. He could have been close to something after all?

The Bond albedo does take into account the geometry, and there is a disagreement between the Mars references for Bond albedo versus visual albedo. This may be more than a coincidence. Venus of course is warmer than would be indicated by the solar irradiation. I have theorized that this is due to iso-conductivity at the surface-atmosphere boundary, so core energy is being more efficiently transferred to the surface and lower atmosphere. This quirk may be a quick way to do what-ifs on the effective radius of the energy absorption layer.

The Amazing Case of Back Radiations

The Science of Doom is one of the better climate science blogs. The posts are a good compromise between readable and technical. You should pay them a few hundred visits. One of their posts, "The Amazing Case of Back Radiation" (parts I and II), is an attempt to reconcile the second law of thermodynamics with climate science. There is no need to reinvent the physics of the second law. One thing missing is that there is more to radiation than thermodynamics.

While pondering my 100K conundrum, it is becoming obvious that magnetic and electrical fields need to be considered at temperatures below 200K, with the 184K range fairly interesting. Thermal and non-thermal radiant effects cross over in this region, causing all sorts of weird and wondrous things to be possible.

One problem with The Science of Doom is they tend to defend instead of investigate. Science is about learning and teaching, they seem to be stuck in teaching mode, or defending ideology. That is a job for preachers and politicians, not scientists.

Just imagine what Max Planck would do with all the modern telemetry. You think he would defend his theories or cream his jeans (or tweed, or whatever he wore) and run around like a kid in a candy store? He would be going ape shit!

So I have absolutely no respect for teachers unwilling to learn. This world is our scientific oyster, let's shuck that baby!

BTW, while it is hard to determine what relationship is most significant, the Crookes radiometer operates on a principle that can be analogous. The Tropopause and 2nd Law From an Energy Perspective is consistent with the interaction of thermal and non-thermal flux in the Tropopause and Antarctic. On a planetary scale, the energy is significant; whether DWLR versus magnetic could be used commercially is questionable, but piezoelectric radiant conversion may be viable. Just musing on implications.

Monday, November 21, 2011

The 100K Conundrum

The Tropopause has piqued my curiosity for quite a while. I have thought of a few ways that the Tropopause could regulate surface temperature. The 100K boundary was not one of them.

With the three disk model, I am able to get a feel for the energy relationship between the surface and Tropopause. As I expected, the Tropopause appears to behave as a regulating energy conduit where energy is transferred rather quickly from higher to lower energy regions, with the poles being the lower energy regions most of the time. The northern pole is unstable, and since it is most isolated from the more stable southern pole, its surface has greater temperature fluctuation. That is a combined impact of variations between the tropopause and the surface that can be in or out of sequence vertically and horizontally. Land mass, the warm Gulf Stream current, and the changing northern Pacific surface temperatures create a chaotic mix of temperature and pressure gradients.

The changes in snow/ice coverage, which change the start and length of growing seasons, produce inconstant changes in albedo and CO2 storage/release. The vast changes in land use just add to these dramatic changes, which may completely or partially mask information that should be available in paleoclimate reconstructions.

The chaotic changes at the northern pole are interesting, but play holy hell with attempts to isolate the true cause of the 100K boundary.

In the Antarctic region, evidence of the 100K boundary is most noticeable. With an average surface temperature of 224K @ 142Wm-2, the approximate flux value of the truly radiant portion of the atmospheric effect, the Antarctic is receiving the maximum benefit of the energy from the tropics on average. With the 100K @ 5.67Wm-2, I sort of expected the Stefan-Boltzmann equation to start falling apart. It may have, since the real boundary may be ~65Wm-2 or 184K. The magnetic signature in the Antarctic thermal flux readings may be real or may be interference with the test instrumentation.
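The flux/temperature pairs quoted there are easy to verify (unit emissivity assumed):

```python
# Checking the Antarctic and boundary flux/temperature pairs quoted above.
SIGMA = 5.67e-8  # W m^-2 K^-4

def flux(T):
    """Ideal blackbody flux (W m^-2) at temperature T (K)."""
    return SIGMA * T ** 4

print(round(flux(224), 1))  # ~142.7 W m^-2 at the 224 K average surface
print(round(flux(100), 2))  # ~5.67 W m^-2 at the 100 K boundary
```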

In the tropics and sub-tropics, deep convection pushes the 100K limit in the Tropopause. These rapid drops in temperature toward the 100K boundary are too short and too localized to be measured by the satellite data. Some balloon measurements to -95C have been recorded, but they also have issues with data quality. Based solely on the three disk radiant model, below approximately 224K @ 142Wm-2 the tropopause would be gaining energy from the lower stratosphere, which would not appear to have the thermal capacity to stabilize these events. That leaves magnetic or electrical energy from the Earth's core or solar electric as sources of the thermal energy. Energy is energy, so this is quite possible. Trying to figure out if, which and how much, though, is not all that simple.

Based on the same three disk radiant ratio, 2:1, 71Wm-2 would be the average radiant layer between the Tropopause and the top of the atmosphere. This is close to the ~63Wm-2, but not close enough to assume the same relationship holds when magnetic flux may be involved.

Interestingly, my minimum emissivity estimate, 0.99652825^(390-63), equals 0.32, very close to the transmittance from the surface at 390Wm-2 average to the 100K boundary flux. This could be absolutely nothing, or an indication that temperature may not be the correct term below 100K. So it is time to research some of those goofy space radio spectra to see where to go from here.
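Taking that minimum-emissivity product at face value (the per-Wm-2 factor and the 390-63 exponent are straight from the estimate above):

```python
# The minimum-emissivity product mentioned above, taken at face value.
t = 0.99652825 ** (390 - 63)
print(round(t, 2))  # ~0.32
```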

Note: the peak emission wavelength at 100K is 28.977685 microns. Approximately 196K has a peak wavelength of about 14.8 microns, near CO2's main absorption band.
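Those peaks follow from Wien's displacement law; the constant below is the one implied by the 28.977685 micron figure at 100K:

```python
# Wien's displacement law for the peak wavelengths noted above.
B_WIEN = 2897.7685  # um*K, implied by the post's 28.977685 um at 100 K

def peak_um(T):
    """Peak emission wavelength (microns) at temperature T (K)."""
    return B_WIEN / T

print(round(peak_um(100), 3), round(peak_um(196), 2))
```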

Non thermal versus Thermal http://www.haystack.mit.edu/edu/undergrad/materials/AJP_pratap&mcintosh.pdf

http://astronomy.swin.edu.au/sao/downloads/HET608-M03A02.pdf

http://en.wikipedia.org/wiki/Rayleigh%E2%80%93Jeans_law

http://en.wikipedia.org/wiki/Wien%27s_displacement_law

http://en.wikipedia.org/wiki/Sakuma%E2%80%93Hattori_equation

On the other front:


Since I am stuck for the moment: http://www.realclimate.org/index.php/archives/2007/06/a-saturated-gassy-argument/

This is an older post on RealClimate about the Arrhenius-Angstrom issue. Angstrom basically said that near the surface, CO2 is virtually saturated, i.e. a doubling or halving of CO2 would have little impact on surface temperature. Pierrehumbert and the author Spencer Weart note that convection moves surface heat up into the atmosphere, where CO2, with less competition with itself and water vapor, could enhance the greenhouse effect. While that is true, the impact of the enhancement cannot easily be transferred to the surface due to the near saturation of CO2 noted by Angstrom. So Angstrom was right.

Weart and Pierrehumbert miss two of the main issues highlighted by Angstrom. The first is that near saturation at the surface limits the radiant impact of CO2 at the surface. The second is more subtle: the temperature of the CO2 limits the energy it can transfer to the layers of the atmosphere below its level of radiation.

Another issue not considered is the impact of CO2 on the conduction of the atmospheric gases. Based on the conundrum above, there is also the possibility of an impact of the Earth's magnetic field on the properties of CO2 at low temperatures, approximately 100K.

It should be simple to adjust models to compensate for the effects of near saturation and emission temperature. There does not appear to be much adjustment made for the conductive impact and no adjustment made for the magnetic impact. The Antarctic performance appears to imply that neither conductive nor magnetic influences should be considered negligible.

Update: When I get stuck I do tend to wander much more than normal, which is totally abby normal and hard for anyone to make sense of, so bear with me.

The 100K boundary is ~5.67Wm-2, which is in the neighborhood of where I expected things to get fun. Things are getting strange at a flux of ~65Wm-2, or a temperature of roughly 184K. Just about any scientist knows about the magnetic properties of oxygen, and there is quite a bit of research on electric and magnetic interaction with O2 at low temperatures. That would mean that if this ~65Wm-2 thing is a real phenomenon, there must be a pretty unusual mechanism. Thermal and non-thermal flux cross over in this range, but astronomers are pretty good at telling the difference when looking at distant objects. So things are getting back into sci-fi mode.

My probable best hypothesis is the Tropopause sink with a magnetic flux impact on the radiant transfer. That may make me totally certifiably a nut job, but it seems like a possibility given the odd circumstances. The ~65 is a rough match of the temperature/flux differences I would estimate for Venus and Mars. There are temperature inversions of -44C with surface temperatures of -74C in the Antarctic, which would be balanced by ~65Wm-2. The flux readings for the Antarctic are off by ~65Wm-2 in some areas, which appear to show the southern magnetic flux field. There is just no mechanism to support such a hare-brained hypothesis. The center wavelength of 28.98 microns is just enough off from things to be a problem, but what if ozone and CO2 in some form can hook up? The ozone hole is not really a hole, just a significant reduction in concentration. The Arctic has its own hole forming at a different temperature and magnetic orientation. It may be crazy enough to pursue.

The Electromagnetic Fields and Climate Change

This was not something I really expected, but hey, it seems to have some substance. The radiant transfer of energy from one body to another has two main limits: the energy of the bodies and the background energy of space. I would have thought that background energy would be related to the temperature of space, but it appears to be the energies in the low end of the electromagnetic spectrum.

While there is a lot of work left to verify this situation, at around 60Wm-2 of energy flux we hit a thermal barrier that was unexpected, to me anyway. A spectrum with an average energy of 60Wm-2 is in the radio range. Based on the Kelvin temperature scale, a corresponding temperature of about 100K. Neat!

If true, this explains the oddities at the south pole. One being that the measured flux readings appeared to show the signature of the southern magnetic field.

Liquid oxygen exists at around 90K and has magnetic properties. That would be a bit of a verification!

The minimum temperature I have seen for the tropopause with deep convective pulses, -95C, lends more support to my Tropopause heat sink theory. See, the tropopause can get no colder than the energy above and below will allow. Stratospheric energy seems to impose this limit: 238Wm-2 divided by 2 is ~119Wm-2, an equivalent temperature of 214K. This is getting kind of exciting :)

Sunday, November 20, 2011

It is so Complicated!!

Not really. It is funny, though, reading different people explaining why we should or shouldn't do this or that. So I thought I would add my silly solution of the week: lose weight!

The average human with a surface temperature of 300K emits approximately 459.3Wm-2. If the average human has a surface area of one square meter, that is 459.3W per human. With about 7,000,000,000 humans on the planet, that is about 3.2e12W of human-contributed warming. The Earth has a surface area of 5.1e14m^2, so people emit about 0.0063Wm-2 from the surface of the Earth, mainly in the northern hemisphere. Obviously, obesity is contributing to global warming! So forget the compact fluorescent light bulbs, lose weight! Less human surface area means less global warming.
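The silly arithmetic, for anyone who wants to check it (blackbody skin and 1 m^2 per person assumed):

```python
# The lose-weight arithmetic above, with the S-B flux computed exactly.
SIGMA = 5.67e-8                        # W m^-2 K^-4
per_person = SIGMA * 300 ** 4          # ~459.3 W for 1 m^2 of 300 K "skin"
total = per_person * 7e9               # all 7 billion of us
per_area = total / 5.1e14              # spread over Earth's surface
print(round(per_person, 1), round(per_area, 4))
```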

A Quick Summary to Date

As I have mentioned before, Greenhouse Theory is not my main thing. All the data gathered for studying the Greenhouse Effect may hold a lot more information than just the radiant impacts of CO2 and the rest of the little photon grabbers. So here is a quick summary for the guys really concerned with the Greenhouse Effect:

CO2 has a radiant impact at a given temperature of one half of the area of its absorption spectrum relative to the emission spectrum of the layers below that temperature. So if you integrate the area of the CO2 spectrum and you know the spectrum it is exposed to you can determine the impact. CO2 is basically a space blanket with a lot of holes in it.

If energy is to be conserved, the radiant impact of the atmosphere, all molecules, cannot exceed twice the temperature of the surface or the energy input to the surface, whichever comes first. Since the surface average emission is 390Wm-2 @ 288K and the TOA emission is ~238Wm-2, 142Wm-2 is the energy limit of the warming (twice that value as both surface warming and Tropopause cooling balance; see next point). Since not all of that energy is emitted from the surface, only the portion emitted from the surface can impact the surface.

Since surface and atmospheric warming has to be balanced by equal cooling, the Tropopause temperature has to be depressed just as far below the average temperature of the atmosphere as the surface is raised above the average temperature of the atmosphere. One should note that the tropopause temperature is fairly stable. That is because horizontal radiant energy transfer within the Tropopause layer stabilizes the temperature to near the average cooling impact of the atmospheric effect.

Spectral broadening at higher temperatures due to collisional energy transfer is a conductive impact with radiant importance. Radiant impact and conductive impact cannot be considered separately.

Latent energy from the surface shifts the apparent temperature of the surface. Since the conductive impact below the latent shift layer is not equal to the radiant impact above it, accurate calculation of the Greenhouse impact at the surface is complicated.

Finally, the impact of the Greenhouse Effect varies regionally due to all these conditions plus available surface energy, i.e. albedo and global circulation of internal energy. Have fun trying to figure that out :)

Saturday, November 19, 2011

The Tropopause and the 2nd Law From an Energy Perspective.

In The Tropopause and Second Law of Thermodynamics Paradox I discussed a simple dilemma using just temperatures between thermodynamic disks. Radiant energy isn't as simple as temperature. There is a fourth power relationship between energy flux and temperature, which creates an envelope of energy, related to temperature, for the radiant spectrum of an object with a given composition. Every element has a unique radiant spectrum, and elements in combination extend that spectrum. A perfect black body would have a uniform spectrum with no gaps or missing spectral lines.

The Stefan-Boltzmann law, E = sigma(T^4), is the formula to determine the ideal energy intensity of an object at a given temperature. That energy is based on one side of a flat disk, not a sphere. In order to use it on a sphere, the geometry of the sphere has to be considered, which produces very accurate results for the observable face. The opposite face of the object has to be assumed to follow the same rules, which is perfectly fine if the object is sufficiently massive.

With gray bodies, which are less than perfect black bodies, the same assumptions have to be applied; they appear adequate, again, if the object is sufficiently massive. Gray bodies with gaseous atmospheres do not always behave completely as expected. How much they deviate from expectations is a major issue.

As with the temperature only example linked above, the same problem can be posed with energy flux instead of temperature with the absorption/emission spectrum of the object considered.

If we let T be 150K, then the flux values for the problem using the full Stefan-Boltzmann constant, σ = 5.670373(21)×10−8 W m−2 K−4, yield F(2T)=459.3Wm-2 and F(T)=28.7Wm-2. So in order to maintain half the temperature, the cooler disk would only have to absorb 28.7/459.3, or 6.25%, of the radiation emitted by the warmer object.
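For anyone who wants to check that arithmetic, here is a quick Python sketch of the two fluxes, using the same Stefan-Boltzmann constant quoted above:

```python
# Stefan-Boltzmann flux F = sigma * T^4 for one face of a flat disk.
SIGMA = 5.670373e-8  # W m^-2 K^-4

def flux(T):
    """Ideal black-body flux at temperature T (Kelvin)."""
    return SIGMA * T**4

T = 150.0
F_cool = flux(T)       # ~28.7 W m^-2
F_warm = flux(2 * T)   # ~459.3 W m^-2

# Flux goes as T^4, so halving the temperature cuts the flux by
# a factor of 2^4 = 16: the cooler disk emits 6.25% as much.
ratio = F_cool / F_warm
print(F_cool, F_warm, ratio)
```

The 1/16 ratio is why working in flux instead of temperature changes the picture so dramatically.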

When we place another object between the two, we determined its final temperature would be approximately 3/2T which would be 225K @ 145.3Wm-2. So the simple temperature ratio used is vastly different when using energy flux.

Since the warmer body for some instant in time will continue to emit at 459.3Wm-2 and the center disk only absorbs 145.3Wm-2, roughly 314Wm-2 would pass through the newly inserted object outside its absorption envelope. The 145.3Wm-2 the warmer object receives from the center disk would be toward the lower energy range of its emission spectrum. If the energy received is within the envelope of the warmer body spectrum, that energy would be absorbed. If not, that energy would pass through the warmer body just as energy outside of the emission spectrum of the cooler disks would pass through.

Since it is likely that the warmer body spectral envelope includes the envelope of the cooler body, we can assume that all the energy is absorbed. In such a case, the warmer body would return approximately one half of that energy, 72.6Wm-2, to the center disk. It would be incorrect to assume that all of this energy would be absorbed, since the spectral envelope of the center disk is much smaller than the warmer disk's.

If the warmer body emits the newly absorbed energy uniformly across its spectrum, the center disk would absorb on the order of 72.6/459.3 or 15.8 percent of that radiation, 13.5Wm-2. We would have a series based on 15.8 percent absorbed which is based on the original spectral envelope of the center disk.

72.6+13.5+1.9+0.36+0.11… or approximately 90Wm-2. The envelope of the center disk would of course increase with temperature changing this relationship, but the law of conservation of energy has to also be considered.

The total energy of the warmer body cannot increase more than the decrease in energy of the other two disks. So the opposite face of the warmer body must decrease as the common face increases.

That changes our series by 50 percent: instead of 90Wm-2, 45Wm-2.
So the warmer disk would increase to approximately 504.3Wm-2 while its opposite face decreased 45Wm-2 to 414.3Wm-2. The total apparent energy of that object would become 918.6Wm-2, which would have an average apparent flux of 459.3Wm-2, or 300K. The apparent temperature of the common face would be 307.1K.
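Those apparent temperatures come from inverting the Stefan-Boltzmann law, T = (F/σ)^(1/4); a quick sketch:

```python
# Invert the Stefan-Boltzmann law to turn face fluxes back into
# apparent temperatures, as done in the paragraph above.
SIGMA = 5.670373e-8  # W m^-2 K^-4

def temp(F):
    """Apparent temperature of a face emitting flux F (W m^-2)."""
    return (F / SIGMA) ** 0.25

print(temp(459.3))  # average face flux -> ~300 K
print(temp(504.3))  # common (warm) face -> ~307 K
```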

What I propose is that this is the condition of the atmospheric effect, prior to the addition of more carbon dioxide, at a location on the surface with an apparent temperature of 307K. In other words, the radiant portion of the atmospheric effect is approximately 7K in regions with a surface temperature of 307K, and the impact would vary with the surface temperature and the effective radiant layer of the atmosphere. In another post I will add a new disk representing carbon dioxide.

Note: This is a little different from Kirchhoff's law. With longwave radiation there is essentially no reflection. What is not absorbed passes through the disk or wall, and of what is absorbed, 50% is returned instead of reflected. The amount returned is increased by the number of disks each returning 50% of what they absorb, which either passes through or is absorbed and returned at 50% again, and the amount absorbed is dependent on the absorption spectrum, i.e. the temperature and/or composition of the disk.

The Tropopause and the Second Law of Thermodynamics Paradox


Most of the relationships developed for radiant physics are based on disks. Flat surfaces with one side analyzed. This started with an original problem of an oven with an observation hole. The inside of the oven was at a constant temperature and the observer determined the characteristics of the radiation leaving the hole. This resulted in a vast improvement in our knowledge of electromagnetic radiation.
Today, we are dealing with radiant physics problems using that same hole in an oven. Not that the data is wrong in any way; we just may not have the same oven or the same hole. When two objects in ideal space are used, for example, typically we use spheres. Then we relate the hole or disk to a sphere, use the classic relationships and call it good. That may not provide all the accuracy needed.

Consider two disks in a vacuum:

Disk one is at some temperature and disk two is at another. The disks have a common face, or one is facing the other, and an opposite face, facing away from the other. The warmer disk is at a temperature 2T and the cooler at a temperature T as observed from between the two disks.

We insert a disk between these two. The new disk is at absolute zero, with an area equal to the two original disks and less mass than either of the original disks. Now we observe what happens.

Since the new disk is at absolute zero, initially it receives energy proportional to 2T on one side and T on the other. Since it initially has no energy, i.e. is at absolute zero, the new disk does not emit any radiation.

As the new disk slowly gains energy, it begins to return energy to the original disks in some proportion to the energy it acquires. Eventually, the new disk will reach equilibrium with the original disks.

Since the new disk does not have an energy source, it cannot add energy to the system, only impact the distribution of energy between the two original disks. Since the 2T disk has more energy, the impact on that disk will not be the same as the impact on the lower energy disk originally at temperature T.

The new disk will absorb at 2T and return half or T to that disk. It will absorb at T and return half or T/2 to that disk. The new disk will pass on the opposite face, T to the cooler disk and T/2 to the warmer disk. The energy passing through the new disk is now 3/2T, its apparent temperature.

The original warmer disk sees a warmer disk than before and the cooler disk sees a disk cooler than before. So the original disks apparent temperature will have to adjust to this new condition.

If the total energy of the system is conserved, then the cooler disk, which is receiving 1/2T worth less energy, will reduce by 1/2T. So it will emit 1/4T less on both faces. The warmer disk, which is being returned 1/2T more, would emit 1/4T more on each face.

Now we have a paradox. While energy is conserved, the opposite face of the cooler disk may not be able to emit less energy or the opposite face of the warmer disk may not be able to emit more energy. There may exist constraints outside of the three disk system.

If the system of disks is in a vacuum at absolute zero, the cooler face of the cooler disk was at equilibrium with zero energy. It cannot absorb or gain energy from a source with no energy. That means that the warmer disk cannot release more energy, since if the cooler disk cannot emit less, energy is not conserved.
In order to conserve energy, the opposite face of the warmer disk must emit less energy. That implies that the warmer disk's opposite face can absorb more energy, or the apparent temperature of all disk faces would have to decrease proportionally to conserve the energy of the system.

An interesting situation, it is kinda like the convective rule of radiant energy.

Thursday, November 17, 2011

Simple Thought Experiment for the Climate Change Gurus

UPDATE: I think I found not only an error in a calculation below but possibly an interesting reason for the error.


You have two flat disks filled with CO2. The containers, or shells of the disks, are transparent to all electromagnetic wavelengths and assumed to have no impact on the skin temperature of the disks. The two disks are placed in a vacuum, one at temperature T and the other at temperature 0.75T. What would be the radiant emission between the two disks?

Since no concentration is given, the only relationship available is the Stefan-Boltzmann equation, 5.67e-8(T)^4 for the disk at T and 5.67e-8(0.75T)^4. If we think in terms of the ratio of energy transfer, then R(T/0.75T)=T^4/0.75T^4 which we could reduce to 1/0.75 = 1.33 for R in one direction and 0.75 for R in the direction of the cooler to the warmer disk.

That is about as simple as math gets, a ratio.

If T is 288, then 0.75T would be 216; the temperature impact of T on 0.75T would be 1.33 something. The temperature impact of 0.75T on T would be 0.75 or less something. The net impact of the temperature of T returned to T from 0.75T would be what? Unity? I don't think so.

The temperature of the disk at T is 1.33 greater than the second disk, but the second disk can only return 50% of the energy it receives from the warmer disk. So 1.33/2 or 0.665 would be returned, but the temperature of the cooler disk limits the energy it can transfer. Also, the warmer disk is in a vacuum and emits in both directions, so only one half of the impact of T would be transmitted to the cooler disk from the energy it receives from the cooler disk.

1.33 is emitted while 0.75 is received initially. This is one of the conundrums of black body radiation. The flux for the two disks would be 390Wm-2 for the warmer and 123Wm-2 for the cooler. At time zero, the total emission of the warmer would be 390+.5*123 = 451Wm-2 and for the cooler disk 123+.5*390 = 318Wm-2, so in effect, T would actually be 298K and the cooler would actually be 274K degrees.
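That time zero bookkeeping can be checked with a few lines of Python. Note the 50% return factor is the assumption stated in this post, not standard radiative transfer:

```python
SIGMA = 5.670373e-8  # W m^-2 K^-4

def flux(T):
    return SIGMA * T**4

def temp(F):
    return (F / SIGMA) ** 0.25

F_warm = flux(288.0)   # ~390 W m^-2
F_cool = flux(216.0)   # ~123 W m^-2

# At time zero each disk emits its own flux plus half of what it
# receives from the other (the 50% return assumed in the text).
E_warm = F_warm + 0.5 * F_cool   # ~452 W m^-2 (quoted above as 451)
E_cool = F_cool + 0.5 * F_warm   # ~318 W m^-2

print(temp(E_warm))  # ~299 K (quoted as 298)
print(temp(E_cool))  # ~274 K
```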

It should be obvious, that the warmer body may be warmer due to the cooler body, but the total energy of both bodies cannot increase simultaneously. So how is conservation of energy maintained?

The net radiation from the warmer to the cooler is 390-123=267. Half of that is returned, 133, of which half is radiated to the other side. So the initial impact of placing the two bodies together in a vacuum would be that the warmer body energy would increase by 1/2 the net energy it transferred to the cooler body. The cooler body energy would increase by the net energy received from the warmer body, and the two bodies would rapidly decrease to the total energy of the two bodies, initially in the same ratio. Since the mass and thermal capacity of the two bodies are unknown, the final values may not be accurately determined. A clue is the difference between the original energy flux and the returned energy flux. The return is 133Wm-2, or 10Wm-2 more than the initial value of the cooler body. Once the situation stabilizes, the warmer body would increase to a temperature equivalent to 400Wm-2, and the cooler body, emitting from both faces, would increase by 20Wm-2 to 143Wm-2. The warmer body would increase to 289.8K and the cooler to 224K, on the warm faces of the disks. Since energy must be conserved, the cooler faces of the disks would have to decrease until the original energy balance is restored.

While that may have been hard to follow, the total energy remains the same. The cooler body gains energy and the warmer body loses energy if there is no other energy source applied.

For the temperature impact: 289.8-288 = 1.8K, a 0.625% increase in the warmer body on the warmer side, and 224-216 = 8K, a 3.7% increase in the cooler body's warm side.

As a rough estimate, 1.8 degrees of warming at a surface of temperature T would be expected if we added a body at 0.75T of the surface temperature in a vacuum. We don't live in a vacuum.

There is one relationship that may be useful, 8:1.8 and 20:10 for 0.75T, due to 50% of the returned minus the initial net flux.

Since CO2 filled perfect disks are cheap in thought experiment world, let us insert a new disk between the two which are now somewhat stabilized but in no way completely described, i.e. no thermal mass or exact energy value. The new disk is at 0 K degrees. That would be a pretty expensive disk if it were real.

The new disk is receiving radiant energy from the warmer surface of the warm disk with an apparent temperature of 289.8K and on the other side it is receiving energy from the warm side of the cooler disk with an apparent temperature of 224K degrees.

At 0.0K, the new disk has no energy to radiate; it can only absorb. The total energy of all three disks combined cannot increase, I am too cheap to think up a power source, so the three will eventually reach equilibrium sharing the total available energy.

Now it would get to be fun. Since the first two disks are in some approximation of equilibrium, you can assume that there is a mass relationship between the two original disks. The apparent mass of the warmer would be on the order of 8:1.8 times greater, and the apparent energy of the warmer would be 20:10 greater with respect to the interacting disk faces. With nearly 8 times the apparent mass, the actual energy would be greater than 2 to 1, but the apparent energy is only 2 to 1. That is because there are other sides to both disks not restricted in the same manner as the interactive faces.

Now we have a new disk that would not have that luxury. Its energy is completely dependent on two unknown thermal masses.

Eventually, the new disk will return some value between 133 and 123 to the warmer disk and some value between 133 and 200 to the cooler disk. This means that the new disk will eventually be cooler than the warmer and warmer than the cooler, but the apparent temperature of the warmer must decrease if energy is to be conserved. Since the new disk is absorbing energy that was warming the cooler, the apparent temperature of the cooler would also decrease.

If the mass of the new disk were large with respect to the warmer, the energy of the new system would remain the same for some moment in time, but the temperature of the system would approach 0, the temperature of the new disk.

If the mass of the new disk were negligibly small with respect to both disks, the apparent temperature of the new disk faces would approach the temperature of the adjacent original disk faces.

What if the mass of the new disk was the same as the cooler disk? The temperature of both the warmer and the cooler would decrease to warm the new disk, the temperature of the warmer would decrease at a ratio of 1:8 with respect to the original cooler disk. The temperature of the new disk would increase 20:10 or 2:1 with respect to the original cooler disk. Or the new disk would have twice the energy as the cooler disk since it would transfer only half the energy it receives from the warmer disk. Eventually, the new disk would be exactly the temperature of the original cooler disk less half of the energy transfer from the new disk to the cooler disk which would be 123/2 at a maximum. The maximum temperature of the original cooler disk would be 181K, or the temperature equivalent to a flux of 61.5Wm-2, the maximum of the new disk 224K or the equivalent flux of 142Wm-2 and the maximum of the original warmer would be 288K or the equivalent 390Wm-2.
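The flux/temperature pairings quoted in that paragraph follow from the Stefan-Boltzmann law; a quick check (the quoted 61.5 and 142 Wm-2 are rounded values):

```python
# Check the temperature/flux pairings quoted above.
SIGMA = 5.670373e-8  # W m^-2 K^-4

def flux(T):
    return SIGMA * T**4

print(flux(181))  # ~61 W m^-2  (quoted as 61.5)
print(flux(224))  # ~143 W m^-2 (quoted as 142)
print(flux(288))  # ~390 W m^-2
```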

So an object in space can cause the apparent temperature of another to be warmer, but it does not increase the energy of the warmer object.

Now with these rough ratios, 8:1.8, 2:1, what happens when you add energy to both the warmer disk and the new disk at a ratio of 2.85:1?

NOTE: I will be back to answer, find any mistakes and typos, but those interested may find something worthy of considering.

Before answering that question, consider what happened between the three disks. The warmer disk appears warmer on the face toward the cooler disk(s). Did the warmer disk gain energy? Yes, but at the expense of losing energy elsewhere. It appears warmer, but the total energy of that disk did not increase significantly. Half of the energy it gained is emitted on the other face of the disk. Once stable, the total of the energy emitted from both sides will equal the total emitted before the cooler disk(s) were introduced. But how can that be if the side facing the cooler disk is emitting more and the side away is emitting 1/4 more (half returned from the warmer is half returned to the cooler and so on)? Because of emissivity.

There are perfect black bodies, but only in perfect space. Because the cooler disk can only return half and a perfect black body at absolute zero would not return any.

Space has a temperature of about 3K degrees; it is not a perfect sink for radiant energy, so there is no observable black body. Only in a perfect vacuum can the paradox of a cooler body warming a warmer body exist. Space is as close to a perfect vacuum as I have heard of so far, so we have to model perfection. If we lose sight of the fact that a model is not reality, we have screwed up.

So what if we made the disk we inserted 3K instead of absolute zero? The results would be the same, we would just need more significant digits to see any difference.

If you make the warmer body infinitely large so it has infinite energy with respect to the cooler body at 3K degrees, you would find that the emissivity of the infinitely large body would decrease slightly. Its emissivity is dependent on the sink of its energy.

We can see that by adding a disk of equal temperature to the infinitely large disk. Each emits at the same temperature, but the less massive can only return half of the energy it receives from the more massive.

For convenience we will say both start at 1Wm-2 emitted on the common faces. Each emits 1 and receives 1 initially. Then the less massive continues to receive 1, but retains only 0.5, which it attempts to return to the more massive.

1+1/2+1/4+1/8 ... 1/infinity = 2 But once we add the third disk? Then a fourth?

The 2:1 ratio continues indefinitely as long as the initial disk is infinite. Anything less than infinite does not return 100% of the energy flux.

So without an energy source, 2:1 is the maximum ratio and for T to .75T, 1.8:1 is the maximum ratio.

So what does this mean as far as global warming? The atmosphere is effectively a number of layers of disks without an internal source of energy. The average energy absorbed by the Earth's surface and lower atmosphere produces a surface temperature of 288K; without the atmosphere, the surface would be 255K, so the maximum temperature difference due to the atmosphere would be 66K. With the tropopause at approximately -59 degrees and the surface at plus 15 degrees, the temperature spread is 74 degrees. If we consider only the surface absorption at its daytime average (174*2 or 348Wm-2, for an effective temperature of 280K), then the tropopause difference is 66 degrees, or twice 33. We are near the maximum atmospheric effect without a change in solar forcing or a change in the ratio of atmospheric to surface solar absorption.
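The arithmetic in that paragraph can be sketched in a few lines (the 174*2 daytime average is the value used above):

```python
# Check the temperature spreads and the daytime-average effective
# temperature quoted above.
SIGMA = 5.670373e-8  # W m^-2 K^-4

def temp(F):
    return (F / SIGMA) ** 0.25

greenhouse = 288 - 255        # 33 K, the classic greenhouse difference
max_spread = 2 * greenhouse   # 66 K, the stated maximum
actual_spread = 15 - (-59)    # 74 K, surface to tropopause

F_day = 174 * 2               # 348 W m^-2 daytime-average absorption
print(temp(F_day))            # ~280 K effective temperature
```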

This poses an interesting possibility. Land use impact on albedo, the amount of solar energy reflected, potentially has more impact than the addition of CO2 to the atmosphere. Since the atmospheric effect is based more on surface temperature relative to the temperature of the tropopause than one would expect, the Urban Heat Island is potentially more of a factor than is understood.

This is of course just a thought experiment. If there is any merit to it, though, I may change focus on which mitigation strategy is best. I may clean this up if I get around to it.

Continuing with the experiment. The 2:1 ratio is simple to explain. The basic model for radiant transfer is a disk or hole in a furnace. The observable face gets all the press. The opposite face gets a mention here and there.

While we don't know the quantum of energy in either disk, we do know that the disks all initially radiate the same energy flux from both faces. So twice the energy flux of the disk, both sides considered, is a measure of the total energy of the disk. I used the term quantum because I am going to call 2T an energy Quantum, with the working term ThermZit (TZ) :)

Planck and all the old masters pondered what the quantum was. Hey, it's a TZ! :) Seriously, 2T is a good working value. If you take two disks at the same T and place them together, no heat is exchanged, but the adjacent faces would approach 2T flux, while the opposite faces approach zero flux. Once the system stabilizes, the opposite faces would approach T flux and the true flux of the adjacent faces, the net, would be T as well. The apparent flux of the adjacent faces would depend on your frame of reference. With an infinite number of disks, you would see a bar at temperature T radiating from all faces at TZ/2.

But if the original faces start at T/2, how can the new cylindrical wall of the bar also be at T/2? Elementary my dear Watson, the total energy of the bar is the number of disks times a ThermZit per disk!

If we change the initial disks to T and T/2, energy is transferred from the T disk to the T/2 disk. The total energy is 1.5 TZ and will remain constant until energy is lost to the vacuum, which is still perfect, so I am not letting that happen. If no energy is lost or gained, the T/2 disk can only increase to T and will stabilize at (T-T/2)/2 plus its initial T/2, yielding 3T/4 or 0.75T, which is the reason my initial disks were T and 0.75T.


Hang on! I've got an error here I think is due to rounding, but I have to check.


As I mentioned before, 1+1/2+1/4+...+1/infinity approaches 2. But since what is actually happening is 1+(1/2+1/4)+(1/4+1/8)+..., the real solution would approach infinity, which would violate the 2nd law of thermodynamics. So some factor is required between each disk to make the solution approach 2. That factor is 0.996528 to six significant digits. That is the effective emissivity between any two layers of radiant heat transfer. That would be approximately an ideal black body upon observation. It is the relativity thing. This is why examples of two balls in a vacuum appear to be violating the 2nd law; if this elemental unit of emissivity is not considered, they do. Small things matter!

Since space has a temperature of approximately 3K and the TZ factor is in terms of T in units K, space would have an effective emissivity of approximately 0.99652825^3 or 0.9896208. This is a touch different from the average black body emissivity noted in the more exact use of the Stefan-Boltzmann relationship, j* = 0.924 sigma T^4. Whether this is actually a more accurate approximation will require a little more research. As it is, it appears to be an accurate enough baseline to estimate ideal emissivity between temperature layers in the atmosphere. That explanation, why I was confident that my estimates were sufficiently accurate for use in the Modified Kimoto Equation, has been a bit of a stumbling block.
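The emissivity arithmetic is easy to verify:

```python
# The per-layer factor quoted above, cubed for the 3 K background.
eps_layer = 0.99652825
eps_space = eps_layer ** 3
print(eps_space)  # ~0.9896208
```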

Tuesday, November 15, 2011

Thermodynamic Layer Convergence and the Antarctic Ocean Heat Content


Since there are still people skeptical of my understanding of thermodynamics and the impact of conductivity of the air on climate, I am giving a little preview of my anticipated results.

In the Antarctic ocean there is a convergence of the ocean 4C boundary and the atmospheric -20C maximum conductivity boundary. The 4C boundary is a temperature/density boundary, and the Maximum Conductive Boundary (MCB), a term I may have coined, is a quirk of the conductive properties of carbon dioxide.

As salt water approaches the temperature of 4C, its density approaches its maximum. The more dense salt water sinks leaving behind slightly less saline water which can eventually freeze near -1 degree C. This creates the deep ocean current originating in the Antarctic.

When the air temperature approaches -20 degrees C, its conductivity, or coefficient of thermal conductivity, remains stable or even slightly increases. Typically, the thermal conductivity of the atmosphere would decrease with a decrease in temperature. Because of CO2, that decrease is less, and with more CO2 in the atmosphere, the MCB would increase.

At the point where they converge, boundary layer interaction maintains a relatively constant heat flux which increases with local thermodynamic conditions.

The typical ocean:atmosphere thermal conductivity ratio is 1000:1, meaning the ocean can transfer 1000 times more heat to the atmosphere than the reverse. The convergence of the 4C and MCB layers maximizes the coefficient of heat transfer in this region. The typical limiting factor is the thermal conductivity of the air side of the heat exchange. Obviously, small changes in the air side conductivity can make larger changes in the transfer of heat content.

See Radiation versus Conduction.

Monday, November 14, 2011

Why not Use the Urban Heat Island Effect as a Tool?

While I am training my dumb model to not be so dumb, I was going to compare some temperatures to verify parts. I have major issues with the temperature data sets thanks to my computer issues, so I don't do that very often.

One of the neat things about the model is that it takes advantage of all the temperature data available and there is lots of that. Knowing the approximate relationships you can fine tune base or reference values to determine flux anomalies based on the temperature anomalies. Pretty straight forward.

One of the things on my Christmas wish list would be areal temperatures for specific locations in regions. You can only go so far with global and regional relationships. Urban Heat Island (UHI) is one of the best ways to fine tune flux changes because of the much greater nocturnal temperature ranges. Since the CO2 induced radiation is a factor of 3 or more times the temperature change, large cities are a perfect test stand for flux calibration. Also, since these larger areas have plenty of weather stations, there is much more data, normally for a much longer time period. Oddly, I have not seen any studies published that compared UHI surface, lower troposphere, mid-troposphere and lower stratosphere data in any productive manner. Only the constant bickering about how the data sets should be adjusted.

That is the problem with adjustments to data. The raw data is full of information ready to be teased out. So if any of you time series geeks pop by, think about doing something productive for a change. Do me a Bucky surface in layers above a big humid city and a big dry city so we can calibrate the model for those areas and see what happens.

Peer Review of On the Influence of Carbonic Acid in the Air on the Temperature of the Ground

On the Influence of Carbonic Acid in the Air on the Temperature of the Ground, Svante Arrhenius, Philosophical Magazine and Journal of Science, Series 5, Volume 41, April 1896, pages 237-276 now in the public domain, available online at the Royal Society of Chemistry.

This publication was little noticed in its day but has become a center of controversy over the potential impact that Carbon Dioxide, referred to as Carbonic Acid in the paper, could have on the surface temperature of the Earth. It is referenced in numerous papers and blogs, and the greenhouse equations still include the basic natural log relationship determined by Arrhenius. At issue is whether that relationship, which appears to be based on an average atmospheric radiant layer at a temperature of 255K, is relevant.

G. S. Callendar, in a later study, determined that a doubling of CO2 would cause approximately 2 degrees of warming per doubling and estimated early 20th century warming to be mainly natural (greater than 50%) as of 1934.

The central issue would be that the effective radiant layer would vary with regional temperature, and therefore the impact of carbon dioxide would vary with region. The basic natural log relationship does not adequately consider the variation in source temperature.

In order to properly evaluate the performance of climate models, a better understanding of the physical process of the atmospheric effect, not just Carbon Dioxide, needs to be achieved. Where the average radiant layer impacted by a change in Carbon Dioxide concentration is colder (higher in the atmosphere) than the average layer of water vapor condensation, the Carbon Dioxide impact on the surface would be muted.

A proper physical explanation of the potential impact of CO2 greater than 2C per doubling has not been published. Since modeled values are diverging from observations, it appears past due that a study based on physical principles be published. The first step toward that goal would be to determine what if any validity there is to the above Arrhenius published paper.

Forget Radiation Versus Conduction - You Can't Even Figure Radiation

Okay class, here is a simple radiant physics problem.



You have four layers: layer 1, emissivity = 0.9965; layer 2, emissivity = 0.8570; layer 3, emissivity = 0.7091; and layer 4, emissivity = 0.6092. Calculate the flux from layer 1, at 390Wm-2, to layer 4. Now do the same thing with only 2 significant digits. How's that confidence in your theory hanging in there?



Ready?


Two significant digits instead of 4 is half the warming.

Completely Missed in the Climate Models!

I had to laugh this morning. Watts Up With That has a link to an article on pollution and climate. It has this great quote in it,

According to climate scientist Steve Ghan of the Pacific Northwest National Laboratory, “This work confirms what previous cloud modeling studies had suggested, that although clouds are influenced by many factors, increasing aerosols enhance the variability of precipitation, suppressing it when precipitation is light and intensifying it when it is strong. This complex influence is completely missing from climate models, casting doubt on their ability to simulate the response of precipitation to changes in aerosol pollution.”

Why is this "complex influence completely missing..."? Because the models don't consider the latent shift boundary layer. The models don't do very well considering any of the boundary layers. Thermodynamically, the models suck.

That is why I am doing the Building a Better Model post. Of course that process starts at the very basic of the basics, emissivity.

The reason I used temperatures to calculate emissivity is to verify that while there can be perfect black bodies, there can never be a perfect black body emission. Why? Because there is nothing perfect in the universe, even space has a temperature. That means that there will never be perfect transmission of energy and that power series calculations that "prove" the value of down welling longwave radiation are wrong. period. end of conversation. They are wrong. Get over it.

Models are imperfect, and when they assume perfection they are more than just imperfect, they are wrong. The temperature of the tropopause will prove that the radiant impact of the atmosphere cannot be greater than the TOA outgoing long wave radiation. Is that simple enough for you?

Oh, never mind. I keep forgetting that liberal arts majors are the new physicists.

Sunday, November 13, 2011

Building a Better Model

Just about everything I have looked at recently is because I think I can build a better atmospheric model. Two dimensional modules, or kernels, in the models have their place, but unless the areas used in the two dimensional representations somewhat match the actual three dimensional relationships, they will fall apart early. Assigning the right areas may not be that complicated as long as there are reasonable checks to determine the expected degree of error.

Balancing forces, which to me seems likely, would be part of the system of checks. In local equilibrium, that term that frightens so many, forces would be balanced in all directions.

For example; some degree of surface warming would be balanced by some degree of atmospheric warming which would balance the degree of apparent TOA emission.

Using the assumption that the Earth is 30 degrees warmer due to our atmosphere, 15 degrees of that warming could be balanced by 15 degrees of atmospheric warming, balanced by 15 degrees of emission at the initial application of the thermal energies. One would feed back on the other, dependent on the properties of the media, which would change with the changes in energy flux. Since the surface can be assumed to have a one dimensional impact, i.e. it radiates to space only upward, it would be my choice of a frame of reference in the thermodynamic sense of the term.

While this choice of 30 degrees is arbitrary, properly considering the properties of the media, air decreasing in density and space with a relatively constant transmittance, the final values should approach reality.

The atmospheric initial value of 15 degrees would be felt in all directions. In a flat plate, the horizontal components would balance. As the plate curved to match a segment of the spherical shell it represents, the horizontal components would begin to need consideration. That point would depend on the allowable magnitude of error. By allowing the area of the atmospheric segment to increase proportionally with the separation of that segment from its corresponding surface segment, the utility of the model should be extended somewhat.

Since the initial transmittances are unknown, an approximation in one layer would approach a value in another layer, causing the next layer to approach its resultant value. Only when all layers properly balance is the accurate value of the transmittance discovered.

For an initial approximation of 50% transmittance, 7.5K would be felt at the surface due to the initial 15 degrees in the atmosphere. Half of that returned to the atmospheric layer which would pass half and return half of that impact from the surface. This would be consistent with the Planck relationship. (add details as link)

In order for this 15C increase in atmospheric temperature to result in a final 30C impact on the surface, the transmittance would be approximately 0.99652825. If I rounded that value to 0.9965, the final temperature would be 30.1684, an error of approximately 0.2 degrees.

Since the estimates of the atmospheric effect vary somewhat, if we use the current estimate of 288K at the surface and the S-B equivalent temperature at the TOA of 254.5K @ 238 Wm-2, resulting in 33.5 C of atmospheric effect, of which 16.75 would be one half, then using the same approximation of transmittance, 0.9965, the resultant surface temperature would be 33.689 degrees instead of the more accurate 33.50001 using the full estimated 0.99652825. And people wonder why there are so many significant digits in the Planck constant?
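The temperature/flux equivalences used above are easy to check with the Stefan-Boltzmann relation. A minimal sketch, assuming a perfect black body (emissivity of one):

```python
# Stefan-Boltzmann conversions between temperature and flux.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(T):
    """Black body flux (W/m^2) at temperature T (K)."""
    return SIGMA * T ** 4

def temperature(F):
    """Equivalent black body temperature (K) for flux F (W/m^2)."""
    return (F / SIGMA) ** 0.25

print(round(flux(288), 1))         # ~390.1 W/m^2, the 288 K surface
print(round(temperature(238), 1))  # ~254.5 K, the 238 W/m^2 TOA
```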

Since temperatures were used and not flux values, that approximation, 0.99652825 would be the emissivity of the surface to a point in the atmosphere that could be approximated as the average source of the atmospheric effect.

If you are following along at home, perhaps you would like to use flux values?

Update: No, this is not the emissivity from the 15C source in the atmosphere to the surface. This is what the surface emissivity would be to balance a 15C source in the atmosphere if the emissivity between layers was constant. It is not constant, but the surface emissivity has to be considered in order to accurately calculate the effective emissivity between points in the atmosphere. I am having issues with the blog comments thing at the moment.



So where does this leave us? First the surface has an emissivity less than one. With an atmosphere, the return of the atmospheric absorption would have to follow the rules, no more than half of the absorbed radiation can be returned to the source assuming flat plates as the source. For a spherical source, no more than one quarter could be returned. And for interaction between atmospheric layers, each layer can return no more than half of its absorbed energy to the source layer.

Possibilities? Since the temperature thought experiment produced results remarkably similar to the actual average surface emissivity, a model based on temperature relationships between layers, the Atmospheric R values for example, should properly consider both conductive and radiant flux impacts. That may require a more in depth proof, but it agrees well with the modified Kimoto equation results.

https://docs.google.com/spreadsheet/ccc?key=0AqLGErXDPyPFdEpOQmFvcDA2dXIzaWtxRHhETDNGVGc

This is a spreadsheet I call the Dumb Model. It still needs work, but since pooh pooh occurs, I am putting it online just to save what I have so far.

Of course, uploading to Google Documents changed the formats and I have caught one error so far from the translation from xls to Google.

Once I make sure those and my own silly mistakes are corrected, I can balance the layers (this night only, by the way) to determine as exactly as possible what the tropopause average temperature should be. How? The difference between the effective radiating temperature of the tropopause and the TOA flux should exactly equal the atmospheric window flux at that point in the atmosphere, which should be exactly equal to the atmospheric radiant effect flux. I am not sure if that is a bold statement or obvious, but I would expect it to be bold for those locked into the conventional wisdom. :)

Saturday, November 12, 2011

Radiation Versus Conductivity


I downloaded a spreadsheet from the University of Virginia which had this plot which I hope came out okay after the transitions I had to make to post it above. It is the relationship between a blackbody at temperature T and T/2. The fourth power relationship with temperature is rather striking.

So since I am blogging away about how the conductivity in Antarctica was not well considered in the Global Warming theory, it seems people have absolutely no clue what I am talking about. Basically, the big tall curve is the Equator and the small hard to see one is Antarctica. Now do you think that CO2 might have a bigger impact in the energy flux of the big curve or the smaller one? Looks to me like CO2 might have about one fourth the impact in Antarctica.

Now how about that tiny conductive flux that is the Rodney Dangerfield of Climate Science? In just about every thermodynamic problem I have ever encountered, neglecting conductive heat transfer was not a terribly smart idea. At the Equator, the thermal conductivity of air is about 0.0257W/m.K, a pretty small number. In the Antarctic, that number drops to 0.0204W/m.K at -50 degrees C. 0.0204 is 79.4% of 0.0257 according to my calculator. That seems to mean that in Antarctica, the conductive flux potential would be about 79% of that at the Equator, roughly 21% less. What that flux might actually be would depend on the possible differential temperature. The radiant impact of CO2 forcing would be about a quarter of what it would be at the Equator. That would depend on the average temperature of the CO2 radiant layer and of course the emissivity between that source layer and the surface.
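Both ratios are simple to compute. A rough sketch; the 300 K and 223 K temperatures here are illustrative round numbers for the Equator and Antarctica, not measured values:

```python
# Radiant flux scales with T^4, so the cold Antarctic curve is small.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

flux_equator = SIGMA * 300 ** 4    # ~459 W/m^2
flux_antarctic = SIGMA * 223 ** 4  # ~140 W/m^2
print(round(flux_antarctic / flux_equator, 2))  # ~0.31, roughly "one fourth"

# Thermal conductivity of air (W/m.K), the values quoted above
k_equator = 0.0257    # near the surface at warm temperatures
k_antarctic = 0.0204  # at -50 degrees C
print(round(k_antarctic / k_equator, 3))  # ~0.794, about 21% lower
```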

Because of the less than stellar quality of the temperature data in the Antarctic region, I can't even come up with a good guess yet what the approximate values are, but it seems like it may be something worth figuring out.

As I mentioned before, CO2 has a non-linear impact on the thermal conductivity of air, whose peak value is at -20 degrees C.


I borrowed this image from Bob Tisdale, who posted it on Watts Up With That. I wonder what temperature that big dip at -60 latitude might be?

Now I realize that my time series analytical skills are poor and that I am not the most eloquent writer on the internet. But I used to get paid pretty good money to measure heat flows and energy efficiency. That dip looks like a pretty efficient application of conductive heat transfer to me. Whatta ya think?

I am Not Picking on People

The Global Warming Theory is not mine. It does have an impact on my life and life style. Since it does, I want to know more about it. I do not have a political agenda, nor a scientific axe to grind. I just don't care to pay excessive taxes for no good reason.

Think of it this way: how many anomalies can a theory fail to predict before questioning the theory is justified? I did not make the predictions.

First, the Antarctic is not warming as predicted by the theory. That may be due to ozone depletion. If it is due to ozone depletion, then ozone depletion would appear to be a means of controlling the rate of global warming. If that is the case, why stop using chlorofluorocarbons if they help regulate climate? In actuality, ozone depletion does not appear to be the primary cause of the Antarctic not warming as planned; the conductive impact of Carbon Dioxide and Carbonic Acid at temperatures at and below -20 degrees Centigrade appears to be the cause. That would be a flaw in the Global Warming theory.

Second, the basis for the Global Warming Theory as currently understood is James Hansen's studies of the planet Venus, where he described the isothermal nature of Venus' atmosphere as being due to a runaway Greenhouse Effect. In actuality, it appears that Venus' surface-to-atmosphere thermal boundary is in effect iso-conductive. That is a major flaw if it is indeed true. Dr. Hansen's predictions of Global Warming are much greater than those of his contemporaries. It appears Dr. Hansen is wrong.

Third, the radiant model for CO2 forcing is based on a simple up/down two-dimensional relationship. If conductive impacts are neglected, that model is wrong.

Don't take this in a bad way, but Global Warming Theory is inadequate for predicting future climate. Time to start over.

Dallas Tisdale

Update: Since the Climategate 2.0 Emails, I changed my mind a little, I am picking on Real Climate a touch :)

Greenhouses and the Greenhouse Effect

In a discussion on Dr. Curry's site Climate Etc. I brought up how conduction was not properly considered in the Global warming debate. This is something I have mentioned for quite a while, with quite a bit of chuckles in response. It is really a very simple concept though.

To maximize the performance of a greenhouse, you want to maximize retention of radiant energy, maximize surface energy influx and minimize thermal loss to the environment. In other words, you want to let in and retain the most heat while allowing in the most sunlight for plant growth.

In climate, two dimensional reasoning seems to dominate. Thermals, or convection due to surface warming, are well considered. In weather conditions where surface winds are held constant, the two dimensional view of the Greenhouse Effect holds true. Surface winds are not one of the things that remain constant, though; they change quite regularly. With more heat retained at the surface, these winds easily transport more heat from the surface. There is nothing linear about surface heat transfer in three dimensions.

If CO2 caused a linear shift in the thermal reservoirs that corresponded with the increase in retained energy due to the CO2 increase, the theory of Greenhouse Forcing would be on solid ground. The Tropopause, with its near constant thermal sink, and the Antarctic region, also with a near constant thermal sink, require the atmospheric greenhouse theory to be reconsidered. An increase in surface temperature with little change in the historic thermal sink capacities means more rapid loss of stored energy with changes in surface wind velocities. The Earth can have larger temperature drops than normal because it has more energy that it can release more efficiently. That, in a nutshell, is what is not properly considered about the conductive flux impact on climate.

Wednesday, November 9, 2011

Astronomy and Perspective

Speaking with my significant other, who by the way wonders why scientists think one way while the climate is acting another, prompted me to write this post.

Astronomers view distant stars and planets from a perspective where all they have to work with are changes in radiant energies to figure out why things on that distant object are the way they are. They then relate those observations to our planet and others using the same perspective, radiant energy or electromagnetic, magnetic and gravitation forces, to learn more about the universe.

Much like the weather conditions at the airport, that perspective doesn't provide much information to the people living on the surface of our planet. We have a surface frame of reference. Our perspective does not jibe with science.

Choice of frame of reference is an important consideration in studying our planet, not just for communicating with Joe Six Packs that fund research, but to understand the differences between reality and observations through a telescope.

Thermal boundaries are the key to understanding what can be understood about climate. The ocean/atmosphere boundary is a starting point. That is a thin boundary that can release thermal energy much more rapidly than it can absorb it. Radiantly, that is not even a consideration; electromagnetic energy is absorbed or reflected, done deal. Conduction, convection, latent heat, and kinetic and potential energies are not negligible though.

About 10 meters below that boundary layer is another, with a different time constant or rate of absorption and release of heat energy based on different properties of the molecules and energies in that layer. 90 meters below that is another boundary layer with different properties and time constants. About 1800 meters below that is another boundary layer with another set of conditions that require another time constant. Above the surface air boundary is another layer with different properties and time constants. Above that are other layers or thermal boundaries. Horizontally there are changing pseudo-boundaries, all with different time constants and properties. A fraction of these complex boundary conditions are obvious through a telescope, and interpretation of the boundary interactions would easily depend on the choice of the frame of reference, or the direction of the energy transfer with respect to the observation point. That is the basis of thermodynamics, relativity, all science really: motion relative to perspective.

At some point, it should be obvious that the telescopic perspective may fail to explain reality. For surface dwellers, our reality of concern is what happens starting at the surface then further away if that seems to be of importance.

The true reality will be the same in all perspectives. If one interpretation does not match reality, the one not from the surface perspective is an oddity of little consequence to Joe Six Pack. Time to look out the window instead of through the telescope.

Update: While that satisfied the significant other, I thought I would expand for anyone that may drop by.

If you are looking at the Earth from, say, Jupiter, you could figure out that the Earth has an emissivity of 0.61, or only 61% of the energy it should be emitting from the surface is seen coming out of the top of the atmosphere. The surface at 288 degrees K is emitting 390Wm-2, and 61% of that is 238Wm-2. That is the whole deal: the surface is emitting 390 Watts per square meter of energy, but only 238 Wm-2 is getting out of the top of the atmosphere. The rest of what you see is scattered or reflected short wave energy, not infrared.

So from Jupiter you see something change: the surface is 1 degree K warmer, but 238 Wm-2 is still coming out. Well, you think that since the surface is now warmer it has to be emitting 5.67e-8*(289)^4 Wm-2, or 395.5Wm-2, to be one degree warmer, so it should be emitting more than 238Wm-2 at the top of the atmosphere. It is not, though; it is still emitting 238Wm-2, so you can say that the emissivity changed to 238/395.5 or 0.602 instead of 0.61. Problem solved, so you go have lunch at the alien pub and plan to mess with Mercury in the morning.

Now, if you happen to be on Mars, you might see the same thing on Earth and say, hmm, it looks like 3.7Wm-2 of forcing has been added to that planet's atmosphere. It used to have a surface temperature of 288K with 390Wm-2 and an emissivity of 0.61, so it only emitted 238 at the top of the atmosphere. Here is what must be happening on the surface: 3.7Wm-2 worth of resistance was added, that is going to cause 5.5 more Wm-2 to be emitted by the surface, which will cause the 3.7Wm-2 to restrict more energy, which will cause the surface to emit more energy.

Martians are complicated, as Venusians know. So you use a power series with the 0.61 emissivity: 3.7/(1-0.61)=9.5 Wm-2 more should be emitted from the surface, totaling 399.5Wm-2, which means the surface temperature is now 289.7 degrees K, an increase of 1.7 degrees due to the 3.7Wm-2 of added resistance that got stuck in the atmosphere somewhere. Now the complex Martians can catch dinner and a movie while planning to study Mercury the next morning.
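The Jovian and Martian arithmetic above can be sketched in a few lines, assuming black body relations and the 0.61 effective emissivity from the story:

```python
# Alien bookkeeping for a 1 K warmer surface with unchanged TOA flux.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Jovian view: surface warms to 289 K, TOA still emits 238 W/m^2
surface_flux = SIGMA * 289 ** 4
print(round(surface_flux, 1))        # ~395.5 W/m^2
print(round(238 / surface_flux, 3))  # new apparent emissivity, ~0.602

# Martian view: 3.7 W/m^2 of added "resistance", amplified by the
# power (geometric) series with the 0.61 emissivity
extra = 3.7 / (1 - 0.61)
print(round(extra, 1))               # ~9.5 W/m^2 more from the surface
new_T = ((390 + extra) / SIGMA) ** 0.25
print(round(new_T, 1))               # ~289.7 K, i.e. +1.7 degrees
```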

Earthlings aren't that complicated or bright. They dumped a bunch of junk in the air and realized it was getting a little warmer. They hire a bunch of Venusians to measure the average temperature and find that it is 0.8 degrees warmer than it used to be. Not being sure if that is a good thing or not, they hire a guy with a balloon to measure the temperature as high in the air as he can go.

The balloon guy has done this before, back when the silly Earthlings thought it was getting cold, so he did his job and gave the Earthlings the average temperature at 4000 meters. He also gave them a copy of the numbers he had from before at 4000 meters.

The old temperature at 4000 meters was 249 degrees K and the new temperature is 250.2 degrees K. The Earthlings stiff the balloon guy because he never got a better balloon and should have gone higher than 4000 meters.

Short on cash, the Earthlings decide to make do with what they have and figured that if it warms by 0.8 on the surface it has warmed by 1.2 at 4000 meters. Scratching their heads they wander back to the Venusians to ask what the heck is going on.

The Venusians explain that the junk that had been dumped in the air restricted heat flow from the surface, like a blanket keeps heat from flowing from your body while you sleep. Just like a blanket, it not only restricts flow out but also restricts flow in. Since the flow from the surface is now 394.4 Wm-2 and the flow at 4000 meters is now 222.2 Wm-2, both increased: the surface by 4.4Wm-2 and 4000 meters by 4.24Wm-2, which means the resistance now is 4.24/4.4 or 0.963.

The Earthlings ask how can that be? It is all because of that water vapor you have which is why we came here. It is good for the complexion.

Isn't it more complicated than that? the Earthlings ask. Well, it can be if you hire a Martian, the Venusians respond. Maybe this is better: the temperature difference was 288-249 or 39 degrees, with a flux difference of 390-218 or 172. You can divide those, 39/172 = 0.226, for the old R value. Now it is (289-250.2)/(394.4-222.2)=38.8/172.2, which is your new R value of 0.225. It doesn't take much change to warm things up a bit.
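The Venusians' R value bookkeeping, sketched with the flux values quoted in the story (note that 5.67e-8*289^4 comes out closer to 395.5 than the 394.4 used above, so treat the second R value as approximate):

```python
# R value: ratio of the temperature difference to the flux difference
# between the surface and 4000 meters.
def r_value(t_hot, t_cold, f_hot, f_cold):
    return (t_hot - t_cold) / (f_hot - f_cold)

old_r = r_value(288, 249, 390, 218)        # before the warming
new_r = r_value(289, 250.2, 394.4, 222.2)  # after, fluxes as quoted
print(round(old_r, 3))  # ~0.227
print(round(new_r, 3))  # ~0.225
```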

The Earthlings walk away shaking their heads. They know they need to hire a Martian.

Since the Earthlings had bankrupted their entire balloon force, which couldn't make it to Mars anyway, they manage to get a Martian on the radio and explain the situation. The Martian mentioned that Venusians are known to lie and that the surface temperature had to be warmer than that, because there was a 3.7 Wm-2 restriction in the atmosphere somewhere that their most brilliant mathematicians had determined using complex series analysis, which requires the temperature to be at least 289.7 degrees K and likely higher with water vapor.

The Earthlings, just a bit perturbed, confronted the Venusians with the news that they can't measure temperature worth a darn; it had to be warmer because the Martians said so.

Did they give you the water vapor bit? The Venusians asked bathing in their new sauna.

Yes they did! Growled one of the Earthlings.

Laughing, the Venusians tell the Earthlings that water vapor would warm the surface more if the restriction was below the clouds or if the clouds were not as abundant. You see, the restriction is just that junk you all have been dumping in the air, and it restricts more above the clouds than below the clouds. Since it is above, it warms the tops of the clouds, which causes the surface warming to be less than the Martians would expect.

Tuesday, November 8, 2011

The Learning Equation - Kimoto Modified

I played around with the Atmospheric R values, just to show there is more than one way to skin a catfish. The best way, I am firmly convinced, is the modified Kimoto equation.

As a refresher, Kimoto is based on the simplification of the S-B relationship where dF/dT=4F/T, or the change in flux with respect to the change in temperature is equal to, or "proportional" to, four times the flux divided by the temperature. That is an approximation, and it is not truly a linear relationship.

Depending on your choice of a frame of reference, the approximation could be 5F/T or just F/T; S-B has a fourth power relationship between temperature and flux.

Using 4F/T is equivalent to using 255K as a reference temperature. If I use the surface as my frame of reference, then the modification a*4F/T is required, where a is a variable dependent on the thermodynamic reference with respect to 255K. You can make a suitable for any other reference temperature just by changing the characteristics of a. Simple, right? Evidently not so much for a lot of people, but that is how it works.

Once people get beyond that silly 4, they begin to realize you can make the equation approximately a linear relationship for small changes from any consistent Thermodynamic frame of reference.
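That "silly 4" is easy to check numerically: for a black body, 4F/T is just the exact derivative 4σT³ rewritten, and the slope depends on the reference temperature you choose. A quick sketch:

```python
# dF/dT for a black body: exact derivative versus the Kimoto 4F/T form.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def dF_dT_exact(T):
    """Exact derivative of F = sigma*T^4, i.e. 4*sigma*T^3."""
    return 4 * SIGMA * T ** 3

def dF_dT_kimoto(T):
    """4F/T with F = sigma*T^4 -- algebraically the same thing."""
    return 4 * (SIGMA * T ** 4) / T

for T in (255, 288):
    print(T, round(dF_dT_exact(T), 3), round(dF_dT_kimoto(T), 3))
# At 255 K both give ~3.76 W/m^2 per K (the familiar "3.7 per degree");
# at the 288 K surface the slope is larger, ~5.42 W/m^2 per K.
```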

The neat part is once that sinks in, you can see how flexible that simple equation can be: aFt + bFl + eFr, for thermal, latent and radiant. I mentioned in a previous post that an F? term should always be considered. We are dealing with a dynamic system, so there are going to be surprises. That is the beauty of the modified equation: it can learn with you.

Fr can be split into Fr(GHG) + Fr(water,liquid) + ... F sub whatever you either have good information on or wish to learn more about. Fr can be separated into a complete line by line(LBL) spectral analysis if you wish.

The whole equation can be used for dF/dT surface, dF/dT 600mb or any reference layer you wish to study in any direction by just adjusting the coefficients for the reference temperatures of source and sink. dF/dT tropics with dF/dT sub-tropics could be used for average energy transfer between regions. It is a powerful equation, as long as you reference back to a common frame of reference.

Since boundary layers are plentiful and hard to deal with in fluid dynamics, using a boundary as a reference frame in two directions could simplify resolution of changes in boundary layer flux with time. I haven't tried that yet, but it appears very likely.

By skipping boundary layers, i.e. surface to tropopause versus surface to stratopause, you can compare effective emissivities to help resolve solar impacts in the atmosphere. As long as you have sufficient temperature and pressure resolution of the target layers, you can double check flux relationships between layers.

Most importantly, you can select layers where the best temperature data is available, like the 500mb mid-troposphere satellite data.

The more physical data you have, the more you can learn about F?

Matters of Scale Once Again

While there was a mild hoopla when I stated that the IPCC Down Welling Longwave Radiation Violates the Law, that died quickly because it wasn't the first time that claim had been made. Most, other than the IPCC of course, are more than aware that it is true. Many of the rejectionists, people believing that CO2 has no impact on climate, didn't find my results Earth shattering enough to be worthy of attention.

Matters of Scale are always worthy of attention. Instead of a doubling of CO2 causing 4.5 degrees warming, or 3 degrees warming or 2 degrees warming or even 1.2 degrees warming, it will be more like 0.8 to 1 degree of warming. Just for grins let's say 0.8 is the definitive value, though that is in no way certain.

Then, if solar average variation can produce a 0.1 degree change, 0.1/0.8=0.125, or 12.5% of climate variability may be due to solar all by itself. Natural variability is considered to cause 10% of climate change by people believing CO2 doubling WILL cause 3 degrees of warming; 10% of 3 is 0.3, so natural variability now, assuming their estimate was in any way based on sound math, is 0.3/0.8=0.375 or 37.5 percent of climate variability.

Solar should be considered natural, but the climate models include both solar and natural variability. Combined, 0.1 and 0.3 or 0.4 would be 50% of the climate variability.

Isn't it amazing how such a small change can make such a large difference?

Monday, November 7, 2011

A Little Climate Change Rant

I am just curious what is happening and why. I know there will never be a complete solution, but every little step in that direction is an improvement. That is why I get so frustrated when I allow myself to take the debate or myself too seriously. There are just too many misconceptions and flawed theoretical points to justify the confidence that most people have about how much and why climate is changing.

Misconception 1) DWLR warms the surface because of CO2. With the exception of minute amounts of energy, of "did the tree make a noise when it fell in the woods" relevance, DWLR does not directly warm the surface. A smaller difference between outgoing and incoming net radiation is a warming effect, i.e. the surface cools less quickly. CO2 emits in its spectrum, which is for all intents and purposes blocked from a radiant impact by CO2 more than a few feet above the surface. DWLR in the atmospheric window, from sources such as the water and ice in clouds, has the most impact on the real net radiation at the surface. Winter clouds with light winds make the largest change in the net surface flux. Those same winter clouds in high winds have little impact, because of convection. Conduction and convection are more efficient thermal fluxes at surface temperatures and pressures.

Misconception 2) Man's activities have no effect on climate. Nonsense. The combination of land use changes, surface water changes and pollution, including CO2, has impacts on climate. The question is the degree each has, and in combination the total impact, considering natural variability. In drought periods the local impact is higher because the atmosphere cannot cleanse itself through precipitation. Was smog in large cities a figment of our imagination? Of course not. If we had not made changes, our local environment and climate would not be the same. The question is still how much, for how long, and will natural processes change the amounts.

Misconception 3) CO2 doubling will warm the Earth x degrees. Right genius, like you really know! How's that Antarctic prediction panning out for you? The climate system is too complex to know much of anything without significant uncertainty. Theory predicts, some happen, some don't, I look at what don't and wonder why it didn't. That seems to indicate that uncertainty and theory are diverging. Without CO2 the Earth would still have a climate and we would not necessarily be living on a snowball Earth. Grand predictions by some of the most famous scientists in the world have and will not pan out.

Misconception 4) CO2 is a well mixed gas. Like hell it is! CO2 concentrations change continuously with temperature, season, precipitation, cloud cover and emissions. 90% of the CO2 being added to the oceans from the atmosphere is due to tropical rainfall. The tropics have had and will continue to have little temperature change because they have their own climate, controlled by the sun and the sea. Man's activities amount to squat in the tropics, climate wise. The Northern hemisphere is where the impact is felt. That is where there is the greatest fluctuation of CO2 concentration, and land use change to amplify its impact.

Misconception 5) CO2 is bad for the climate. Don't know about that one. More CO2 should warm the climate to a point which should stabilize the climate to a point. If there is a new ice age coming, CO2 might be nice. Then again, more CO2 may trigger that ice age. Instead of just saying it is bad, I would rather find out a little more. Adding stuff to our atmosphere is probably not a smart thing to do, but what is done has been done, we need to figure this out a little better before jumping to conclusions.

There! I feel all better, but I may have some ice cream to calm down, just in case.

The Atmospheric R Values

The Atmospheric R Value

The surface of the Earth has an average temperature of 288K which would have a corresponding thermal energy flux of 390Wm-2, via Stefan-Boltzmann using perfect black body characteristics.

The Tropopause has a temperature on average of -55C or 218.15K, which by the same S-B relationship would have a perfect black body equivalent thermal flux of 131.3Wm-2. The ratio of the change in temperature to the change in flux would be the R value of the atmosphere from the surface to the tropopause, or (288-218.15)/(390-131.3)= 69.85/258.7=0.27K/Wm-2. The inverse of that value would be the thermal transmittance (U-value), 3.7Wm-2.K-1, of the atmosphere from the surface to the tropopause, assuming the temperature of the tropopause were somewhat stable.

The ratio of the effective thermal flux at the tropopause and at the surface, 131.3/390=0.337, a unitless value, is interesting. Why? It is the approximate conductivity, or the value that determines the "Thermals" portion of the surface flux. An object emitting 390Wm-2 but receiving 131.3Wm-2 from another object would have a net flux of 390-131.3 or 258.7Wm-2. What is more interesting is that half of that 258.7Wm-2, or 129.35Wm-2, when combined with the 131.3Wm-2 minimum flux at the tropopause, gives a median flux of 260.65Wm-2, which corresponds to a temperature equivalent of 260.4K degrees. That is a rather convenient value for calculating temperature changes in the lower atmosphere. It is almost like there is a balance of forces.


If we were to consider the entire atmosphere, the R value would be (288-254.5)/(390-238)=0.22, which would be equivalent to a transmittance of 4.54Wm-2.K-1.
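Those R values and their U value inverses can be checked with a few lines; the layer temperatures and fluxes are the ones used above:

```python
# Atmospheric R values (K per W/m^2) and U values (W/m^2 per K).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temp(F):
    """Equivalent black body temperature for flux F."""
    return (F / SIGMA) ** 0.25

# Surface (288 K, 390 W/m^2) to tropopause (218.15 K, 131.3 W/m^2)
r_trop = (288 - 218.15) / (390 - 131.3)
print(round(r_trop, 2), round(1 / r_trop, 2))  # ~0.27, U ~3.7

# Median flux between the two layers and its equivalent temperature
median = 131.3 + (390 - 131.3) / 2
print(round(median, 2), round(temp(median), 1))  # ~260.65 W/m^2, ~260.4 K

# Whole atmosphere: surface to TOA (254.5 K, 238 W/m^2)
r_toa = (288 - 254.5) / (390 - 238)
print(round(r_toa, 2), round(1 / r_toa, 2))  # ~0.22, U ~4.54
```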

So if our world only had radiant and conductive heat transfer, it would be very simple to use R-values and transmittance to determine changes in surface temperature and/or changes at the tropopause or TOA. We have water vapor and the latent heats of vaporization and fusion to contend with though.

If our surface is cooled by say, 79 Wm-2 of latent heat transfer, just to pull a number out of my hat, the effective flux from the surface would be reduced from 390Wm-2 to 311Wm-2, with a corresponding temperature of 272K degrees. Darn the bad luck! That would change our simple R values, now wouldn’t it?

Now the total energy transferred to the tropopause is from an effective temperature of 272K, not 288K. (272-218)/(390-131.3)=0.209K/Wm-2, or in terms of transmittance, 4.77Wm-2.K-1. Because of that darn shift, the apparent net flux changes to 311-131.3=179.7Wm-2 to the tropopause and 311-238=73Wm-2 to the TOA.
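A quick check of that latent shift: subtract the 79Wm-2 of latent from the 390Wm-2 surface flux and invert Stefan-Boltzmann for the new effective temperature:

```python
# Effective radiant temperature after removing the latent flux.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sb_temperature(flux):
    """Temperature (K) of a perfect black body emitting `flux` W/m^2."""
    return (flux / SIGMA) ** 0.25

f_effective = 390.0 - 79.0                 # 311 W/m^2
t_effective = sb_temperature(f_effective)  # ~272 K, as in the text

# R value from the shifted effective temperature, per the text's 131.3 tropopause flux.
r = (t_effective - 218.15) / (390 - 131.3)
print(round(t_effective, 1), round(r, 3))  # 272.1 and 0.209
```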

But what about the flux from the surface other than the latent? Good question! The surface flux minus the latent flux, 390-79=311Wm-2, experiences a different R value: 69.85/(311-131.3)=0.39K/Wm-2, with an equivalent transmittance of 2.57Wm-2.K-1.
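In code, the non-latent surface flux sees the same 69.85K temperature drop but only the radiant/conductive portion of the flux:

```python
# R value seen by the non-latent (radiant plus conductive) surface flux.
r_nonlatent = 69.85 / (311 - 131.3)  # K per W/m^2
u_nonlatent = 1 / r_nonlatent        # W/m^2 per K
print(round(r_nonlatent, 2), round(u_nonlatent, 2))  # 0.39 and 2.57
```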

So what does this all mean? Well, if we neglect latent heat, a 3.7Wm-2 increase at the surface, or an improvement in atmospheric insulation that reduced flux by 3.7Wm-2 at the tropopause, would produce a 1K degree increase at the surface.

However, with the latent flux removed from the surface flux, a 2.57Wm-2 increase at the surface, or an improvement in atmospheric insulation that reduced flux at the tropopause by 2.57Wm-2, would produce a change of 1K degree at the surface.

If we want to figure this out correctly, we consider the latent shift: a 4.77Wm-2 increase in flux at the latent shift boundary, or an improvement in the insulation of the atmosphere between this boundary and the tropopause, would produce a 1K degree increase in temperature at the latent shift boundary.

From the surface to the latent shift boundary, 288K to 272K, there is a 390 to 311 change in flux, or a 0.203 R value, with an equivalent transmittance or U value of 4.94Wm-2.K-1.

So just for grins, if we improved the insulation of the atmosphere to retain 3.7Wm-2 at the tropopause, then 3.7/4.77=0.77K degrees of increase would be felt at the latent boundary, which would produce 0.77 x (4.77/4.94), or about 0.75K degrees, at the surface.
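The layer-by-layer propagation of that 3.7Wm-2 is just two divisions, using the U values derived above (4.77 at the latent boundary, 4.94 from the surface to the boundary):

```python
# Propagate a 3.7 W/m^2 change retained at the tropopause down to the surface.
forcing = 3.7
dt_boundary = forcing / 4.77              # ~0.78 K at the latent boundary
dt_surface = dt_boundary * (4.77 / 4.94)  # ~0.75 K felt at the surface
print(round(dt_boundary, 2), round(dt_surface, 2))
```

Note that the chained form collapses to forcing/4.94, since the 4.77 cancels; both give ~0.75K at the surface.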
If you want to do it the easy way, use the surface R value corrected for the latent shift (U=2.57): 2.57/3.7=0.69 at the surface for 3.7Wm-2 retained at the tropopause, or 3.7/2.57=1.44 at the tropopause if a 1K degree increase at the surface were caused by other impacts.

Actually, the best way would be to consider the layers. From the surface to the latent shift, 311Wm-2 passes through a 288K to 272K temperature differential, which would be an R value of 0.051 (U=19.4) seen by the combination of conductive/convective and radiant flux, before the shifted latent heat is again added to the flow of energy.

So we have a surface layer, 288K @ 390Wm-2 to 272K @ 311Wm-2, R=0.051, U=19.4, and a latent-to-tropopause layer, 272K @ 311Wm-2 to 218.15K @ 131.3Wm-2, R=0.209, U=4.77. R values are additive; add more insulation and you improve the R value. So from the surface to the tropopause the R value would be 0.051 + 0.209 = 0.260K/Wm-2, or an equivalent U of 3.84Wm-2.K-1.
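The series-addition of R values, like stacking insulation, looks like this in code (layer numbers are the ones worked out in the text):

```python
# R values add in series: surface-to-latent-boundary plus boundary-to-tropopause.
r_surface_layer = (288 - 272) / 311.0    # ~0.051 K/Wm-2, U ~19.4
r_upper_layer = (272 - 218) / (390 - 131.3)  # ~0.209 K/Wm-2, U ~4.77, as in the text

r_total = r_surface_layer + r_upper_layer
u_total = 1 / r_total
print(round(r_total, 3), round(u_total, 2))  # ~0.26 K/Wm-2 and ~3.84 Wm-2.K-1
```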

So what happened to the 79Wm-2 of latent? Nothing. Its impact is spread over both the surface-to-latent layer and the latent-to-tropopause layer. The sensible portion of the latent, in this example ~24-21 or 3Wm-2 between the surface and the latent boundary, has to be back-calculated. Above the latent boundary, the sensible portion of latent would also have to be back-calculated, as varying air turbulence would change the sensible values with changing upper-layer convection conditions.

Some may have noted that the neat 258 to 260Wm-2 range calculated above is very close to what some consider to be the value of Down Welling Longwave Radiation (DWLR). That value is above the latent boundary layer. What would that value be at the surface? The apparent net flux above the latent boundary is 179.7Wm-2; averaged with the 258.7Wm-2 net flux at the surface, the apparent net flux would then be about 219Wm-2, the effective DWLR. Imagine that.
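That "effective DWLR" estimate is just the average of the two net fluxes from the text:

```python
# Average of the net surface flux and the apparent net flux above the latent boundary.
net_surface = 390 - 131.3       # 258.7 W/m^2
net_above_latent = 311 - 131.3  # 179.7 W/m^2

dwlr_effective = (net_surface + net_above_latent) / 2
print(round(dwlr_effective, 1))  # ~219 W/m^2
```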

Those of you who have attempted to follow my calculations may have noticed that I am using the tropopause here instead of the 249K 600mb reference level. Why? Because the tropopause changes; -55C is an average, but it can drop to -90C like a rock! A better reference would be something a little more stable and more easily measurable.

I will clean this up and try to make a drawing to simplify communication, but the sensible portion of latent heat does not appear to be adequately considered in climate modeling. It is one of those confusing cloud feedback issues. Not very easy to directly measure, it seems. They didn't call it latent, or hidden, heat for nothing.