New Computer Fund

Tuesday, December 27, 2011

Conductivity, Polar Orientation and the 4C Boundary

The coming ice age, whenever that happens, is linked to the increased conductivity of the atmosphere and ocean due to increased CO2, roughly 400 parts per million now, and to the orientation of the poles both with respect to solar irradiation and geomagnetic forcing. Big, bad and bold statement.

The polar oceans are the deep ocean thermostat. The only way that the deep oceans can gain or lose heat is by conduction. The only place on Earth for the deep oceans to lose heat is in the polar oceans. So my bold statement is not out of school; it has to be so.

From a CO2 concentration of approximately 190 parts per million to roughly 400 parts per million, the conductivity of the atmosphere increases by about 0.1 percent. Small potatoes on a century scale or less, but over thousands of years it adds up.

Polar orientation varies the rate of cooling of the deep ocean at the 4 C boundary because the Antarctic is a continent. When the geographical and geomagnetic poles are centered over the Antarctic land mass, less cooling is possible. Off-centered orientation increases the area of the 4 C boundary-to-atmosphere interface.

Without geomagnetic forcing, it is obvious that conductivity, polar orientation and the area of the 4 C boundary of the ocean, the source of the downwelling current that drives global climate, are the millennial scale drivers of glacial cooling. The question still is how much impact geomagnetic variability has on the mechanism.

Friday, December 23, 2011

Now the Complicated Part

The change in CO2 concentration is pretty easy to see. The obvious always gets the blame. To solve puzzles you have to decide if the answer is a simple joke hidden in a complex picture or if the picture is really complex. The Earth climate system is pretty complex.

In Let's Concentrate on Concentrations I tried to show that there is a small change in concentration of CO2 that has a rapid initial radiant impact that decreases with time and a slow conductive impact that increases with time. The solar output has daily rapid changes due to rotation, annual changes due to orbit and obliquity of the axial tilt of the Earth, precessional changes due to wobble around the axial tilt and, finally, changes in geomagnetic intensity that vary with the solar cycle and internal dynamics of the molten core. There are layers upon layers of small changes with differing rates of change. That is a pretty complex system. Missing from the debate is the relative rotation of the complex layers, where the Bucky ball shaped model is helpful.

Imagine standing on the surface and looking up to the tropopause. The surface has a radius R and the tropopause has a radius of 2R. The surface is turning at velocity S and the tropopause is turning at a rate of S/2 for simplicity. Whatever energy the surface gets from the sun, the tropopause gets twice as much because of the difference in radius and area, and relative to your observation point it gets twice the exposure because it is moving half as fast. All else being equal, our climate is based on these two layers accumulating energy at different rates, different times and at different relative positions. No big deal, right?

Now let's change the sun by a watt. That has a 1/4 Watt impact on the surface and a 1 Watt impact on the tropopause. Because of the relative velocities of these two layers, there is a larger potential difference between the layers than a small change in solar would indicate at first blush.

As with the ink in the aquarium experiment, CO2 has more impact on the change near the surface, which approaches saturation more quickly than the tropopause does. We have yet another impact that differs due to the different relative properties of the two layers.

Now I have to go fishing, but which do you think might be impacted more by a change in geomagnetism and solar magnetism, the surface or the tropopause?

The tropopause may indeed respond to fluctuations in the geomagnetic field. Plus the upper troposphere's total energy relative to the surface, which would include the velocity of the jet streams, produces a complex dynamic relationship influenced by natural cycles including the solar wind and magnetic orientation as well as TSI fluctuation. This would tend to reinforce Milankovitch cycle theory.

The correlation of climate with solar, including geomagnetic, has been done to death. So the next step is figuring out the magnitude of the geomagnetic impact and proving the limits of CO2 forcing yet again.

The impact of the change in atmospheric conductivity with CO2 increase appears to be a good clue for the scale of change required. Improved thermal conductivity with increased CO2 is small, but much more linear than the radiant impact. Since it mainly affects the ocean surface-to-atmosphere interface in the southern oceans, millennial scale changes in the thermal conductivity approach a balance with other forcing. This would require multidecadal or century scale reductions in solar forcing to increase snow/ice cover to the point where albedo change can continue a cooling trend, allowing more absorption and sequestering of carbon dioxide.

The last little ice age, a century scale event, should be typical of an off-major-cycle cooling response. Major cycles being linked to the Milankovitch cycles, which would produce millennial or multi-millennial cooling/warming events.

The best estimate I have to date for solar minimum impact is 0.25 Wm-2 at the surface. That would require 4 to 8 times the length of a solar half cycle, approximately 5 years, to trigger a new little ice age. Now I just have to fine tune that estimate to get a rough estimate of how much geomagnetic change may be required, which is not easy for a variety of reasons. Fun, fun, fun.

Enjoy your holidays!

Tuesday, December 20, 2011

More Let's Concentrate on Concentrations

Continuing on how a small change in a trace gas can have big impacts, I want to consider what an approximately 0.02% increase in the overall concentration of CO2 in the atmosphere might do.

The lowest approximate concentration of CO2 estimated for the atmosphere in the last few hundred thousand years is about 190 parts per million (ppm): 0.000190 CO2 molecules for every one molecule in the atmosphere. Doubling that would be 0.000380 CO2 molecules, which is where we are now. Since the last major ice age, CO2 has doubled.

Arrhenius in his 1896 paper was attempting to prove that CO2 triggered the ice age to warm period transitions. As I have noted before, his final table showed the temperature response over land and water for a change in CO2 from 0.67 to 1.5 times the value of his day, over 100 years ago. Based on his calculations, we are arguably near the peak value of the Holocene, roughly the last 12,000 years.

One thing Arrhenius did not address in his paper was how the Earth entered the ice ages. His thoughts were that CO2 was the driver, but what reduced the concentration of CO2 if it was the climate driver?

Callendar, in the 1930s, also pondered the role of CO2 in climate. He determined that the impact of CO2 approached two degrees as the concentration of CO2 approached a doubling, and that the impact leveled off at that point. From his day to now, CO2 is approaching a doubling and the temperature response is arguably approaching 2 degrees.

After Arrhenius' paper, Angstrom commented that CO2 absorption was approaching saturation, meaning that CO2 could not produce the range of temperatures that Arrhenius had predicted. Arrhenius' final table, and his unpublished retraction revising his 1896 range down to 1.6 degrees (2.1 with water vapor), agree with Callendar, which agrees with the saturation Angstrom mentioned, which agrees with the current temperature data. All seem to indicate that 2 degrees is roughly the maximum impact of CO2.

Ramanathan, discussed in this post on the Science of Doom, also seems to agree if you carefully consider his work. In his block diagram of the CO2 warming process, he lists the direct impact of a doubling of CO2 on the surface as 1.2 degrees. Any further warming would involve interaction with water vapor. Since his initial concentration of CO2 was greater than 190 ppm, approximately 280 ppm, his results are in general agreement with Arrhenius, Callendar and Angstrom.

For some reason though, current science still disagrees with the idea that CO2 has an impact on climate but that the impact is limited. So how does a guy or gal at home figure out for themselves who to believe?

Why not do an experiment at home? For a simple experiment, find an aquarium that is not currently occupied by living fish. Fill the aquarium with clean water. Tape a piece of newspaper to one side and to the back of the aquarium and read the print. The print should be about the same size on the side and back.

Drop one drop of black ink in the water. After the ink diffuses in the water, read the print again through the aquarium, from the side and from the front.

Now put one more drop of ink in the aquarium and read again.

How much harder was the paper to read after one drop? How much harder was it to read after two drops?

If you want to make the experiment really scientific, measure 999,998 drops of clear water into the aquarium. Once you add the two drops of ink, you have 2 parts per 1,000,000. Now add 188 more drops and you have 190 parts per million roughly.

If you have a photographer in the house, you can use a light meter to measure the amount of light passing through the aquarium. As long as you have a relatively good light source and light meter, you can measure the change per drop or tens of drops and record the results for graphing both from the ends and from the front of the aquarium.

To make the experiment even more scientific, you can use red ink with a red light and green ink with a green light. Then compare one or both of those to the reading of a white light. The measured reduction in intensity for the red and green lights will be greater than for the white light. You have just modeled the atmospheric response to a change in opacity.

To really kick the experiment up to quality scientific standards, repeat the experiment with red and green inks. Add both the red and green at the same times and in the same amounts until you have data out to about 400 drops for each ink.

Have fun! After you have all the data you want, plot the data and fit a curve to the changes. What does the curve look like from one end to the other? What does the curve look like from front to back? Can you read from front to back a little better than from end to end?

The end-to-end view represents the surface looking up to space. The front-to-back view represents some point in the sky looking up to space. Think about it.
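The difference between the end-to-end and front-to-back readings is Beer-Lambert attenuation at work. Here is a minimal sketch; the absorption coefficient per drop and the relative path lengths are made-up illustration values, not measurements:

```python
import math

# Beer-Lambert sketch of the aquarium experiment.
# alpha_per_drop and the path lengths are arbitrary illustration values.
def transmittance(drops, path_length, alpha_per_drop=0.005):
    # I/I0 = exp(-alpha * concentration * path)
    return math.exp(-alpha_per_drop * drops * path_length)

long_path, short_path = 4.0, 1.0  # end-to-end vs front-to-back (relative units)
for drops in (0, 1, 2, 190, 400, 401):
    t_long = transmittance(drops, long_path)
    t_short = transmittance(drops, short_path)
    print(f"{drops:4d} drops: end-to-end {t_long:.4f}, front-to-back {t_short:.4f}")
```

Whatever numbers you pick, the shape is the same: the first drop removes far more light than drop number 401, and the long path saturates well before the short one.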

I will work on an experiment for change in conductivity with more CO2. It will not be as much fun because the plot is boringly straight.

Oh, should you see someone dramatically demonstrate that one drop of ink in pure water makes a big difference, think about how much difference that one drop makes if it happens to be drop number 401.

This brings me to the fun part of the impact of the change in concentrations. Optically, the increase is reaching a plateau. Conductively, it is just a slow, boring increase. Each impact has effects on different feedbacks with different time constants. Radiant forcing is quick; in just a decade or two there can be some indication. Conductivity takes a long time to have an impact: thousands of years, maybe tens of thousands of years. Nothing like that in the climate records though :)

Sunday, December 18, 2011

Let's Concentrate on the Concentrations

I can compare apples to oranges. Apples make better crisps and pies, oranges make better cocktails and poultry glazes. A little orange zest goes a long way in an apple popover to give it a little zing with coffee in the morning. Small amounts make things happen in life. Carbon dioxide is a small thing that can have a big impact or not depending on the recipe.

I recently recommended that a couple of bloggers join up and do a post on carbon dioxide. Both are bright and knowledgeable with opinions on climate change. DeWitt Payne has been experimenting with a greenhouse effect experiment. He is meticulous in acquiring data to prove that CO2 enhances the performance of a greenhouse. He will find that it does because it does. That is what he is looking for and he will find it.

I mentioned to him before he started that real greenhouses might be a pretty good place to get ideas. He built his experiment around a box with very well insulated sides and bottom, avoiding cardboard and wood, which would have some moisture content that might outgas when heated, complicating his experiment. Along with his direct measurements, he has also included a little work on the radiant physics of CO2 by determining the change in the mean free path of photons absorbed and emitted by CO2. He stated that the change in the mean free path is dependent only on the change in concentration of CO2, which is why I want him to clean his work up and publish it online. Carbon dioxide's radiant properties depending only on concentration are his apples. My oranges are that the change in concentration of CO2 also changes the thermal conductivity of the air.

The apples and oranges thing makes a big difference in what would be a proper evaluation of the greenhouse effect. There are lots of stumbling blocks that can lead to gotcha moments, the first being what is the initial concentration.

If we were only concerned with the radiant properties, any old concentration would do. But since the end result is to compare the results of the box to the Earth, a more representative initial concentration would be in order.

Ice cores in the Antarctic have the longest record of temperature and CO2 change that we have. Those ice cores indicate that about 190 parts per million (ppm) is the lowest concentration that has been in the atmosphere for at least the past 400,000 years. So I am of the opinion that about 190 ppm is the place to start and the Antarctic is the benchmark for the comparison. Inferring that some other place on the planet does something based on what happened in the Antarctic, without knowing what really happened in the Antarctic, is not all that smart in my opinion. Let's just stick to the apples and oranges before making ambrosia.

So the concentration in the Antarctic has changed naturally from about 190ppm to about 280ppm. Currently the concentration in the Antarctic is about 370ppm due to man doing manly things (womanly would work but they don't like taking credit for screwing things up, at least in my household).

So there are a few things I would like to consider in this experiment: temperatures, conductivity, radiant interaction, and nocturnal performance. The nocturnal performance is something I think is pretty important.

The greenhouse effect is not so much about how hot things can get but about how cold they could get. With about six months of no sunlight and about six months of not very intense sunlight, it just seems logical to me to concentrate on the reason we are not freezing our asses off before we figure out how bad being warmer might be. So with an average surface temperature of about -50C or 223K, how much benefit of the greenhouse effect is the Antarctic getting?

Since this post is going to become a little complicated, bear with me while I take a break to do some cipherin' and try to get rid of most of my worst typos and unintentional misspellings.

Antarctic References?

The Antarctic is a pretty brutal environment for any kind of surveying. Good thermodynamic practice requires starting from a solid frame of reference and not making assumptions without seriously thinking about potential consequences. Averages can obscure important signals, but sometimes they are all you have to work with. When using averages, it is not a bad idea to triple check before announcing you have discovered cold fusion or catastrophic global cooling. With the polar vortex, ozone holes and everything else, about the only constant reference temperature is space, the final frontier, at approximately 2.7 K.

Atmospheric R values, that silly basic reference where temperature and energy flow have a linear relationship, might be useful. With an average surface temperature of 223K versus space, the R value would be 1.57 using the perfect black body emission of 140 Wm-2. This is just to be used as a reference for what should be happening between one point on the surface and space. Things will get complicated, so don't freak out.

If I used the new average surface temperature of 289.1 K in the latest Trenberth and Kiehl cartoon, the R value to space would be 0.723, down from the 0.731 of the 288 K value in their older cartoon. Some might think the decrease in the R value indicates an increase in conductivity. That of course would be an assumption, which can make an ass out of people. It is something to keep in the back of your mind though. To compare, the older K&T used 235/390 Wm-2 for a 0.602 TOA emissivity and the newer 2009 cartoon uses 239/396 Wm-2 for a 0.604 TOA emissivity. Both are based on global averages, so both should be considered carefully before leaping to conclusions.

There is a difference between emissivity and the R value, the emissivity only considers radiant flux, R value considers all energy flux. That is the main reason I use R values as a reference, imperfect as it may be.
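The R values quoted above are easy to reproduce. A quick sketch of the arithmetic as defined here, temperature difference to space at 2.7 K divided by the energy flux, using the flux figures already given:

```python
# Reproducing the post's atmospheric "R value" reference numbers:
# R = (T_surface - T_space) / flux, with space at 2.7 K.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def blackbody_flux(T):
    """Perfect black body emission at temperature T (kelvin)."""
    return SIGMA * T**4

def r_value(T_surface, flux):
    """Temperature difference to space divided by energy flux."""
    return (T_surface - 2.7) / flux

antarctic = r_value(223.0, blackbody_flux(223.0))  # 223 K surface, ~140 W/m^2
trenberth_new = r_value(289.1, 396.0)              # newer cartoon figures
trenberth_old = r_value(288.0, 390.0)              # older cartoon figures
print(round(antarctic, 2), round(trenberth_new, 3), round(trenberth_old, 3))
```

The Antarctic case comes out near 1.57 and the global-average cases near 0.72 to 0.73, as stated above.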

Now back to concentrations.

190/1,000,000 is my choice for the base concentration of CO2. For the radiant impact we can assume that nitrogen and oxygen have little impact on emissivity, not a bad assumption for the NOCTURNAL condition I recommended earlier. For conductivity, we would assume a base of 0.024 Wm-1.K-1 at STP for the N2 and O2 that make up basically the rest of the atmosphere. CO2 has a non-linear impact on conductivity; at temperatures colder than STP, its conductivity increases to a peak value at -20 C (253 K). For the change in emissivity due to CO2, DeWitt assumes that only the change in concentration matters. I disagree, but we have to start somewhere, so for now that is the assumption.

The Properties of Carbon Dioxide list the thermal conductivity of CO2 as 0.086 Wm-1.K-1 at -50 C (223 K), roughly the same as at 293 K (20 C), so for an initial comparison we can assume that conductivity is also dependent only on concentration. At 190 ppm, you have to go to five decimal places to see the CO2 impact, 0.02401, and at 280 ppm that changes to a whopping 0.02402, which for most purposes would be negligible.
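The five-decimal numbers above come from simple concentration-weighted mixing. A sketch using the post's two figures, 0.024 Wm-1.K-1 for the N2/O2 bulk and 0.086 Wm-1.K-1 for CO2 at 223 K; linear mole-fraction mixing is the assumption:

```python
# Concentration-weighted conductivity estimate, using the post's values:
# 0.024 W/m/K for the N2/O2 bulk and 0.086 W/m/K for CO2 at 223 K.
def k_mix(co2_ppm, k_bulk=0.024, k_co2=0.086):
    x = co2_ppm * 1e-6  # mole fraction of CO2
    return (1 - x) * k_bulk + x * k_co2

for ppm in (190, 280, 370):
    print(ppm, "ppm ->", round(k_mix(ppm), 6), "W/m/K")
```

Rounded to five places, 190 ppm gives 0.02401 and 280 ppm gives 0.02402, matching the text.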

Obviously there is no reason to consider a change in conductivity. Using the Arrhenius relationship, dF = 5.35 ln(280/190) = 2.1 Wm-2 of additional CO2 forcing on the 140 Wm-2 emission from the surface. Assuming twice the forcing impact at the surface, 4.2 Wm-2, for a surface increase in flux to 144.2 Wm-2, the surface temperature would increase to approximately 224.5 K. That new temperature and flux would move the -50 C surface to about -48.5 C, changing the R value from 1.57 to 1.54, a 1.9% decrease, which should be noted.

Assuming the current concentration of 370 ppm is indicative of the change from the maximum 280 ppm in the ice cores, conductivity would only increase to 0.024017 at 223 K, while the change in forcing, dF = 5.35 ln(370/190) = 3.56 Wm-2, with the same assumptions as before, would produce an average surface flux of 147.1 Wm-2 with an approximate average surface temperature of 225.7 K. So if the Antarctic were its own little planet, nearly doubling the CO2 concentration would produce 2.7 degrees of warming. The new R value would be about 1.52, roughly a 3.5% decrease from the 190 ppm value. The conductivity at this point would have increased to a whopping 0.024023 Wm-1.K-1, an increase of only 0.05 percent.
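The forcing and temperature numbers in the last two paragraphs can be checked in a few lines. This sketch uses the post's own assumptions: Arrhenius-style forcing dF = 5.35 ln(C/C0), doubled at the surface, added to the 140 Wm-2 base emission at 223 K:

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Back-of-envelope Antarctic estimate per the post's assumptions:
# forcing dF = 5.35*ln(C/C0), counted twice at the surface, on top of
# the 140 W/m^2 black body emission of a 223 K surface.
def new_surface_temp(c_ppm, c0_ppm=190, base_flux=140.0):
    dF = 5.35 * math.log(c_ppm / c0_ppm)
    flux = base_flux + 2 * dF
    T = (flux / SIGMA) ** 0.25
    return dF, flux, T

for c in (280, 370):
    dF, flux, T = new_surface_temp(c)
    print(f"{c} ppm: dF={dF:.2f} W/m^2, flux={flux:.1f} W/m^2, T={T:.1f} K")
```

The 280 ppm case lands near 224.5 K and the 370 ppm case near 225.7 K, the roughly 2.7 degrees of warming quoted above.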

This was all estimated assuming that the base conductivity was 0.024 and that only concentration mattered. The Antarctic would have warmed somewhere in the ballpark of 2.7 degrees with the rise in CO2 from 190 ppm to 370 ppm. The Vostok ice cores indicate about 8 degrees of temperature change from about 190 ppm to 280 ppm.

Vostok, where the ice core was drilled, has a range of temperatures from -21 C to -89 C, a fairly wide range. In kelvin, that range is 252 K to 184 K, with a black body flux range from 228 Wm-2 down to 65 Wm-2. For 7.1 Wm-2 of additional forcing to produce about 8 degrees C of change, the temperature would have to be lower than the -50 C used in the estimates above. At the lower temperature, an increase from 184 K at 65 Wm-2 to 192 K at 77 Wm-2 would be a 12 Wm-2 increase for an 8 degree increase in temperature.

This brings us to the question: what are the proper assumptions?

The thermal conductivity of CO2 increases from 0.086 @ -50C to 0.115 @ -20C. The impact of an assumed concentration only dependent change in CO2 forcing increases as temperature decreases. Oxygen at low temperatures exhibits magnetic properties and Vostok is near the southern geomagnetic pole. At 197.5 K and 1 atmosphere, carbon dioxide can jump from gas to solid and back at will. Last but not least, at 184 K the relationship between thermal conductivity and electrical conductivity may be blurred in a magnetic field.

It would be nice if a simple change in concentration of a trace gas could answer all the questions. It doesn't quite look like it explains whether the Vostok ice cores indicate global climate change or geomagnetic changes in the past, which may be partial drivers of climate change.

Since I was cruising the internet, I ran across a post on Watts Up With That. I take most posts with a grain of salt, but this one had an interesting graph on the Last Glacial Maximum (LGM), which I will see if I can post here as a link or photo.

Cool. The post is CO2 Sensitivity is Multi-Modal - All Bets are Off, by Ira Glickstein. I haven't checked out the post completely, but the general premise agrees with my line of thinking. I do note that the Arctic part of the plot is not consistent with what I would expect, which would be closer to -6 C on average with a great deal of regional fluctuation because of the Gulf Stream.

The interesting part is the southern temperate LGM temperature change, which I think agrees with the expected temperature and CO2 relationship in the Vostok ice cores, not the Antarctic relationship I pointed out above. O18 concentrations in the Antarctic are unlikely to be produced locally; they are transported from the southern temperate zone and southern tropical convective zone. How the ratios are amplified is the question, which I still think is due to the southern magnetic field fluctuations.

But before I wander too far astray, how about More on Let's Concentrate on Concentrations?

Saturday, December 17, 2011

Greenhouse Effect Building Block Experiment

Lots of people have set up experiments to prove or disprove the greenhouse effect. The fact is there is a greenhouse effect or atmospheric effect on surface temperature. I recommended an experiment a while back that no one has taken me up on.

It is really pretty simple: stackable plastic cylinders with IR transparent faces and football type valves for adjusting gas composition and inserting thermal probes. If you want to spend big money, Pete's Plugs are sized for larger diameter probes and have convenient caps to reduce leakage.

The concept is simple, build a number of test cylinders and test away. You can use three or more cylinders with various concentrations and temperature differentials. Change the order and retest. Amaze your friends! Prove to the world that the "greenhouse effect" does exist.

Actually, this is really not a bad test assembly. You can use a water bath as a source and an ice bath as a sink to test real world temperature ranges. Use LED light sources to adjust simulated solar visual spectrum forcing. Since all of the clear faces would have the same R values and refraction index, just changing the sequence of cylinders would correct for minor variations. Test with insulation and without. Be careful though, some layers might respond differently than you think :)

Oh, put your results online like a real climate scientist should.

Blog Science is Sneaking Up on the Answer

With all the confusion on the internet with too much and too biased and too inaccurate information, it is nice to see some bloggers and commentators gradually moving toward the solutions of complex problems. DeWitt Payne and Joel Shore are two of the denizens of the climate blog world that have taken on the task of proving instead of accepting theory.

In this comment on Dr Judith Curry's blog, DeWitt is highlighting the exact issue I have with the science. He determines the emissivity of carbon dioxide from the surface to the top of the atmosphere. While that is useful information, the impact of carbon dioxide on the surface is centered in the lower atmosphere. Where carbon dioxide's impact is greatest is also where conductive and convective energy transfer interact to amplify, positively or negatively, that impact.

In the lower atmosphere, the thermal coefficient of conductivity of CO2 and its mixed gas environment is on the same order of magnitude as its radiant properties. The relationship is non-linear in that conductive and radiant properties vary differently with temperature, moving in opposite directions as temperature decreases.

Hopefully, these guys can combine their work and start a technical post so we can get away from all the political BS.

Note: The emissivity of the surface of the Earth is approximately 0.996 due to water being nearly a perfect black body. The effective emissivity of the surface is approximately 0.825 based on the "benchmark" greenhouse effect. That effective emissivity should be on the order of 0.857. That difference appears to indicate the magnitude of conductive/radiant interaction at the surface, which I have not seen considered in the debate thus far.

This emissivity change requires us to Concentrate on Concentrations.

Friday, December 16, 2011

String Theory Gives Me a Headache

String theory is one approach to connect everything to everything. Nothing wrong with that, but I have trouble thinking in four dimensions much less ten. If you think string theorists are warped, you would be right. They pretty much have to be as a job requirement. They are the geek's geek.

In another post I mentioned disc models for wave propagation. Discs were used by Planck, Stefan, Boltzmann, Maxwell and most everyone in the day. That started because of a scientific challenge. I forgot the guy's name, but he challenged everyone to tell what was coming out of a hole in a furnace. A hole is pretty much a disc, so electromagnetic models are based on discs.

I don't have a problem with discs. Like a coin though, they have two sides. So a disc based radiant model should consider both sides of the coin. Once you take the disc out of the furnace wall, the infinite radiation source, you can apply conservation of energy to the disc model. That gives the flip side of the disc some meaning.

With the simple disc model, allowing for the flip side energy required, you get the classic factor of 2 for the maximum interaction between discs. I went through that in Fun with Radiant Disc Models, which, if anyone cares to decipher my scatterbrained logic, kinda proves that the maximum impact of the atmospheric effect is two times the difference between the surface energy flux and the top of the atmosphere (TOA) energy flux. With roughly 390 Wm-2 at the surface and 240 Wm-2 at the TOA, 150 Wm-2 is the difference, so 300 Wm-2 would be the maximum.
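For what it's worth, the arithmetic of that two-times ceiling is simple enough to write down; the flux figures are the rough global averages quoted above:

```python
# The post's two-times ceiling on the atmospheric effect:
# at most twice the surface-minus-TOA flux difference.
surface_flux = 390.0  # W/m^2, rough global-average surface emission
toa_flux = 240.0      # W/m^2, rough top-of-atmosphere emission

difference = surface_flux - toa_flux  # the "greenhouse" difference
maximum_effect = 2 * difference       # the claimed physical ceiling
print(difference, maximum_effect)
```

That gives the 150 Wm-2 difference and the 300 Wm-2 maximum the disc argument points at.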

I also mentioned that value is not necessarily a surface impact, since the atmosphere absorbs and reflects a portion of the energy. This should be obvious in my opinion, that the ratio of atmospheric to surface absorption matters and that there is a physical limit to the atmospheric effect. The Kiehl and Trenberth energy budgets screwed that up, so there is a great deal of confusion about what the atmosphere can and cannot do.

To get everyone on the same page, K&T has to go. Face it, it is just a cartoon, and while funny in a geeky kinda way, not very helpful. They are comparing combined flux impacts to just radiant energy theory and double counting some things and omitting others. That is a CF if you know what I mean. FUBAR for the guys with military experience.

So for all the wannabe redneck theoretical physicists out there, think about multi-disc radiant models and the 2 times thingy. Twice, half, about half, most (as in maybe greater than 50%) and multiples of any of those are all possible evidence, in non-linear dynamic systems, of changes in response to some variable. You can call that variable a forcing, a feedback or a thing-a-ma-jig.

For the title of the post, an infinitely long multi-disc model is like a string. In one dimension it would be a dot, in two a disc, in three a cylinder and in four a rope or string. Like I said, more than four gives me a headache, so I am going to try understanding the first three, then add the fourth. If there is enough Excedrin around, then I may ponder another. Right now, four should be enough to get a grip on the climate puzzle.

Thursday, December 15, 2011

Climate Change and Fishing Summit - Solar and Sensitivity

Dynamic systems are interesting critters. From personal experience in HVAC, systems can be described by system performance curves. Under certain conditions you can expect certain performance. A fan curve is a very simple representation of a dynamic system.

Ideally, a fan follows the rules: more RPM means proportionally more CFM, which produces a squared increase in static pressure, requiring a cubed increase in brake horsepower. Those are the fan laws. Centrifugal airfoil fans are high efficiency fans that use airfoil shaped blades to take advantage of the same physics that allow airplanes to fly.
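The fan laws in that paragraph can be sketched directly. The starting CFM, static pressure and horsepower below are made-up illustration numbers, not data from any particular fan:

```python
# The fan affinity laws: flow scales with the RPM ratio, static
# pressure with its square, brake horsepower with its cube.
def fan_scaling(cfm, static_in_wg, bhp, rpm_ratio):
    return (cfm * rpm_ratio,
            static_in_wg * rpm_ratio ** 2,
            bhp * rpm_ratio ** 3)

# Example: a 10% speed increase on a hypothetical 10,000 CFM fan.
cfm, sp, bhp = fan_scaling(10_000, 2.0, 10.0, 1.10)
print(f"{cfm:.0f} CFM, {sp:.2f} in. w.g., {bhp:.2f} BHP")
```

Note how a modest 10% speed bump costs roughly 33% more horsepower, which is exactly the kind of non-linearity the rest of this post is about.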

Not to pick on this fan design, they work great, but they tend to have unstable operating regions: areas where very small changes in static pressure produce a much larger change in delivered airflow. In a simple installation, the fan speed or air distribution system can be adjusted for the correct flows and energy efficiency. With a more complex system design, controlled air distribution devices, Variable Air Volume and Controlled Constant Volume devices, adjust to maintain their desired settings, changing the static pressure and the performance curve of the fan system. This can cause unwanted feedback, creating strange and awe-inspiring oscillations in the system where ceilings start falling in, roofs blow off, and doors and windows slam shut and open. Generally, the customer is not particularly happy when this kind of stuff happens, but it can be entertaining. The point to take from that little real world example is that non-linear dynamics can make small changes cause big deals in a heartbeat.

Our climate system is a nonlinear dynamic system. As such, under certain situations, small changes, like solar variation, can make big changes.

The correlation of solar intensity to climate conditions is an indication of the impact of solar variation on climate. It is not a perfect correlation and it should not be in a non-linear dynamic system. There are numerous feedbacks that at different times would have different impacts. When a few of these feedbacks get together or synchronize, the impact is much greater than some would expect.

A good rule of thumb in a non-linear dynamic system is that any feedback can have twice its normal impact. Two feedbacks synchronized would also have twice their expected impact.

Climate Scientists seem to think that there is A climate sensitivity, which basically proves that they have no clue what they are doing. This is a sad reality, the guys in charge of saving the world from ourselves are clueless if they do not recognize the role of non-linear dynamics. Which is all too obvious, because increased CO2 changes the dynamics of the system, producing a shift in the range of sensitivity, not a specific new sensitivity. That is why the Antarctic is not warming and the Arctic is warming like a bitch.

Some call non-linear dynamic systems chaotic. Most of the more clueless climate scientists don't like the term chaotic, because it implies they placed their bets on the wrong theoretical horse. They did, if they are expecting any linearity in a climate system.

Joshua and Perception of Bias

Joshua is a frequent commenter on various climate blogs whose main concern appears to be the political motivation of the denizens of the climate change debate. On my last post he points out that some of my comments appear more politically motivated than I consider myself to be.

I consider myself to be a centrist and a strong contrarian. So the appearance of my political allegiance would change as required to balance overconfidence in ideology. No one is perfect; once a group starts thinking with CONFIDENCE that they have perfect solutions, that is when it is time to worry, in my view.

I started building my Bullshit detector some time ago, and the single largest bullshit flag is overconfidence. The bullshit detector list is long and will forever be incomplete, because bullshit seems to be a living thing, adapting to changing times.

It is pretty easy to recognize bullshit.

Out of Context rhetoric. Dr. Eric Steig has an opinion on being taken out of context. Which on closer inspection...

Kill the messenger: Wrongdoing like plagiarism is bad. You can't trust the results of a report that plagiarized anything. Everything has to be properly attributed to its source. I agree, to a point. I just recognize that people do make mistakes. The measure is owning up to them.

Since my more recent interests are science related, Joshua perceives me to be conservative, possibly ultra conservative. I am just following the bullshit.

So are some others. Watch the pea may be more politically correct than follow the bullshit.

Wednesday, December 14, 2011

What the Heck is Actually Happening?

The Antarctic puzzle is a good one. Obviously, due to much lower surface temperatures, ice and near freezing sea water, radiant and conductive heat loss would be less than at any other location on Earth. The Antarctic should gain heat from the tropics as the tropics warm. The increase in conductivity, thanks to the non-linear properties of carbon dioxide, would explain part of the lack of warming, by allowing more heat transfer from the surface without transferring as much heat to the lower atmosphere. At the point where radiant transfer of energy becomes more significant than conductive, the atmosphere should warm more significantly. That appears to be the case even with the rather poor data. But what the heck is happening with the CO2 concentration of the atmosphere?

There can be increased absorption of CO2 by the Antarctic ocean, but it would seem that would be more variable. There is a hint that the geomagnetic field plays some role in the stabilization of both temperature and CO2 concentration, which brings me back to a chemical impact, likely CO2 and O3 getting together either on their own or in combination with some other molecule to do some magnetic/electric stimulated reaction.

In plasmas, CO2 and CH4 enhance conductivity. The same basic interaction could be happening under Antarctic conditions, but in more of a law of large numbers kinda way. So how would I approach determining the magnitude of a weak magnetic field enhancement of a chemical reaction in an environment with crap for quality data?

I hate time series analysis, but the ozone concentration versus mid troposphere temperature may offer some clue for an approach. I still doubt the energy of the chemical reaction will be significant, but it may link to a better indication of thermal/non thermal flux interaction. Weird stuff is happening.

Tuesday, December 13, 2011

I Hate Politics, But it Seems to Have Invaded Physics

There is a guy who thought some of the climate science big wigs were blowing smoke up his ass. Actually, there are quite a few, but one, Jeff Id, helped author a paper showing who was blowing the smoke. Jeff is a little bummed out now at Business as Usual.

The Steig of Steig et al. 2009 is Eric Steig of the University of Washington, and one of the al.s is Michael Mann of Penn State University. You may have heard of Mann and his hockey stick, law suits and all around entertaining scientific presence. Steig and Mann are contributors to the world famous climate blog RealClimate, the place where you go to get the finest in current climate science information from the real experts.

The latest climate pooh like,

Why Does the Stratosphere Cool While the Troposphere Warms

Oops, sorry, that post is obsolete; evidently they didn't understand things when they wrote the post. Shit happens.

What Does the Lag of CO2 Behind Temperature in Ice Cores Tell Us About Global Warming? Hmmm? "In other words, CO2 does not initiate the warming, but acts as an amplifier once they are underway. From model estimates, CO2 (along with other greenhouse gases CH4 and N2O) causes about half of the full glacial-to-interglacial warming." Okay, something unknown starts the warming from a glacial period. CO2 and all the other gases may cause half of the warming from the Last Glacial Maximum based on the models.

Antarctic Cooling, Global Warming

Okay, CO2 causes about half of the not warming? Oh, wait!

Significant Warming in the Antarctic Troposphere

Whew! There may be some warming after all.

State of the Antarctic Red or Blue?

Eric and Mike did a fine job of finding some statistically significant warming. Oops! Dr. Hu McCulloch seems a bit ticked off. Seems that Eric and Mike made a mistake, then didn't mention who found their mistake. What is that called? Nature is a pretty big scientific journal. It must be embarrassing to screw up in a prestigious scientific journal.

Oh nooooo! They did it themselves!

And now, ladies and gentlemen, just in time to influence the leaders of the free world with the Fifth IPCC report,

"Box 5.1: Polar Amplification
Instrumental temperature records show that the Arctic (Bekryaev et al., 2010) and the Antarctic Peninsula [(Turner et al., 2005; Turner et al., 2009)] are experiencing the strongest warming trends (0.5°C per decade over the past 50 years), almost twice larger than for the hemispheric or global mean temperature [(IPCC, 2007)]. West Antarctic temperature also displays a warming trend of about 0.1°C per decade over the same time period (Steig et al., 2009; [Reference needed: O'Donnell et al., ?])." Ya think that they might mention that the Steig paper is flawed?

I encourage everyone to review the archives of realclimate. It is an online testimony to the hilarious world of climate science. Who knows, the Antarctic may be warming this week. At least the fate of the world is in good hands.

Friday, December 9, 2011

The Bucky Ball or Spherical Truncated Icosahedron

The Bucky Ball or spherical truncated icosahedron resembles a soccer ball. When I think of the best shape for a three dimensional model of Earth's thermodynamic system, that is the cat's ass. Hermann Harde used the shape for a model to determine the surface sensitivity to a doubling of CO2. He had a good idea, just didn't go far enough with it.

The only way to get a true feel for the complex interactions of environmental boundary layers is with a multilayer model. Two layers is just a drop in the bucket, which is where Hermann's first shot ended. Because he only used two layers, his estimate appears to be a little on the low side; not that his number is wrong for just CO2, but it is not indicative of the overall system response to CO2.

For outgoing longwave radiation, a minimum of three layers should be required to just get in the ballpark. Using the surface as the reference, the average tropopause pressure and the average surface radiant layer, approximately 600mb, would be needed to trace variations in forcing.

Using the Bucky Ball model, the surface would be a Bucky sphere with radius equal to the Earth's average radius, the tropopause a Bucky sphere with radius of the Earth plus the average height of the tropopause, and the same for the 600mb boundary layer. With the truncated surfaces aligned, each facet would represent an energy surface analogous to the multi-disc model.

I am not particularly sure why this is confusing to some that have heard my suggestion. It is a simple three dimensional model that could standardize the information used for various approaches to predicting climate fluctuations. When it comes to CO2 forcing, standardization is sorely needed, since most folks I know don't live in the tropopause. It is the impact at the surface that matters, in any case.

Wednesday, December 7, 2011

A Rose by any Other Name Would Still Be Non-Linear

Shakespeare was a pretty good communicator. I do a good job with my audience, fishermen. When it comes to academic communication, I need a Redneck-to-geek translator.

I say frame of reference is the first consideration in thermodynamics; the geeks think I am misapplying the meaning of frame of reference. WTF?

I say increasing the thermal conductivity of air changes the rate of heat flow, I get blank stares.

I say you have to recognize the boundary layers before you can formulate a solution and all I hear is chuckles.

That is the nature of a complex problem, some easy solutions appear masking the more complex relationships.

Radiant forcing is an easy solution. Different molecules respond to different wavelengths or photon energies producing an impact, obvious in the radiant transfer of energy. The not so obvious is the interactions of the change in energy flux or flow with other molecules and other methods of energy transfer. There are critical points where the interactions need to be understood.

Saturation is a critical point in radiant heat interaction. Since molecules are limited in the spectrum or range of photon energies they can absorb and emit, as they approach a saturation point, where all energy levels are occupied, reactions and interactions have to change. The changes depend on the environment where saturation occurs, temperature, pressure, composition of molecules, relative velocity of molecules and relative energies of molecules, to name a few.

Two that appear to be more important than considered are the relative velocities and energies. When a radiant spectrum range reaches saturation, it no longer can change the relative velocities or energies with the addition of more absorbent molecules in that spectral window. For there to be a change in impact, there has to be a change in one or more of the other variables: temperature, pressure or composition. Adding more of just the same molecule in the saturated spectrum produces interesting changes.

At perfect saturation, all the interactions are at a maximum. The average relative velocities and energies are maximized for maximum interaction with the conditions in that environment. Adding more molecules in that radiant spectrum reduces the distance between the similar molecules, which changes the average relative velocity and energy of the photon exchange. This increases the probability that the photons involved in the interaction can match the energy and wavelength of other molecules with a different radiant spectrum. This would cause a broadening of the overall radiant spectrum of the environment. This is the effect found useful in laser technology. Energized photons in a chamber designed to reflect photons above or below a desired wavelength bounce back and forth until their relative velocity and energies match those of another molecule in the desired emission spectrum.

For a CO2 laser, the peak energy level of the CO2 is close to, but not exactly equal to, a nitrogen vibrational peak. With the right chamber dimensions and the right excitation, the relative velocities and energies of the two molecules match closely enough for the nitrogen molecule to transfer its absorbed energy to the CO2, which emits at the wavelength the discharge mirror is tuned to release. The CO2 lases.

Since most molecules have emission spectra inclusive of a number of wavelengths, without the tuning of the discharge mirror, other lasing wavelengths which can also be excited are contained in the lasing chamber. All of these other wavelengths are less energetic than the desired peak wavelength. Contained in the chamber, they can either climb up, step by step, toward the peak wavelength or down to a weaker energy wavelength. Nitrogen is a good lasing gas because its peak energy wavelength is relatively isolated from weaker wavelengths.

In a mixed gas environment, the number of weaker energy wavelengths increases. Once the maximum energy wavelength is saturated, the probability of weaker wavelength excitation increases, effectively stepping down the peak absorption and spreading the energy more uniformly across the entire spectrum of the environment. Wavelengths where there is an easier path to de-excite, or emit the energy, will allow photons to leave the local environment, freeing that wavelength for excitation yet again.

If we continue to add molecules in an open environment, the molecules will diffuse uniformly through the environment within the limits of their physical properties, based on temperature, pressure and the limits of gravitational and magnetic/electric field constraints. As the environment of the individual molecules changes, the path of least resistance for emission changes. Spectral broadening in one environment leads to increased excitation in the weaker spectral energies, increasing the rate of photons finding an unoccupied transmission frequency/wavelength. Free emission windows increase with the reduction of gravity, which reduces the density of molecules and, in our atmosphere, the average temperature/energy of the radiant spectrum. Increasing the concentration of one type of molecule increases the average height of the effective maximum radiant layer of that molecule's spectrum relative to the surface.

Below this average maximum energy layer, the average relative velocity and energy decrease. At lower energies, the conductive properties of the environment are enhanced. Some molecules, water vapor for example, have nearly constant thermal conductivity. Other molecules, carbon dioxide for example, have nonlinear thermal conductivities. Once radiantly saturated, the thermal conductivity of both of these molecules is enhanced. The enhancement is greatest for water vapor where the concentration of water vapor is the greatest. The enhancement for carbon dioxide is greatest where the concentration of water vapor is the least. Both molecules are in competition for the available energy.

So I find the claim that the change in forcing is equal to 5.35ln(Cfinal/Cinitial) a bit simplistic. I am rather surprised that so few seem to agree.
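For reference, the expression being called simplistic here is the standard simplified CO2 forcing formula; it is easy to evaluate, which is part of its appeal:

```python
import math

# The standard simplified CO2 forcing expression referenced above:
# delta_F = 5.35 * ln(C_final / C_initial), in W/m^2.
def co2_forcing(c_final, c_initial):
    return 5.35 * math.log(c_final / c_initial)

print(round(co2_forcing(560, 280), 2))  # doubling: ~3.71 W/m^2
print(round(co2_forcing(390, 280), 2))  # ~390 ppm vs preindustrial 280 ppm: ~1.77 W/m^2
```

The logarithm is why each successive doubling adds the same forcing; the post's argument is that this one-line summary hides the conductive and layered effects discussed above.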

Saturday, December 3, 2011

Defining a Chaotic System?

There is a lot of talk about the need for multidisciplinary approach to Climate. To me that is an obvious requirement, starting with just the definition of what the "Climate System" entails. Playing with the opposing forces and multi-disc models, I am finding there are quite a few energy boundary layers that cannot be dismissed as negligible.

To me, the Earth system begins with the dynamic energy in the core itself. The internal dynamo generates heat and a magnetic field and, as a fluid, has some impact on the rotation/tilt of the planet itself.

Between the core and space are thermal/radiant boundaries: the 4C sea water density boundary, the surface/atmosphere boundary, the atmospheric boundary layer, which I consider to include the latent boundary, the thermal/radiant boundary layer, the tropopause boundary and so on until the thermal/non-thermal radiant boundary layer and space, in just the vertical. This should be the system envelope.

With opposing forces, the impact of any of these layers can be twice its singular value. For example, Arrhenius' greenhouse relationship appears to be an ideal relationship that would produce twice the impact. Based on the climatic conditions at the time of his paper and the region where he lived, that is what he would have discovered attempting to explain the glacial/interglacial climate transitions: the ideal relationship. In the real world, only half of his expectations are realized globally, with regions near ideal conditions approaching his expectations.

This is one of the largest mistakes being made in climate science, comparing the real world to an ideal world. People lose sight of the subtle, expecting the exceptional.

Thursday, December 1, 2011

More Fun with Multi-Disc Radiant Models


With the three disc models, I was attempting to simulate what each disc would sense. A disc is limited to energy bands or lines that can interact with the material of the disc. When I use a CO2 disc, I should use only the CO2 spectrum to determine what interactions that disc would have with the energy incident on its surface. I cheated and used a percentage of the spectrum based on the difference in temperature between the source of the energy and the sink, or disc, receiving the energy. That is not a major cheat; it gives a ballpark, which is all I was looking for at the time.

I can adjust my little model to get more information on just CO2, with just a minor tweak, by making all the discs CO2 discs. That way a warmer disc would interact with all of the energy incident on its surface from a cooler disc, and two equal temperature discs would not have any energy pass through. The two equal temperature discs with perfect absorption and return are fun to think about.

Disc 1 emits X towards disc 2, which returns X/2 to disc 1, which returns X/4 to disc 2, and so on. Since disc 1 started at X on the face toward disc 2, its effective emission due to interaction with disc 2 would approach 2X. Because of conservation of energy, the other side of disc 1 would approach 0 as the opposite face approached 2X. The same thing would happen at disc 2, since it started at the same temperature. We would end up with two discs emitting and receiving 2X on the common faces and emitting 0 on the opposite faces. There is no perfection, so this would be impossible, but it is where the discs would be headed.
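The back-and-forth exchange above is just a geometric series; a small sketch confirms the common face approaches 2X:

```python
# The exchange described above: disc 1 emits X toward disc 2, which returns
# X/2, which returns X/4, and so on. The effective emission at the common
# face is the geometric series X*(1 + 1/2 + 1/4 + ...), which approaches 2X.

def common_face_flux(x, bounces):
    total = 0.0
    contribution = x
    for _ in range(bounces):
        total += contribution
        contribution /= 2.0
    return total

x = 1.0
print(common_face_flux(x, 5))    # 1.9375, already close to 2X
print(common_face_flux(x, 50))   # ~2.0 to machine precision
```

The series converges fast, which is why the discs "head toward" 2X but never quite get there with any finite number of exchanges.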

Without some other source of CO2 friendly photons, nothing else would happen. If we inserted another CO2 disc between the two, all the faces would approach the same value of X. If the inserted disc had initial flux values of 2X, nothing would change with the other two discs which were in equilibrium at the perfect 2X flux prior to insertion. If we inserted a disc with X flux on each face, it would approach a higher value and the outer discs would have to find a new equilibrium. But what would happen on the other faces of the original discs?

Since the new disc had energy equal to half the total system energy, the outside faces of the original discs can emit a portion of that energy, so at perfection, the opposite faces could emit X each, or half of the additional 2X per face of the new inserted disc. That poses a small problem though. With three discs we have six faces, two at X and four at 2X. That would make a total of ten X when we started with eight X: two faces at 2X for the original discs and two faces at 2X for the inserted disc. So is this model wrong?

Not really, the Xs are energy flux or energy flow, which I am not allowing to flow. This model is similar to charging a capacitor which can only hold so much energy but can appear to have a greater potential energy. If I allow some of the energy in the model to flow for a short amount of time, the X value of all the faces would decrease proportionally. Since the disc in the middle has to interact emitting its energy with the outer discs, it would reach zero energy last by some fraction of a second.

If instead of letting energy flow, I inserted another disc at 2X per face, nothing would change, just the time required for the inside discs to lose their energy. If I inserted an infinite amount of discs at 2X per face, only the time to discharge would increase because more energy is stored.

This should be an example of saturation. The CO2 discs cannot manufacture energy, only delay its transfer. We have a stack of disc swapping photons with no net energy flow.

Now let’s imagine I take a stack of discs more massive than the stack above, capable of emitting X plus something and put it up to one of the faces. The more massive stack contains more energy, so there can be flow from it to the less massive stack of discs. The adjacent faces would approach 2X plus 2 something and the outside face of the less massive stack would approach X plus half of something. If the more massive stack could maintain its energy, the outer face of the less massive stack would emit X plus half of something as long as there was energy available.

This would be steady state energy transfer at saturation. As long as the more massive stack can maintain X plus something, the face between the more and less massive stacks can maintain the 2X plus twice something, while the opposite face of the less massive stack maintains emission of X plus half of something.

Adding more discs doesn't change anything unless the space between the faces of the discs is not at 2X plus twice something. If the energy available at the face changes, then the flux interaction at the faces changes by twice that change, but that only applies to the CO2 portion of the change.

For example: 3.7Wm-2 of additional CO2 forcing would produce a 7.4Wm-2 net effect. If the surface temperature were 288K, it would increase to 289.3K, a 1.3K increase in temperature. If the same forcing were at the Antarctic surface at -20C, or 253K @ 232Wm-2, then the result would be 255K, or 2 degrees of warming. Since the Antarctic is not warming significantly, odds are that any forcing due to CO2 change is not near the surface.
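These numbers check out against the Stefan-Boltzmann law; a quick verification sketch:

```python
# Checking the example above with the Stefan-Boltzmann law, F = sigma*T^4.
SIGMA = 5.67e-8  # W/(m^2 K^4)

def flux(t):
    return SIGMA * t**4

def temp(f):
    return (f / SIGMA) ** 0.25

# Surface at 288 K plus the 7.4 W/m^2 net effect:
print(round(temp(flux(288) + 7.4), 1))  # ~289.4 K, roughly the 1.3 K warming quoted

# Antarctic surface at 253 K plus the same 7.4 W/m^2:
print(round(flux(253), 1))              # ~232.3 W/m^2, matching the 232 above
print(round(temp(flux(253) + 7.4), 1))  # ~255.0 K, the ~2 K of warming quoted
```

The same flux increment buys more temperature change at a cold surface than a warm one, because flux goes as the fourth power of temperature; that is the asymmetry the example leans on.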

This estimate does not attempt to determine the impact of the change in temperature due to more CO2, but instead changes in forcing that are impacted by CO2 at saturation. So it may be useful for locating the effective radiant layer of CO2 to determine the feedback of water vapor on the altitude of the effective radiant layer. An atmospheric layer that shows nearly twice the flux change of another layer could hold clues to the magnitude of the CO2 impact.

I have no clue if this is actually measurable because it is not a true net flux, but it could be an indication of location of the radiant layer because local temperature should change even though there is no real energy transfer, just a backup or flux charging, if you will.

Note: This is just a musing post. The increase in Antarctic 600mb temperature by 0.7C per decade with no apparent impact on the surface got me wondering if that oddity may be of any use. I haven't checked it out and may never. Just in case anyone would like to, it may be worth a few minutes.

I know this is a simple model, but that can be a good thing. Spectral broadening and a few other things change the basic relationship. An almost two times impact, though, is a lot easier to spot than an "I have no clue" impact. Since it really doesn't matter where in the stack the hot disc is placed, the Antarctic troposphere might have an odd hot spot or two worth checking out.

Tuesday, November 29, 2011

Thar be Monsters!

Web got his analysis posted on Dr. Curry's blog. Good job, BTW, with the energy well.

He hits the nail on the head with the latent heat of fusion boundary. The latent heat of evaporation should come along soon. Basically, it all boils down to available energy. The 390Wm-2 at 288K is the stumbling block. With 79Wm-2 average latent flux, the effective global average radiant energy is 311Wm-2, not 390Wm-2, which appears not to be properly considered. More CO2 forcing would increase the latent flux, reducing the available energy by shifting the effective surface average radiant layer upward. Pretty simple really.
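A quick sketch of the arithmetic above; the equivalent blackbody temperature at the end is my own addition for illustration, not a figure from the post:

```python
# The arithmetic above: subtract the average latent flux from the blackbody
# surface flux to get the "available" radiant energy.
SIGMA = 5.67e-8  # W/(m^2 K^4)

surface_flux = SIGMA * 288**4   # ~390 W/m^2 at 288 K
latent = 79.0                   # average latent flux from the post, W/m^2
available = surface_flux - latent

print(round(available))                        # ~311 W/m^2, as stated
# Equivalent blackbody temperature of 311 W/m^2 (my addition, for scale):
print(round((available / SIGMA) ** 0.25, 1))   # ~272 K
```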

Oh, for the guys at RealClimate, it is not genius, it is common sense. I always thought there was a difference, maybe not.

Fishing for Confirmation, Among Other Things

Very nice day fishing yesterday. Jim and family from Michigan experienced a day in my back yard fishing for Spanish Mackerel mainly, with enough Snapper volunteering for dinner to make a good day. The one that got away remains unidentified, but Cobia is suspected.

Webhubtelescope, the pseudonym of one of the Curry blog denizens, is fishing for climate sensitivity and natural variability. He appears to be a good angler for those tasks. With the Antarctic ice cores he found a range for natural variability of ~0.2C. When I asked him if 0.4C was possible, he stated it was, but only at a 5% level of confidence :) He has a Schmittner analysis here to check the estimated range of climate sensitivity.

He is now using the Greenland ice cores to do the same thing, but has yet to finish. When he is finished, he should find the range is at least twice the range he found for the Antarctic. If he had access to equatorial alpine cores, he would find a range lower than the Antarctic. If I am right, the root mean square of the three changes would be the approximate global average, placing natural variability in the range of 0.3 to 0.4C for the globe, or half, maybe a little more, of the modern warming. But equatorial ice cores are pretty rare.
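The root mean square combination suggested above can be sketched as follows. Only the 0.2C Antarctic range comes from the post; the Greenland and equatorial values are placeholder assumptions (twice the Antarctic, and somewhat lower, respectively), not data:

```python
import math

# Hypothetical sketch of the RMS combination suggested above.
antarctic = 0.2    # from the post
greenland = 0.4    # assumed: "at least twice" the Antarctic range
equatorial = 0.15  # assumed: somewhat lower than the Antarctic

ranges = [antarctic, greenland, equatorial]
rms = math.sqrt(sum(r * r for r in ranges) / len(ranges))
print(round(rms, 2))  # ~0.27 C with these placeholder values
```

With these guesses the RMS lands just under the 0.3 to 0.4C range claimed; the real answer obviously depends on what the Greenland and equatorial cores actually show.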

So Web will probably determine that the range of natural variability is 0.x and that there is only a 5% chance that that range will be exceeded. Five percent of 100,000 is 5,000, and 5% of that is 250. So with great confidence, natural variability could only exceed the estimates determined by ice cores for about 250 years. Very useful information :) Compare that to the typical frequencies of a non ergodic system and what would you get? So if there is only a 5% chance or so that we could have a climate like we are experiencing now, what are the alternatives?

Sunday, November 27, 2011

The Surface-Tropopause Connection

Previously I did a three disc model with 2T as the source and T as the sink, then inserted another disc. I was going to add another disc representing CO2, but my spreadsheet to calculate the CO2 spectrum at lower temperatures is being fought by my urge to nap frequently following typical seasonal gluttony.

Since I forced myself to stay awake for a football game that deserves a nap, I figured I would lay out a slightly different model using the average tropopause versus surface.

So T this time is 288K @ 390Wm-2, and the tropopause will be approximately 213K, or -60C, at 117Wm-2, so that will be 213/288 = 0.74T @ 0.3F. As before, this would be a night thing, so we don't have to deal with solar absorption in the atmosphere.

The surface disc in this case is massive compared to the tropopause disc and the CO2 disc. If I insert the CO2 disc at some temperature between the source and sink, it will stabilize at a temperature and flux that should represent the atmospheric effect of CO2 to some degree. Note that the actual tropopause is stabilized at 0.3F.

When I insert the CO2 disc, it absorbs from the source and returns to both source and sink equally. If the CO2 disc absorbs 117Wm-2 plus 3.7Wm-2, then its initial effective temperature would be 214.8K @ 120.7Wm-2. It would return half, or 60.3Wm-2, which, since the source is massive, would be returned in the same amount. However, since the CO2 disc can absorb at maximum 120.7/390, or 30.9%, of the broad source emission due to the returned flux, it will absorb 18.8Wm-2, returning 9.4Wm-2 the second time around. So, 120.7 + 60.3 + 18.8 + 9.4..., so we will say it would approach 210Wm-2 for a round number. The flux from the source not absorbed by the CO2 spectrum would pass through to the tropopause, so there may be some slight change to the tropopause temperature. The tropopause is fairly stable, so just for grins we will assume its change is negligible for the moment.
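One way to read the pass-by-pass accounting above is as a series where each returned half comes back from the massive source and is re-absorbed at the ~30.9% CO2 fraction. The exact alternating pattern here is my assumption about what the arithmetic intends, and it lands near the post's round number:

```python
# Sketch of the accounting above: the disc first absorbs 117 + 3.7 = 120.7
# W/m^2, returns half to the massive source (which sends it back), and can
# reabsorb only the CO2 fraction (120.7/390, ~30.9%) of each returned pass.
initial = 117.0 + 3.7            # 120.7 W/m^2
fraction = initial / 390.0       # ~0.309, the CO2 share of the broad emission

total, term = 0.0, initial
for i in range(40):
    total += term
    # alternate: half returned to the source, then the CO2 fraction reabsorbed
    term = term / 2.0 if i % 2 == 0 else term * fraction

print(round(total, 1))  # ~214 W/m^2, close to the ~210 round number above
```

The first few terms are 120.7, 60.3, 18.7, 9.3, 2.9, matching the numbers in the paragraph; the tail contributes only a few W/m^2 more.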

If this happens to be the stable temperature of the CO2 disc, then it is at an equivalent temperature of 246.9K @ 210Wm-2, which is -26C. So the addition of 3.7Wm-2 worth of CO2 forcing would change the effective emissivity of some layer of the atmosphere from some temperature to approximately 246.9K.

This is where the monsters be! It is assumed that the 3.7Wm-2 additional forcing would produce approximately 1 to 1.2 degrees of surface warming. If this layer of the atmosphere were initially at 206.3Wm-2 (245.6K), then the added 3.7Wm-2 produces a 1.3K increase in the temperature of this layer. Since the source initially saw a sink at 117Wm-2 with a layer at 206.3Wm-2, and now sees that intermediate layer shifted to 210Wm-2, it is seeing 3.7Wm-2, which it politely returns in full from a different location. The location shifted from 245.6K to 246.9K, so what would really be the impact on the source? That would totally depend on the layers between this new CO2 effective radiant layer and the source.

The reason I use source is because CO2 can only absorb in its spectrum based on its temperature. The higher the temperature of this layer the greater the potential broadening of the spectrum of the CO2 layer. Layers below this temperature with spectra compatible with CO2, as in other CO2, overlapping water vapor and water in liquid or solid form, can return radiation from the new CO2 layer without as much impact on surface temperature.

So we end up with a variety of local sensitivities to a doubling of CO2 dependent on the radiant spectra of the atmospheric layers between the CO2 and the ultimate source of energy for the CO2 layer to return.

In the Antarctic, where there is little water of any phase in the air, CO2 is the main source of the energy to be returned. With the average temperature of both the source and the CO2 layer well below -26C, the energy available cannot produce the full 3.7Wm-2 anywhere near the surface. Also, with CO2 being the primary source and return, the available spectrum is effectively filtered to the CO2 spectrum for any radiant interaction. This effect is noted by the lack of Antarctic temperature increase with increased CO2.

In the Tropics with the preponderance of water in all phases, the average altitude and therefore temperature of the radiant layers would be more greatly separated from the initial source at the surface. The additional CO2 would interact with clouds and water vapor near saturation producing upper level convection or acceleration of the rate of convection, which I believe has been noted.

Finally, in the Arctic region where surface energy is available, the increase in CO2 can interact with water and water vapor closer to the surface to have a noticeable impact on surface temperatures as advertised. Since available energy is required for CO2 enhanced return, changes in snow and ice cover greatly impact the CO2 enhancement of the "Greenhouse Effect". Which I believe is evident in the ice core and other paleo data in the northern hemisphere. This impact is highly dependent on internal variability of energy and moisture associated with natural weather circulations, increasing the amplitude of the temperature variability in these Arctic and near Arctic regions.

It should be noted that the much greater land use changes made in the past 500 to 1000 years, may have enhanced the atmospheric effect sans additional fossil fuel CO2, by reducing the expanse of snow fields which tend to sequester biological carbon. Draining swamps and marshes, building large urban areas, and in general doing the things civilized man does, would also impact the available energy and radiant gases to enhance the "Greenhouse Effect".

There are a wealth of sources that deserve mentioning for gleaned information used to write this little post. Any and everyone that ever published or thought of publishing anything that may have contributed to this post can request mention, complain that they were not mentioned or attempt to sue the crap out of me. I will try to eventually get around to providing sources. For now, I think I will just pester someone on some blog some where :)

For Lucia

Since it looks like fishing will get postponed until tomorrow, a little condensed version for Lucia may be in order.

The Kimoto Equation, dF/dT=4F/T, is more elegant than you may think. The 4 is equivalent to using 255K as a reference radiant layer. You can check by using the full S-B: 5.67e-8(255)^4 - 5.67e-8(254)^4 yields 3.74Wm-2 per degree. Exact? No, it is an approximation; for a closer match use 259K to 260K, where the one-degree step is almost exactly 4Wm-2. In either case, it provides a reference flux to temperature relationship. Heavy emphasis on Reference.
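As a quick sanity check, the one-kelvin flux steps can be computed directly from the Stefan-Boltzmann law; a minimal sketch using the rounded constant 5.67e-8:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4 (rounded)

def flux(T):
    """Ideal blackbody flux at temperature T in kelvin."""
    return SIGMA * T**4

# One-kelvin step near the 255 K reference layer:
step_255 = flux(255) - flux(254)   # ~3.74 W/m^2 per K
# Near 259-260 K the step is almost exactly the 4 W/m^2 in dF/dT = 4F/T:
step_260 = flux(260) - flux(259)   # ~3.96 W/m^2 per K
print(step_255, step_260)
```

So "4" is a decent shorthand for the slope of the S-B curve anywhere near the 255-260K reference layer.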

From any other temperature, a variable needs to be included to balance the equation to that reference. Example: 390Wm-2 - 384.7Wm-2 = 5.31Wm-2, the change in flux from 287K to 288K. Then 4/5.31 = 0.753, the approximate ideal emissivity between a point at 287K and a point at 259K. So 0.753F/T = 0.753(390)/288 = 1.02, meaning the flux at the 259K point would increase by 2%: 1.02 × 255.14 = 260.2Wm-2, which has an equivalent temperature of 260.25K. So a one degree change in surface temperature beginning at 288K would produce a 2% change in emissivity, resulting in a 1.2 degree change at a radiant layer initially at 259K. Starting from 287K or 260K would of course produce slightly different results. It is an approximation after all.
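Running the chain with the rounded values above reproduces the radiant-layer numbers; this is only a sketch of the arithmetic, and the exact figures shift slightly with the constant and rounding used:

```python
SIGMA = 5.67e-8

def temp(F):
    """Equivalent blackbody temperature for flux F in W/m^2."""
    return (F / SIGMA) ** 0.25

eps = 0.753                      # approximate ideal emissivity, as derived above
scale = eps * 390 / 288          # ~1.02, the ~2% flux change at the 259 K layer
new_flux = scale * 255.14        # flux of the 259 K layer, scaled up ~2%
print(new_flux, temp(new_flux))  # ~260.2 W/m^2, ~260.3 K
```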

This would appear to be the classic relationship used to predict that mid tropospheric temperatures would increase by 1.2 degrees per 1 degree change in surface temperature. If the average temperature of the radiant layer in the mid troposphere were 259K, that would be true. Unfortunately, -14.15C is of limited extent globally as a radiant layer temperature, implying the Arrhenius relationship, which appears to be based on 255K, is only valid where the average radiant layer temperature is in the range of 255K to 260K. If you refer to Arrhenius' 1896 paper, you will note that his estimates by latitude diverge from reality. Please note that in his final table, 0.67K and 1.5K are not temperatures, but changes in CO2 concentration. Assuming 280ppm was the average concentration at the time of his research, 0.67K would be approximately 187ppm and 1.5K would be approximately 420ppm. A quick test of the validity of the Arrhenius equation, since current CO2 levels are approximately 390ppm, would be to compare today's observed temperatures to Arrhenius' predicted temperatures by latitude.

Lucia, I find that the modified Kimoto equation can be a very valuable tool, but only if one realizes that Arrhenius' equation is erroneous, since it does not include the required temperature dependence for the average effective radiant layer of the atmosphere in the real world. I believe Angstrom mentioned this to Arrhenius while the Galileo of Global Warming was pondering his role in the master race :)

Just in case Lucia survives the holidays and the skunk invasion.

Saturday, November 26, 2011

Fun with Physics - Toying with RealClimate

I love messing around with people. The worse sense of humor they have the more fun they are to pick on. Really, you need a sense of humor and inside jokes are often the best :)

The guys at Real Climate have very little in the way of a normal sense of humor. So I love picking on their fallen scientific Icon Arrhenius. He was much like they are, humorless, incorrect and over confident.

So when I do my posts, I prefer to use things that humorless, incorrect and over-confident individuals will blow off because of some preconceived notion. Like the thermodynamic frame of reference.

The thermodynamic frame of reference is the most basic of basics in thermo. That is because it simplifies understanding of the problem by allowing a constant baseline on which other baselines can be formed. These new baselines are boundary layers that may or may not be physical boundaries. Anomalies between the base reference and the selected boundary are information, providing clues to unknown phenomena or calculation brain farts. Brain farts, errors in theory or application, are often interestingly highlighted early in analysis if the proper frame of reference is selected. With an incorrectly selected frame of reference, novel thermodynamic relationships can result in Alice in Wonderland adventures which, while entertaining, are usually incorrect. Perpetual motion or mystery energies are typical Alice adventures.

The neat thing about the selected baselines is that they can be unitless or non-physical as long as they are consistent with the initial frame of reference. Atmospheric R values, which imply a delta T/delta F, are unitless until defined. I can use degrees K and F=Wm-2, but I could invent ThermZits per phonon as long as I am consistent. Then if in one state I have more ThermZits per phonon than another, there is a reason or clue to be explored. Since my ThermZits have some ambiguous unit, I can define them at the end of discovery or at any time I wish.

Arrhenius, since his sense of humor was suspect, decided he needed a definitive answer. Being a know-it-all who assumed he was part of the master race, ambiguity was not part of his realization of the physical world. His thinking was binary, yes and no; the real world includes maybe. He assumed that since he was such a genius, what happened in his world, Scandinavia, had to happen in the same way in all worlds. So to him there is A climate sensitivity to CO2 due to RADIANT responses, based on his world's performance. This is why a real scientist should not make an Arrhenius out of himself :)

Much like Arrhenius, the RealClimate crew are dedicated to their theory, which was yes and no but now has more maybes than expected. The maybes are clues for the curious but "within the confidence interval of the models" for those that have the utmost respect for the Arrheniuses of the world.

So by noting that Arrhenius may not have been the sharpest tack in the scientific box, that implies that the followers of Arrhenius' theory may not be as sharp as they consider themselves to be :)

So I am considering ThermZits per Furlong-grain as the new unit for the Atmospheric R values. Which will be based on reflected and unabsorbed photons instead of absorbed long wave photons.

Thursday, November 24, 2011

Weird Relationship?

While setting up a spreadsheet to compare the solar irradiance and temperatures for a few of our neighbors I hit one of those odd things that just drive you nuts.

What I was trying to do was compare the actual temperatures after adjusting for the weird 65Wm-2 oddity I noticed in the Antarctic. Since the rotation of the planets varies considerably, I was just going to use the lit face TSI minus the 65Wm-2 for the unlit face. So for the Earth, 1367.7Wm-2 minus the 65 would be 1302.7Wm-2.

After plugging in the formula to convert surface flux to temperature, things looked pretty normal. Only I noticed that I had used albedo, the reflected energy instead of the absorbed.

1302.7*0.306 = 398.6Wm-2, which I plugged into the S-B (398.6/5.67e-8)^.25 and got 289.6K as my surface temperature. Obviously, I had not expected the values to be so close to the actual. I had mistakenly used just the reflected portion of the TSI, not adjusted for geometry and not even adjusted for day/night. I was just setting up a baseline value for maximum surface temperature to compare with the other planets, but forgot to use (1-albedo) for the absorbed TSI. Not my first dumb mistake throwing a spreadsheet together. The numbers came out so close though, I thought I might use it sometimes just to mess with somebody.
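A minimal sketch of that accidental calculation; with these inputs the reflected flux works out to roughly 398.6Wm-2 and an equivalent temperature of about 289.6K, oddly near the 288K surface average:

```python
SIGMA = 5.67e-8
TSI = 1367.7       # W/m^2, total solar irradiance
ODDITY = 65.0      # the ~65 W/m^2 offset noticed in the Antarctic data
ALBEDO = 0.306     # Earth's albedo, i.e. the REFLECTED fraction (the "mistake")

reflected = (TSI - ODDITY) * ALBEDO   # ~398.6 W/m^2
T = (reflected / SIGMA) ** 0.25       # ~289.6 K
print(reflected, T)
```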

Now if I had not subtracted the odd 65Wm-2, the numbers would have been 394.1Wm-2 and 291.8K. Because of the geometry of a sphere, the 1367.7 would have been divided by 4, yielding 341.9Wm-2 TOA; then I could go through the standard PITA estimates, use the calculated average surface temperature and come up with nearly the same values.

Odd coincidence? I did the same thing with Mars and came up with a surface temperature of about nine K warmer than the blackbody temperature. If the oddity Flux on Mars were 145, then the temperatures would have matched using Mars albedo of 0.25. For Mercury, it doesn't work. With the moon I get an average surface temperature of 224K.

It is probably just a weird situation, but there are a few questions about just how elastic photon reflection might be. If nothing else, it may be fun for doing quick estimates of surface temperature with back of the envelope calculations.

I may go back to the Postma paper to see what he did though. He could have been close to something after all?

The Bond albedo does take into account the geometry, and there is a disagreement between the Mars references for Bond albedo versus visual albedo. This may be more than a coincidence. Venus of course is warmer than would be indicated by the solar irradiation. I have theorized that is due to iso-conductivity at the surface-atmosphere boundary, so core energy is being more efficiently transferred to the surface and lower atmosphere. This quirk may be a quick way to do what-ifs on the effective radius of the energy absorption layer.

The Amazing Case of Back Radiations

The Science of Doom is one of the better climate science blogs. The posts are a good compromise between readable and technical. You should pay them a few hundred visits. One of their posts, "The Amazing Case of Back Radiation" (parts I and II), is an attempt to reconcile the second law of thermodynamics with climate science. There is no need to reinvent the physics of the second law. One thing missing is that there is more to radiation than thermodynamics.

While pondering my 100K conundrum, it is becoming obvious that magnetic and electrical fields need to be considered at temperatures below 200K, with the 184K range fairly interesting. Thermal and non-thermal radiant effects cross over in this region, causing all sorts of weird and wondrous things to be possible.

One problem with The Science of Doom is they tend to defend instead of investigate. Science is about learning and teaching, they seem to be stuck in teaching mode, or defending ideology. That is a job for preachers and politicians, not scientists.

Just imagine what Max Planck would do with all the modern telemetry. You think he would defend his theories or cream his jeans (or tweed, or whatever he wore) and run around like a kid in a candy store? He would be going ape shit!

So I have absolutely no respect for teachers unwilling to learn. This world is our scientific oyster, let's shuck that baby!

BTW, while it is hard to determine which relationship is most significant, the Crookes Radiometer operates on a principle that can be analogous. The Tropopause and 2nd Law From an Energy Perspective is consistent with the interaction of thermal and non-thermal flux impacts in the Tropopause and Antarctic. On a planetary scale, the energy is significant; whether DWLR versus magnetic could be used commercially is questionable, but piezoelectric radiant conversion may be viable. Just musing on implications.

Monday, November 21, 2011

The 100K Conundrum

The Tropopause has piqued my curiosity for quite a while. I have thought of a few ways that the Tropopause could regulate surface temperature. The 100K boundary was not one of them.

With the three disk model, I am able to get a feel for the energy relationship between the surface and Tropopause. As I expected, the Tropopause appears to behave as a regulating energy conduit where energy is transferred rather quickly from higher to lower regions, with the poles being the lower energy regions most of the time. The northern pole is unstable and, since it is most isolated from the more stable southern pole, its surface has greater temperature fluctuation. That is a combined impact of variations between the tropopause and the surface that can be in or out of sequence vertically and horizontally. Land mass, the warm Gulf Stream current, and changing northern Pacific surface temperatures create a chaotic mix of temperature and pressure gradients.

The changes in snow/ice coverage, which change the start and length of growing seasons, produce inconstant changes in albedo and CO2 storage/release. The vast changes in land use just add to these dramatic changes, which may completely or partially mask information that should be available in paleoclimate reconstructions.

The chaotic changes at the northern pole are interesting, but play holy hell with attempts to isolate the true cause of the 100K boundary.

In the Antarctic region, evidence of the 100K boundary is most noticeable. With an average surface temperature of 224K @ 142Wm-2, the approximate flux value of the truly radiant portion of the atmospheric effect, the Antarctic is receiving the maximum benefit of the energy from the tropics on average. With the 100K @ 5.67Wm-2, I sort of expected the Stefan-Boltzmann equation to start falling apart. I may have been right, since the real boundary may be ~65Wm-2 or 184K. The magnetic signature in the Antarctic thermal flux readings may be real or may be interference with the test instrumentation.
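The flux/temperature pairs used throughout this post convert directly through the Stefan-Boltzmann law; a quick sketch:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(T):
    """Ideal blackbody flux at temperature T in kelvin."""
    return SIGMA * T**4

# The three boundary values discussed in this post:
f224 = flux(224)   # ~142.8 W/m^2, the radiant portion of the atmospheric effect
f184 = flux(184)   # ~65.0 W/m^2, the odd Antarctic offset
f100 = flux(100)   # ~5.67 W/m^2, the 100 K boundary
print(f224, f184, f100)
```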

In the tropics and sub-tropics, deep convection pushes the 100K limit in the Tropopause. These rapid drops in temperature to near 100K are too short and too localized to be measured by the satellite data. Some balloon measurements down to -95C have been recorded, but they also have issues with data quality. Based solely on the three disk radiant model, below approximately 224K @ 142Wm-2 the tropopause would be gaining energy from the lower stratosphere, which would not appear to have the thermal capacity to stabilize these events. That leaves magnetic or electrical energy from the Earth's core or solar electric as sources of the thermal energy. Energy is energy, so this is quite possible. Trying to figure out if, which and how much, though, is not all that simple.

Based on the same three disk radiant ratio, 2:1, 71Wm-2 would be the average radiant layer between the Tropopause and the top of the atmosphere. This is close to the ~63Wm-2 but not enough to assume the same relationship holds when magnetic flux may be involved.

Interestingly, my minimum emissivity estimate of 0.99652825^(390-63) equals 0.32 or very close to the transmittance from the surface at 390Wm-2 average to the 100K boundary flux. Which could be absolutely nothing or an indication that temperature may not be the correct term below 100K. So it is time to research some of those goofy space radio spectra to see where to go from here.
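The transmittance estimate is a one-liner; a sketch using the per-Wm-2 attenuation factor above:

```python
# Minimum-emissivity attenuation from the 390 W/m^2 surface to the 63 W/m^2 layer
transmittance = 0.99652825 ** (390 - 63)
print(transmittance)   # ~0.32
```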

Note: peak emission wavelength at 100K 28.977685 micron. Approximately 196K has peak wavelength of 14.7 micron, CO2 main absorption.
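Those peak wavelengths follow from Wien's displacement law, lambda_max = b/T with b ≈ 2897.77 micron-kelvin:

```python
WIEN_B = 2897.77  # micron * K, Wien displacement constant

print(WIEN_B / 100)   # ~28.98 micron: peak emission at 100 K
print(WIEN_B / 196)   # ~14.78 micron: near CO2's main 14.7 micron absorption
```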

Non thermal versus Thermal

On the other front:

Since I am stuck for the moment:

This is an older post on Real Climate about the Arrhenius-Angstrom issue. Angstrom basically said that near the surface, CO2 is virtually saturated, i.e. a doubling or halving of CO2 would have little impact on surface temperature. Pierrehumbert and the author Spencer Weart note that convection moves surface energy up into the atmosphere where CO2, with less competition with itself and water vapor, could enhance the greenhouse effect. While that is true, the impact of the enhancement cannot easily be transferred to the surface due to the near saturation of CO2 noted by Angstrom. So Angstrom was right.

Weart and Pierrehumbert miss two of the main issues highlighted by Angstrom. The first is that near saturation at the surface limits the radiant impact of CO2 at the surface. The second is more subtle: the temperature of the CO2 limits the energy it can transfer to the layers of the atmosphere below its level of radiation.

Another issue not considered is the impact of CO2 on the conduction of the atmospheric gases. Based on the conundrum above, there is also the possibility of an impact of the Earth's magnetic field on the properties of CO2 at low temperatures, approximately 100K.

It should be simple to adjust models to compensate for the effects of near saturation and emission temperature. There does not appear to be much adjustment made for the conductive impact and no adjustment made for the magnetic impact. The Antarctic performance appears to imply that neither conductive nor magnetic influences should be considered negligible.

Update: When I get stuck I do tend to wander much more than normal which is totally abby normal for anyone to make sense out of, so bear with me.

The 100K boundary is ~5.67Wm-2 which is in the neighborhood of where I expected things to get fun. Things are getting strange at a flux of ~ 65Wm-2 or a temperature of roughly 184K degrees. Just about any scientist knows about the magnetic properties of oxygen and there is quite a bit of research on electric and magnetic interaction with O2 at low temperatures. That would mean that if this ~65 Wm-2 thing is a real phenomenon, there must be a pretty unusual mechanism. Thermal and non thermal flux cross over in this range, but astronomers are pretty good at telling the difference looking at distant objects. So things are getting back into sci-fi mode.

My probable best hypothesis is the Tropopause sink with a magnetic flux impact on the radiant transfer. That may make me totally certifiably a nut job, but it seems like a possibility given the odd circumstances. The ~65 is a rough match of the temperature/flux differences I would estimate for Venus and Mars. There are temperature inversions of -44C with surface temperatures of -74C in the Antarctic, which would be balanced by ~65Wm-2. The flux readings for the Antarctic are off by ~65Wm-2 in some areas, which appear to show the southern magnetic flux field. There is just no mechanism to support such a hare-brained hypothesis. The center wavelength of 28.98 microns is just enough off from things to be a problem, but what if Ozone and CO2 in some form can hook up? The Ozone hole is not really a hole, just a significant reduction in concentration. The Arctic has its own hole forming at a different temperature and magnetic orientation. It may be crazy enough to pursue.

The Electromagnetic Fields and Climate Change

This was not something I really expected, but hey, it seems to have some substance. The radiant transfer of energy from a body to another body has two main limits, the energy of the bodies and the background energy of space. I would have thought that background energy would be related to the temperature of space, but it appears to be the energies in the low end of the electromagnetic spectrum.

While there is a lot of work to verify this situation, around 60Wm-2 of energy flux we hit a thermal barrier I did not expect. A spectrum with an average energy of 60Wm-2 is in the radio range. Based on the Kelvin temperature scale, a corresponding temperature of about 100K. Neat!

If true, this explains the oddities at the south pole. One being that the measured flux readings appeared to show the signature of the southern magnetic field.

Liquid oxygen has a temperature of around 90K and has magnetic properties. That would be a bit of a verification!

The minimum temperature I have seen for the tropopause with deep convective pulses is -95C, which lends more support to my Tropopause heat sink theory. See, the tropopause can get no colder than the energy above and below will allow. Stratospheric energy seems to impose this limit: 238Wm-2 divided by 2 is ~119Wm-2, or an equivalent temperature of 214K. This is getting kind of exciting :)
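That stratospheric limit is just the halved TOA flux run back through the Stefan-Boltzmann law; a quick sketch:

```python
SIGMA = 5.67e-8
F_half = 238.0 / 2                  # ~119 W/m^2, half the TOA emission
T_limit = (F_half / SIGMA) ** 0.25  # ~214 K equivalent temperature
print(T_limit)
```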

Sunday, November 20, 2011

It is so Complicated!!

Not really. It is funny though reading different people explaining why we should or shouldn't do this or that. So I thought I would add my silly solution of the week, lose weight!

The average human with a skin temperature of 300K emits approximately 459.3Wm-2. If the average human has a surface area of one square meter, that is about 459.3 Watts per human. With about 7,000,000,000 humans on the planet, that is roughly 3.2e12W of human-contributed warming. The Earth has a surface area of 5.1e14m^2, so people emit about 0.0063Wm-2 from the surface of the Earth, mainly in the northern hemisphere. Obviously, obesity is contributing to global warming! So forget the compact fluorescent light bulbs, lose weight! Less human surface area means less global warming.
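The arithmetic, as a sketch; the 300K skin temperature and one square meter of emitting surface per person are the rough assumptions from the joke above:

```python
SIGMA = 5.67e-8
T_SKIN = 300.0        # K, assumed human surface temperature
AREA = 1.0            # m^2 of emitting surface per person (rough)
POPULATION = 7e9
EARTH_AREA = 5.1e14   # m^2

per_person = SIGMA * T_SKIN**4 * AREA   # ~459 W per person
total = per_person * POPULATION         # ~3.2e12 W
contribution = total / EARTH_AREA       # ~0.0063 W/m^2 globally
print(per_person, contribution)
```

Of course this is gross emission; people also absorb from their surroundings, so the net figure is far smaller still.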

A Quick Summary to Date

As I have mentioned before, Greenhouse Theory is not my main thing. All the data gathered for studying the Greenhouse Effect may hold a lot more information than just the radiant impacts of CO2 and the rest of the little photon grabbers. So here is a quick summary for the guys really concerned with the Greenhouse Effect:

CO2 has a radiant impact at a given temperature of one half of the area of its absorption spectrum relative to the emission spectrum of the layers below that temperature. So if you integrate the area of the CO2 spectrum and you know the spectrum it is exposed to you can determine the impact. CO2 is basically a space blanket with a lot of holes in it.

If energy is to be conserved, the radiant impact of the atmosphere, all molecules, cannot exceed twice the temperature of the surface or the energy input to the surface, whichever comes first. Since the surface average emission is 390Wm-2 @ 288K and the TOA emission is ~238Wm-2, 142Wm-2 is the energy limit of the warming (twice that value as both surface warming and Tropopause cooling balance, see next point). Since not all of that energy is emitted from the surface, only the portion emitted from the surface can impact the surface.

Since surface and atmospheric warming has to be balanced by equal cooling, the Tropopause temperature has to be depressed just as far below the average temperature of the atmosphere as the surface is raised above the average temperature of the atmosphere. One should note that the tropopause temperature is fairly stable. That is because horizontal radiant energy transfer within the Tropopause layer stabilizes the temperature to near the average cooling impact of the atmospheric effect.

Spectral broadening at higher temperatures due to collisional energy transfer is a conductive impact with radiant importance. Radiant impact and conductive impact cannot be considered separately.

Latent energy from the surface shifts the apparent temperature of the surface. Since the conductive impact below the radiant shift layer is not equal to the radiant impact above the latent shift layer, it complicates accurate calculation of the Greenhouse impact at the surface.

Finally, the impact of the Greenhouse Effect varies regionally due to all these conditions plus available surface energy, i.e. albedo and global circulation of internal energy. Have fun trying to figure that out :)

Saturday, November 19, 2011

The Tropopause and the 2nd Law From Energy Perspective.

In the Tropopause and Second Law of Thermodynamics Paradox I discussed a simple dilemma using just temperatures between thermodynamic disks. Radiant energy isn't as simple as temperature. There is a fourth power relationship between energy flux and temperature. That fourth power relationship creates an envelope of energy related to the temperature and radiant spectrum of an object with a given composition. Every element has a unique radiant spectrum, and elements in combination extend that spectrum. A perfect black body would have a uniform spectrum with no gaps or missing spectral lines.

The Stefan-Boltzmann law, E=sigma(T^4), is the formula to determine the ideal energy intensity of an object at a given temperature. That energy is based on one side of a flat disk, not a sphere. In order to use it on a sphere, the geometry of the sphere has to be considered, which produces very accurate results for the observable face. The opposite face of the object has to be assumed to follow the same rules, which is perfectly fine if the object is sufficiently massive.

With gray bodies, less than perfect black bodies, those assumptions have to be applied which appear adequate, again if the object is sufficiently massive. Gray bodies with gaseous atmospheres do not always behave completely as expected. How much they deviate from expectations is a major issue.

As with the temperature only example linked above, the same problem can be posed with energy flux instead of temperature with the absorption/emission spectrum of the object considered.

If we let T be 150K, then the flux values for the problem using the full Stefan-Boltzmann constant, σ = 5.670373×10^-8 W m^-2 K^-4, yield F(2T)=459.3Wm-2 and F(T)=28.7Wm-2. So in order to have 1/2 the temperature, the cooler disk would only have to absorb 28.7/459.3, or 6.25 percent, of the radiation emitted by the warmer object.
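A quick numerical check of the two disk fluxes; note the ratio is (1/2)^4 = 1/16, or 6.25 percent, a direct consequence of the fourth power law:

```python
SIGMA = 5.67e-8
T = 150.0

F_hot = SIGMA * (2 * T)**4   # ~459.3 W/m^2 at 300 K
F_cold = SIGMA * T**4        # ~28.7 W/m^2 at 150 K
ratio = F_cold / F_hot       # 0.0625, i.e. (1/2)**4
print(F_hot, F_cold, ratio)
```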

When we place another object between the two, we determined its final temperature would be approximately 3/2T, which would be 225K @ 145.3Wm-2. So the simple temperature ratio used looks vastly different when using energy flux.

Since the warmer body for some instant in time will continue to emit at 459.3Wm-2 and the center disk only absorbs 145.3Wm-2, the remaining 314Wm-2 would pass through the newly inserted object outside its absorption envelope. The 145.3Wm-2 the warmer object receives from the center disk would be toward the lower energy range of its emission spectrum. If the energy received is within the envelope of the warmer body spectrum, that energy would be absorbed. If not, that energy would pass through the warmer body just as energy outside of the emission spectrum of the cooler disks would pass through.

Since it is likely that the warmer body spectral envelope includes the envelope of the cooler body, we can assume that all the energy is absorbed. In such a case, the warmer body would return approximately one half of that energy: 72.6Wm-2 would be returned to the center disk. It would be incorrect to assume that all of this energy would be absorbed, since the spectral envelope of the center disk is much smaller than that of the warmer disk.

If the warmer body emits the newly absorbed energy uniformly across its spectrum, the center disk would absorb on the order of 72.6/459.3 or 15.8 percent of that radiation, 13.5Wm-2. We would have a series based on 15.8 percent absorbed which is based on the original spectral envelope of the center disk.

72.6+13.5+1.9+0.36+0.11… or approximately 90Wm-2. The envelope of the center disk would of course increase with temperature, changing this relationship, but the law of conservation of energy also has to be considered.
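If each successive term is read as roughly 15.8 percent of the previous one (the listed terms do not decay at exactly that rate), the series has a simple closed form that lands in the same ballpark as the ~90Wm-2 above:

```python
a = 72.6    # W/m^2, first term: half of the absorbed 145.3
r = 0.158   # rough fraction of each pass absorbed by the center disk
# Geometric series a + a*r + a*r**2 + ... = a / (1 - r)
total = a / (1 - r)
print(total)   # ~86 W/m^2
```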

The total energy of the warmer body cannot increase more than the decrease in energy of the other two disks. So the opposite face of the warmer body must decrease as the common face increases.

That cuts our series in half: approximately 45Wm-2 instead of 90Wm-2.
So the warmer disk would increase to approximately 504.3Wm-2 while its opposite face decreased by 45Wm-2 to 414.3Wm-2. The total apparent energy of that object would become 918.6Wm-2, an average apparent flux of 459.3Wm-2, or 300K. The apparent temperature of the common face would be 307.1K.

What I propose is that this is the condition of the atmospheric effect prior to the addition of more carbon dioxide at a location on the surface with an apparent temperature of 307K. In other words, the radiant portion of the atmospheric effect is approximately 7K at regions with a surface temperature of 307K, and the impact would vary with the surface temperature and the effective radiant layer of the atmosphere. In another post I will add a new disk representing carbon dioxide.

Note: This is a little different than Kirchhoff's law. With longwave radiation there is essentially no reflection. What is not absorbed passes through the disk or wall, and of what is absorbed, 50% is returned instead of reflected. The amount returned is increased by the number of disks returning 50% of what they absorb, which either passes through or is absorbed with 50% returned again, and the amount absorbed is dependent on the absorption spectrum, i.e. temperature and/or composition of the disk.

The Tropopause and the Second Law of Thermodynamics Paradox


Most of the relationships developed for radiant physics are based on disks. Flat surfaces with one side analyzed. This started with an original problem of an oven with an observation hole. The inside of the oven was at a constant temperature and the observer determined the characteristics of the radiation leaving the hole. This resulted in a vast improvement in our knowledge of electromagnetic radiation.
Today, we are dealing with radiant physics problems using that same hole in an oven. Not that that data is wrong in any way; we just may not have the same oven or the same hole. When two objects in ideal space are used for example, typically we use spheres. Then we relate the hole or disk to a sphere, use the classic relationships and call it good. That may not provide all the accuracy needed.

Consider two disks in a vacuum:

Disk one is at some temperature and disk two is at another. The disks have a common face, or one is facing the other, and an opposite face, facing away from the other. The warmer disk is at a temperature 2T and the cooler at a temperature T as observed from between the two disks.

We insert a disk between these two. The new disk is at absolute zero, with an area equal to the two original disks and less mass than either of the original disks. Now we observe what happens.

Since the new disk is at absolute zero, initially it receives energy proportional to 2T on one side and T on the other. Since it initially has no energy, i.e. is at absolute zero, the new disk does not emit any radiation.

As the new disk slowly gains energy, it begins to return energy to the original disks in some proportion to the energy it acquires. Eventually, the new disk will reach equilibrium with the original disks.

Since the new disk does not have an energy source, it cannot add energy to the system, only impact the distribution of energy between the two original disks. Since the 2T disk has more energy, the impact on that disk will not be the same as the impact on the lower energy disk originally at temperature T.

The new disk will absorb at 2T and return half, or T, to that disk. It will absorb at T and return half, or T/2, to that disk. The new disk will pass, on the opposite face, T to the cooler disk and T/2 to the warmer disk. The energy passing through the new disk is now 3/2T, its apparent temperature.

The original warmer disk sees a warmer disk than before and the cooler disk sees a disk cooler than before. So the original disks' apparent temperatures will have to adjust to this new condition.

If the total energy of the system is conserved, then the cooler disk, which is receiving 1/2T less energy, will reduce by 1/2T. So it will emit 1/4T less on both faces. The warmer disk, which is being returned 1/2T more, would emit 1/4T more on each face.

Now we have a paradox. While energy is conserved, the opposite face of the cooler disk may not be able to emit less energy or the opposite face of the warmer disk may not be able to emit more energy. There may exist constraints outside of the three disk system.

If the system of disks is in a vacuum at absolute zero, the cooler face of the cooler disk was at equilibrium with zero energy. It cannot absorb or gain energy from a source with no energy. That means that the warmer disk cannot release more energy, since if the cooler disk cannot emit less, energy is not conserved.
In order to conserve energy, the opposite face of the warmer disk must emit less energy. That implies that the warmer disk's opposite face can absorb more energy, or the apparent temperature of all disk faces would have to decrease proportionally to conserve the energy of the system.

An interesting situation, it is kinda like the convective rule of radiant energy.