New Computer Fund

Monday, January 30, 2012

Data Leap Frogging


Remote Sensing Systems (RSS) is one of the groups that use the Microwave Sounding Units (MSU) on board satellites to develop atmospheric temperature products. While the MSU data has its issues, it is in general the best source of atmospheric temperature information we have. In order to build their data sets, they have to weight layers of the atmosphere using filters on the data. The chart above shows the weighting for the four different products.

Starting at the surface, I will call the products A, B, C and D. As you can see, there is considerable overlap between A and B, B and C, and C and D, which would tend to suppress the information when the adjacent layers are compared. Leap frogging would compare A to C, B to D and A to D, to reduce the signal suppression. Not a very complicated thing to do, right?

Then B could be compared to A-C, A-D and B-D to determine the best approximation for a direct comparison to A or C. While a little complicated, it would improve the confidence in the values for each layer.
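
Here is a minimal sketch of the overlap idea in Python, using made-up Gaussian weighting functions rather than the actual RSS weights; only the shape of the argument matters here.

```python
# Minimal sketch of the leap-frog idea, using hypothetical Gaussian
# weighting functions for four products (A,B,C,D) -- NOT the actual
# RSS weights, just an illustration of overlap reduction.
import numpy as np

z = np.linspace(0, 30, 301)                 # altitude, km
centers, width = [2, 7, 12, 17], 4.0        # hypothetical layer centers
w = {name: np.exp(-((z - c) / width) ** 2)
     for name, c in zip("ABCD", centers)}
w = {k: v / v.sum() for k, v in w.items()}  # normalize each weight

def overlap(p, q):
    """Fractional overlap of two weighting functions."""
    return np.minimum(w[p], w[q]).sum()

for pair in ["AB", "BC", "CD", "AC", "BD", "AD"]:
    print(pair, round(overlap(*pair), 3))
# Adjacent pairs (AB, BC, CD) overlap heavily; the leap-frog pairs
# (AC, BD, AD) overlap far less, so comparing them suppresses less
# of the layer-to-layer signal.
```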

To take advantage of this leap frogging, I have recommended that a Bucky Ball shaped model of concentric spheres be used. Again, a little complicated, but by using the center of the Earth as a distance vector and Bucky shaped areas, much like the sections of a soccer ball, as target areas, a three dimensional model of the common thermodynamic boundary layers of the Earth climate system can be made to better determine the energy flows between layers and sections.

Using the modified Kimoto equation adapted for what I call learning mode, the fungible characteristic of energy flux can be used to more accurately track the various energy flows, and the energy lost to heat in transit, from any point in the model.

Constructed in this manner, the model would be comparable to the relativistic heat equations, as the relative velocity of energy flow could be roughly determined between adjacent thermodynamic boundary layers. A simple concept, not so simple to develop fully, but it should be capable of continuous modification as more of the relationships between boundary layers is learned. Kinda like complex modeling for dummies.

Sunday, January 29, 2012

Once More into the Kimoto Equation Breach

Someone brought up the subject of Critical Thinking. Some BS about starting with a blank page to remove your biases. To me, thinking is critical, so build a BS detector, so you can figure out if someone is BSing you or if you are BSing yourself. That brought me right back to the learning equation, Kimoto with a little modification.

The basic equation is dF/dT = 4F/T, where F is energy flux in Wm-2 in this case and T is in degrees Kelvin. It is based on the Stefan-Boltzmann relationship for radiant energy from a black body: F = sigma*T^4, so dF/dT = 4*sigma*T^3 = 4F/T. Energy is supposedly fungible, meaning it can take various forms. So I modified it: dF/dT = (aF1 + bF2 + ... + zFn)/T. That bugs people.
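
If you want to check the base relation yourself, here is a quick numerical sanity check in Python:

```python
# Quick numerical check of the base relation dF/dT = 4F/T,
# which follows from Stefan-Boltzmann: F = sigma*T^4.
sigma = 5.67e-8           # Wm-2 K-4

def F(T):
    return sigma * T ** 4

T = 288.0                 # K, roughly the global mean surface temperature
dT = 0.01
numeric = (F(T + dT) - F(T - dT)) / (2 * dT)   # central difference
analytic = 4 * F(T) / T
print(numeric, analytic)  # both ~5.42 Wm-2 per K
```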

So let's try T dF/dT = aF1 + bF2 + ... + zFn, multiplying both sides by that pesky T.

In order for this to work, the coefficients a, b, ..., z would have to be relationships that make the different energy fluxes proportional to one another. Zoom! That goes right over the head of the average reader. What throws most folks off is the 4th power relationship of flux to temperature in the S-B relationship. What temperature? It is on the other side of the equation; we can deal with that when and if we want to. Right now we just have photons, phonons or electrons, each doing their own thing. If we are balancing energy, screw the temperature, let's balance energy. Temperature is easy, let the students deal with that later.

So let's do aF1 + bF2 + ... + zFn = Fout + S, where Fout is the energy we know is leaving the layer and S is the energy lost along the way, entropy. Kinda looks to me like that might simplify things. Some of the energy fluxes might flow more efficiently, so S for that sucker could be small with respect to the others. Some of the energy fluxes might be a bear to figure out, so S for that one may be ?, a question mark. So if we solve for what we know best, the small ones and the question mark ones would be the uncertainty until we figure them out.
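
A minimal sketch of that bookkeeping, with placeholder numbers rather than anything measured:

```python
# Sketch of the flux bookkeeping: balance the fluxes you trust and
# let the residual S carry the entropy loss plus the question-mark
# terms. All numbers here are placeholders, not measurements.
known_fluxes = {"radiant": 160.0, "latent": 176.0, "conductive": 53.0}
F_out = 390.0                      # flux leaving the layer, Wm-2

S = sum(known_fluxes.values()) - F_out
print("residual S =", S, "Wm-2")   # small residual = fluxes near closure
# A large |S| flags either real entropy loss or a flux you haven't
# pinned down yet -- the built-in BS detector.
```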

By setting up the right model and doing a little interpolation between layers, we may be able to home in on some of those unknowns. Let's call this redneck logic: how to solve a problem with more unknowns than you care to mess with at one time.

I will leave this post alone for folks to ponder. Put your thinking caps on, this is not your average method of doing bidness :) Next will be layer leap frogging, a tricky way to get rid of most of the satellite measurement filter overlap. I hear that is a little bit of an issue.

Non-Linear Dynamics, a Redneck Perspective

Non-linear dynamics baffle most sane people. They cause things to happen that don't seem possible. If you have ever had a tire on your car start to separate, you have had some experience with the joys of non-linear dynamics.

Initially, everything feels just fine. Then you notice a slight difference at some speed, with maybe an extra passenger in the car or less fuel in the tank. You speed up or slow down and it goes away. Then you start to take the car to the mechanic, but it drives just fine, so you put off the visit.

As you are driving down the road to visit grandma out of town, the vibration comes back. You make it to grandma's and check everything out. Nothing looks out of place. After having dinner and some of Granny's apple crisps, you take the car to a mechanic. Without even looking at the car he tells you how much a new set of tires costs. You blow him off, because there is no way that Gomer possibly knows what is wrong with your Beemer.

So you head back to Granny's with the car riding just fine. Heading home in a light rain, the car drives perfectly. Gomer is obviously a dumbbutt, since there is nothing wrong with the car.

The next afternoon, in scorching heat, you hurry back to catch the first half of the big game and your tire blows out on the freeway. Just bad luck that you miss the game waiting on AAA.

The initial conditions in each case caused different responses. Load, speed and temperature change the frequency of the oscillation of the imbalance in the tire.

A tire is a pretty simple example. Climate is not so simple an example, but it shares similar traits. Weather has pseudo-cyclic oscillations. Those oscillations have an average peak to peak value. A twice peak to peak value tends to be a record of some kind. While records are special, they happen. So a twice peak to peak value in a non-linear dynamic system is rare, but expected. Such values are actually signatures of non-linear dynamic systems. So if someone believes that a twice peak to peak change is abnormal, they obviously think they are dealing with a linear system. In that case, they need to listen to Gomer :)

Friday, January 27, 2012

History of Modern Agriculture and Climate Change

I have been messing around with the physics side of the global warming issue for a while, and things don't add up the way I would think. CO2 has a radiant and a conductive impact on the physics of the atmosphere, but neither can manufacture energy on its own. So I started looking into land use changes. Land use could allow more energy to be absorbed and less reflected.


The chart above is of the tree ring reconstruction by Jacoby et al. 2006, from samples taken from the Taymir or Taymyr Peninsula in Siberia. Jacoby et al. consider this to be a temperature reconstruction. Temperature plays some role in tree growth, but generally tree rings are an indication of growing conditions. Jacoby et al. mention that after 1970, the tree rings stopped indicating temperature, so the series ends in 1970. This is the divergence problem that is pretty well known. I would have preferred that they left the final years of data in place, but I am too lazy to go tracking that down.

I plotted only the raw reconstruction data, no smoothing. As I have been known to say, most of the data is in the exceptions or anomalies, not in the smoothed data. Here you can see it in its noisy glory.

I will try to get a cleaner version to show that, beginning in approximately 1820, the slope of the noise increases in the positive direction considerably more than the overall slope, which is only slightly positive. The reason I point this out is the invention of the steel plow.

The 19th century was the dawn of the agricultural revolution. The steel plow, the wheat harvester and the cotton gin were all invented in the late 18th and early 19th centuries. The agricultural revolution kicked off the industrial revolution.

The Taymyr tree ring reconstruction may indicate that agricultural expansion started the warming we now notice with much better instrumentation. Quite a bit of uncertainty, but enough correlation to be interesting. The biggest indication in the tree ring reconstruction is the decrease in the duration of extremes. Temperature will fluctuate, but cleared farmland is valuable, and farmers will find a way to get as much into production as early as possible. So in spite of all of the noise in the data, climate appears to be growing more stable. That is an interesting contradiction of the global warming theory, if true.

This is a very busy chart of the Taymyr data:



There are three regressions added. The light blue, hard to see since there is little change, is the pre-agricultural linear regression. The red is the agricultural regression, starting at approximately 1814. The green is a power regression of the nasty looking green noise.

The nasty looking green noise is double the annual change. It is doubled to make it stand out. All it is, is the year plus 1, minus the year, doubled. The slope decreases slightly over time. I will try to locate the signal processing brain cells I misplaced during the late 70s and play with some nifty filters. For now there is a 25 year average buried in the business of the chart.
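
For anyone who wants to reproduce the green series, this is all it is, sketched with placeholder numbers standing in for the raw Taymyr index:

```python
# Sketch of the "nasty green noise": twice the year-over-year change,
# plus the 25-year running mean buried in the chart. `rings` is random
# placeholder data, not the real Taymyr ring-width index.
import numpy as np

rings = np.random.default_rng(0).normal(1.0, 0.2, 400)  # placeholder
annual_change_x2 = 2 * np.diff(rings)      # (year+1 minus year), doubled
smooth25 = np.convolve(rings, np.ones(25) / 25, mode="valid")
print(annual_change_x2[:5])
print(smooth25[:3])
```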

I may take Tonyb's CET and attempt a splice of the instrumental record to the Taymyr tree rings :)

Wednesday, January 25, 2012

Merchants of Over Confidence

Doubt is free; everyone has their share, or should, or they are what might be called gullible. Certainty is a rare commodity; there are merchants for that.

The book Merchants of Doubt sells the certainty that there should be no doubt. Anyone that points out that doubt exists is an evil merchant trying to part you from common wisdom.

One of the greatest examples of gullibility is the video by Penn and Teller called Di-Hydrogen Monoxide (DHM). DHM is two hydrogen atoms and one oxygen, H2O, or water. P&T slipped a young woman into an environmentalist get together with a fake petition to ban water, DHM. The young woman listed plenty of facts about water, only giving them an evil twist. Water is a solvent, water causes profuse sweating, water causes death, etc. The gullible environmentalists largely signed the petition without question. They obviously had no clue what Di-Hydrogen Monoxide was; it just sounded bad, so it should be banned. They lacked doubt, so they bought the certainty that something that sounded bad was.

This was just a fun skit by P&T to show how gullible the average attendee of an environmentalist rally happens to have been, and possibly still may be. If you happen to be an environmentalist, you "Know" that you are not that gullible.

If I told you that due to one thing, cancer deaths have increased 500 times, not being gullible, you would know that is bad. That one thing exists; it is called prosperity. Because of man's prosperity, he has free time. Man doesn't have to spend most of his life trying to stay alive. He can muse on what causes this or that, and learn things about this and that to make better this and thats. Very few people die from cancer when they are dying from starvation, exposure, diseases, wild animal attacks, wild human attacks or accidents. You had to be prosperous to live long enough to die of cancer. You wouldn't ban prosperity to fight cancer, would you? Oops!

Global warming is a lot like cancer. Because man has become prosperous, he can worry about what he is doing to the climate. Nothing at all wrong with that. It is good not to crap where you live, unless of course you sanitarily deal with your crap. Man used to just dump his crap in the streets, once he progressed to having streets, that is. Then he progressed to dumping his crap in the rivers and oceans because there was too much crap in the streets. Back then, the prosperity to have streets caused death by crap. Dumping in the rivers caused other things and people to die by crap. Man's crap caused death. So did man ban crapping? No, he developed better ways to deal with crap. Progress, allowed by prosperity, led to better ways of dealing with crap.

Since you are an intelligent environmentalist, you know that crap is dangerous. So do you find better ways to deal with crap? Maybe you think it is better to have other people deal with the crap? Maybe you think that we should just ban crap? Crap is bad, let's ban crap.

Or do you consider dealing with our crap? You love the planet and mother nature, so we either need to deal with our crap or ban it. Sometimes dealing with crap means you have to live with the crap until you figure out how to deal with the crap. What may sound like a great way to deal with crap, like dumping your crap in the streets, may work for a while, but sooner or later, that crap will get you. You need to find a better way to deal with crap. The best way to deal with the crap is at the source of the crap. Wars have fallen a little out of favor, so that option is limited. It is a problem solver though.

As an environmentalist, you may want to be a humanitarian also. You want to save the planet and as many crappers as possible. In order to crap, the crappers need food, and food requires land to be grown upon. Growing food to feed billions can be done by hand and organically. It is a little bit easier to use machinery, fertilizers and pesticides, but it can be done without any of those, because there is plenty of crap for fertilizer and plenty of crappers to spread the crap around. Bugs would be a bit of a problem, but with enough land, organic bug repellents can help.

All we need is enough land and we can feed all the crappers. An acre per crapper should be more than enough. So only about 7 to 10 billion acres will be needed to feed all the healthy, longer living crappers which will eventually die of some kind of cancer or other crap, the cause of which will be banned later.

Since there are about 250 acres per square kilometer, it would only take about 40 million square kilometers. The surface area of the Earth is 510 million square kilometers, so about 8 percent of the surface of the Earth is all we need. Right now man is using about 48 million square kilometers as agricultural land, and that includes pastures and other non-Earth friendly, low food production uses. In 1950 there were about 2.5 billion crappers. Now there are about 7 billion crappers. Someone not intelligent enough to be a scientist might think the increase in the acreage of crapper food production might have contributed to global warming. That would be foolish. Global warming is caused by CO2 produced by fossil fuel use. Just because there is a correlation between land use by humans and warming does not mean that is THE cause of global warming. THE cause is CO2, mainly from coal, which is the root of all fossil fuel evil because it requires turning beautiful mountains into eyesores, not beautiful forests into beautiful farm lands.
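
The arithmetic, for anyone who wants to check it:

```python
# Checking the acreage arithmetic: ~247 acres per square kilometer.
acres_per_km2 = 247.0
crappers = 10e9                      # generous headcount
acres_needed = 1.0 * crappers        # one acre each
km2_needed = acres_needed / acres_per_km2
earth_km2 = 510e6
print(km2_needed / 1e6, "million km2")          # ~40 million
print(100 * km2_needed / earth_km2, "percent")  # ~8 percent of the Earth
```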

Now, just because coal is not as dirty as it was before 1950 doesn't mean it is clean. Coal can never be clean, and it still will destroy all of the beautiful mountains and has been proven to cause the death of crappers. Since progress in dealing with our crap halted sometime around 1950, the only things that can help are things that were not around in the 1950s. Well, things that were not around in the 1950s and produced in some other crapper's back yard.

Humm? Seems like this post is full of crap. Luckily, the environmentalists have their crap together, so only the things that caused the prosperity, and the crap that came with it, are to blame. After all, if it wasn't for prosperity, there would be no cancer.

Tuesday, January 24, 2012

Climate Change, Passive Smoke and Merchants

Passive smoke risk is often used in the climate change debate to illustrate how the tobacco industry used false representations of "doubt" in an attempt to save their industry and the billions of dollars at risk. This leads to how the Big Oil companies must be doing the same thing, since they are trying to preserve their businesses and avoid trillions of dollars of risk. Some say the comparisons are unfounded; some live by them. What's an outsider to do?

First, recognize that the comparisons have a great deal of validity. Valid comparisons cut both ways, though. Let's see how.

Passive smoke studies were primarily based on a male head of household and a female non-smoking spouse with children. That was a fairly average family structure at the time the surveys began: dad smoked, mom and the kids didn't. The studies found that the non-smoking family members had a greater risk than the average non-smoking population. Various studies determined varying degrees of risk associated with passive smoke in the home. Similar workplace studies were done; all indicated a general increase in lung cancer risk plus some other varieties of cancers. Passive smoke is definitely linked to health issues in non-smokers. For the sake of simplicity, I am only going to use the lung cancer risk.

"Passive smoking increases the risk of lung cancer in non-smokers by 30%" is the standard argument of the no smoking crowd. It is perfectly correct, but what does it really mean?

Non-smoking individuals with prolonged exposure to passive smoke have a 30% greater risk than the general non-smoking population, which has a 1.3% risk of developing lung cancer over their lifetime, all things remaining equal. So non-smokers in a smoking household have a 30% greater risk than the general population's 1.3%, meaning they have a 1.69% risk of lung cancer. 30% greater than 1.3 is 1.69. What if all things don't remain equal?
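
The arithmetic in a couple of lines of Python, since relative versus absolute risk trips people up:

```python
# The passive smoke arithmetic: a 30% relative increase on a small
# baseline is still a small absolute risk.
baseline = 1.3           # lifetime lung cancer risk, percent, non-smokers
relative_increase = 0.30
exposed = baseline * (1 + relative_increase)
print(exposed)           # 1.69 percent -- 30% greater than 1.3
```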

If a concerned smoker in a household with non-smokers changed their smoking habits around the non-smokers, the risk changes. Installing a special smoke filtering system reduces risk. Improving the ventilation of the smoking spaces or the non-smoking spaces reduces risk. Knowing that there is an increased risk, the smoker can reduce the risk for his family. Life is not static; all things rarely remain equal.

In climate change, a doubling of CO2 will cause about 1.5 degrees of warming, all things remaining equal. If water vapor is a positive feedback to the CO2 increase, the doubling of CO2 can cause 4.5 degrees of warming. If other factors provide positive feedback to the increase of CO2, more warming is possible, all other things remaining equal.

So the premise of both arguments is the same: something may happen if nothing changes. There is a fancy term for this type of reasoning. I am not fancy, so I use the term "bullshit" reasoning.

Having worked in the Heating, Ventilation and Air Conditioning industry, I know how things can be changed to reduce the risk of harm from tobacco smoke and dozens of other hazardous indoor air pollutants. Prior to the bans on workplace smoking, the HVAC industry and building owners invested a great deal of time and money improving indoor air quality. The term Tight Building Syndrome was commonly used for well insulated and sealed buildings whose occupants had various health issues due to poor air quality inside the more energy efficient buildings.

Outside Air: Prior to improved building insulation and "wrapping" methods, the average building was drafty. Outside air came into the buildings through windows that once were capable of being opened, ventilated attic spaces, and generally not well sealed doors, walls, ceilings and floors. This provided fresh outside air to dilute the indoor air, which tended to accumulate odors from various sources. To compensate for the reduced air leakage, outside air was forced into the tight buildings through air conditioning units.

VOCs: Volatile Organic Compounds in the conditioned spaces of homes and workplaces increased with technology following World War II. Plastics, paints, cosmetics, insecticides, nearly all new products for the home had some chemical added to make things last longer, smell better, taste better or look neater than before. Tight buildings increased the concentration of these VOCs in the home and workplace. Many VOCs carry health risks, including lung cancer.

Molds: The increased use of air conditioning created its own indoor air quality problems by providing ideal conditions for the growth of molds, mildew, fungus, etc. inside the living spaces and workplaces. Because of water condensing on cooling coils, dust collecting in air conditioning units and duct work, and the dark conditions, food, water and shelter were provided for the worst culprit of poor indoor air quality, molds.

The improvements in life added risk of various health issues in the home. With the studies comparing risks of one generation with a future generation, a primary cause for health issues, including lung cancers, may not have been considered. Changes in HVAC systems to reduce indoor air quality issues were not considered. All things did not remain equal.

Few things in life remain equal. Is it reasonable to expect that all things will remain equal in our climate? Most rational people do not believe that all things will remain equal. They would like to know the impact of things that do change, on the predictions if things do not change. That is not being a merchant of doubt, that is recognizing change and uncertainty, major factors in making informed decisions.

Sunday, January 22, 2012

More on What the Heck is Down Welling Long Wave Radiation

Probably the most misunderstood perception of mine, and of many others in the climate change debate, is the concept of down welling radiation, or back radiation, caused by the greenhouse effect. Per the second law of thermodynamics, heat flows from warm to cold. In truth, NET heat flows from warm to cold, so a colder body cannot physically warm a warmer body. The colder body can reduce the rate of cooling of the warmer body, so that the warmer body would be warmer than if the colder body were not there.

Some people tend to get carried away trying to tweak the second law by stating that a photon traveling in a random direction can be absorbed by a warmer body after being emitted by a colder body. Quite true, but in the process, the colder body would absorb more photons from the warmer body, because the warmer body is emitting more directionally random photons. Net flow will be from warmer to colder, period.

On the surface of the Earth, photons from the colder sky do impact the surface on rare occasion. More frequently they impact molecules closer to their physical location and temperature. Since the sky has a temperature, it emits photons and a large percentage of those photons travel toward the warmer surface. That direction of travel is called Down Welling Long Wave Radiation (DWLR), but where that DWLR impacts changes with atmospheric conditions.

Why many disagree with or misunderstand my perception of DWLR is the result of my choice of a thermodynamic frame of reference. I live on the surface; the surface is my frame of reference. So how can this be controversial?

The base line for determining the magnitude of the Greenhouse Effect (GHE) is an Earth with no Greenhouse Gases (GHGs). That thought experiment Earth still has an atmosphere and still has an albedo, or reflection of incoming solar energy. My visualization of that Earth is a semi-solid sphere surrounded by a more fluid atmosphere. Since there is no GHE, both the surface and the atmosphere would emit radiation, only less of the surface radiation would be absorbed by the atmosphere. The atmosphere, which would have a high viscosity since its rate of radiant cooling would be lower than the surface's, would be nearly isothermal, approximately the same temperature at every level, due to conductive heat transfer from the surface to the atmosphere.

Standing on the surface of the no GHG Earth, you would measure the same temperature in all directions. That temperature would be approximately 255K, which would emit approximately 240Wm-2 in all directions. That is the no GHE, zero DWLR value. If you prefer, the 240Wm-2 would be the background radiation value. That is unique to my frame of reference. A Top of the Atmosphere (TOA) reference would make the assumption that the thickness of the no GHG Earth atmosphere is negligibly small. I disagree.

With GHGs, the surface temperature on average is about 288K, with an energy flux of approximately 390Wm-2. The surface impact of the GHE would be the difference between the surface flux and the 240Wm-2 background, roughly 150Wm-2, or 160Wm-2 if the 396Wm-2 Trenberth surface value is used; I use 160Wm-2 below. Since the source of that DWLR is not the surface, but some point in the atmosphere, the source value of DWLR would be greater than 160Wm-2, if it is indeed a true source of reflected energy averaged over the entire atmosphere. Depending on the altitude of that source of DWLR, the value would vary.

Since the source is likely in the lower atmosphere, near the average mass of the atmosphere, I estimate the average DWLR magnitude as approximately 220Wm-2. That value is based on my estimate of the average emissivity between the surface and that point: 160/220 equals 0.73, the average emissivity from the DWLR source to the surface. There is no perfect energy transfer, so there will be energy lost in the transfer, if DWLR is a true source of energy.
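
The numbers behind that, straight from the S-B relationship:

```python
# The surface frame of reference in numbers: 255K emits ~240Wm-2,
# 288K emits ~390Wm-2, and the 160Wm-2 GHE estimate against the
# 220Wm-2 DWLR source gives the ~0.73 emissivity quoted above.
sigma = 5.67e-8
print(sigma * 255 ** 4)    # ~240 Wm-2, the no-GHE background
print(sigma * 288 ** 4)    # ~390 Wm-2, the actual surface
ghe_surface = 160.0        # surface impact of the GHE, Wm-2
dwlr_source = 220.0        # estimated source magnitude, Wm-2
print(ghe_surface / dwlr_source)  # ~0.73 effective emissivity
```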

Since we are comparing a real world to a thought experiment world, the real world values have to accurately correspond to the thought experiment values, or apples and oranges are being mixed.

Many choose a TOA frame of reference. From that perspective, the emissivity is approximately 0.61, which would require 160/0.61 = approximately 260Wm-2 of GHG produced DWLR, or, as some seem to think, a nifty power series manipulation to arrive at approximately 330Wm-2 at some point near the TOA.

As long as the demands of the choice of frame of reference are carefully met, either can produce accurate results. Mine, IMHO, is easier to maintain, more flexible and provides more information for the actual surface.

Part of that information is the tropopause temperature. In order to warm the surface, the GHGs would have to cool the tropopause. Increasing the surface temperature by 33C, from 255K to 288K, would decrease the tropopause from 255K to 222K, approximately -51 C. That is about the range of the average tropopause temperature, after allowing for all the approximations. In order to meet the requirements of conservation of energy, 240Wm-2 of DWLR would be the maximum limit of the GHE. That would be the equivalent of perfect insulation by all GHGs. In order to exceed that limit, the tropopause would have to start warming the stratosphere. While that is possible, the amount of additional GHGs available appears to limit that possibility. That is what I am working on: what is the realistic limit?

Saturday, January 21, 2012

The Speed of Second Sound

While climate change junkies quibble over which theory is the most fun to bash or support, I keep thinking no one has proposed a method that can even come close to ending the controversies. This has led me to what is either outside the box creative problem solving or the nut house. Maybe a little of both. Relativistic Heat Conduction, RHC, is the part most people think is nuts.

One of the controversies of RHC is the speed of second sound, the limit on the rate of phonon flow in a medium. The phonon is a hybrid thermal quantum, stuck between being a photon and an electron, only a little on the slow side. So RHC modified the heat equations to include a C squared term, just like the big boy relativity equation with the little c squared term, where the little c is the speed of light.
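
For reference, the standard hyperbolic (Cattaneo-Vernotte) form that RHC-style equations build on looks like this; the relaxation time tau and the thermal diffusivity alpha set the finite propagation speed C, the speed of second sound. This is the textbook form, not anything specific to a particular RHC paper:

```latex
% Hyperbolic heat conduction (Cattaneo-Vernotte form).
% tau is the relaxation time, alpha the thermal diffusivity.
\tau \frac{\partial^2 T}{\partial t^2} + \frac{\partial T}{\partial t}
    = \alpha \nabla^2 T,
\qquad C = \sqrt{\alpha / \tau} \quad \text{(speed of second sound)}
```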

Photons and phonons are different in that photons are accepted as being real, though theoretically difficult to describe, while phonons are just plain theoretical, but their existence would simplify things.

Real or not, a quantum of thermal energy is something that can be useful. The speed limit for that quantum in a medium would be nice to know. The momentum of that theoretical quantum, limited by some characteristic of its medium, would also be nice to know. With some standard for the phonon flow characteristics, derived from basic thermodynamics, we would have something that could be directly compared to the real photons.

Real photons leaving the Earth would love to travel at the speed of light, but they can't, even if they are not absorbed by gas, liquid or solid molecules. When they are absorbed, that is a rather dramatic change in velocity. Since a photon has no mass, only momentum in its particle disguise, there is an assumption that the absorption energy lost to the molecule is very small with respect to the energy of the photon. With high energy photons, reflection is assumed to be perfectly elastic. It is, after all, very small with respect to the energy we can measure.

Low energy photons are more likely absorbed than reflected. Since energy and mass are related, the massless low energy photon has less masslessness than a high energy photon. The low energy photon's velocity from the surface to space is reduced by a fraction of a second because of the refraction, absorption, emission and collisional transfer of its changing energy and degree of masslessness.

At some point, the theoretical masslessness of the real photon is likely to approach the finite speed of second sound limit of the theoretical quantum called the phonon.

Now here is the fun part, the finite speed of second sound is controlled by the density of the media which is influenced by gravity that has its own theoretical quantum called a graviton. Now wouldn't it be a pip if the low energy photon, phonon and graviton all converge on an energy and degree of masslessness that was finite?

That might even explain some of the other weirdness in the universe. Like solar wind particles getting a boost from a theoretical nearly massless quantum of energy which would produce a higher velocity with one relative impact and a lower but still high enough for escape velocity from another relative impact.

All of this is of course just musings. I do though think it would be fun to build a model based on the Kimoto equation and RHC to see where it might lead. After all, we have a crap load of data, why not have fun with it :)

Friday, January 20, 2012

That Dang Cartoon, Again!

It seems like every time I turn around, Trenberth and Kiehl's or Kiehl and Trenberth's energy budget gets brought up. It is still a cartoon, it is still an estimate, and it is still based on data not accurate enough to prove or disprove anything.

The latest version, at least to my knowledge, is here, http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/BAMSmarTrenberth.pdf

It still shows both the daylight averages and the daily averages, so everyone seems to want to combine the two, which is double dipping. Everyone is still saying that the surface absorbs about 240Wm-2 and emits, now, 396Wm-2. That is not what is happening.

If you look at what the surface is absorbing from the sun, that value is 161Wm-2. Since the surface average temperature is about 289K, it should be emitting 396Wm-2 if it were a true black body. It is not a true black body; it is close to a true black body. Since water covers most of the surface, the average emissivity of water, approximately 0.995, is 0.5 percent less than perfect. 0.5 percent of 396Wm-2 is 2Wm-2. So right from the get go, there is 2Wm-2 of error while trying to determine 0.9Wm-2 of radiant imbalance, or warming due to CO2. The emissivity of water increases as turbulence increases: more wind, more heat loss. Natural variations in the circulation of the atmosphere and oceans change the amount of heat loss per unit time.
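
The size of that emissivity correction, worked out:

```python
# The 0.5 percent emissivity correction alone is bigger than the
# imbalance being hunted.
sigma = 5.67e-8
surface = sigma * 289 ** 4          # ~396 Wm-2 if a perfect black body
emissivity_water = 0.995
error = surface * (1 - emissivity_water)
print(error)                        # ~2 Wm-2, vs the 0.9 Wm-2 imbalance
```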

Would there be missing heat? No way there could not be missing heat! Get over it, it is not easy to measure. And no, the assumption of ideal black body radiation is not valid unless every calculation is based on perfect black body emissivity. Then the difference between observation and calculations, i.e. models, gives some indication of what is happening.

So we have an imperfect value for both the average solar absorbed by the surface and the average emission of the surface. Since the surface absorbs about 161Wm-2 and emits about 396Wm-2, 235Wm-2, if that is the true value, has been stored at some time in Earth's history and/or manufactured internally by the Earth.

That 235Wm-2 is the atmospheric impact on the surface, less whatever energy the Earth generates internally. That internally generated energy is assumed to be negligibly small. It includes geothermal, biological (people, plants and animals produce heat), combustion of biological materials and fission of nuclear materials, totaling approximately 0.17Wm-2, or about 20% of the estimated 0.9Wm-2 radiant imbalance due to CO2.

The atmosphere absorbs solar energy also, 78Wm-2 on average per the Trenberth energy budget. That also has a margin of error greater than the 0.9Wm-2 imbalance Trenberth is attempting to illustrate in his cartoon.

The global average surface temperature, which is the clue that there is global warming, has a margin of error of, optimistically, 0.1 degrees C. That is a +/- 0.54Wm-2 error possibility while attempting to determine 0.9Wm-2 of radiant imbalance.
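
That conversion is just the dF = (4F/T) dT relationship from the Kimoto post:

```python
# Converting the +/-0.1 C temperature uncertainty into flux, using
# dF = (4F/T) * dT at the global mean surface values.
F, T, dT = 390.0, 288.0, 0.1
print(4 * F / T * dT)    # ~0.54 Wm-2, again near the 0.9 Wm-2 target
```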

Is there missing heat? There is heat missing everywhere. Trenberth's travesty is he thinks it is a travesty that there is missing heat.

Now let's think night time. The cartoon shows 17Wm-2 of "thermals", rising dry air due to conductive warming at the surface carried aloft by convection. That value was approximately 24Wm-2 the last time Trenberth drew a cartoon. The latent heat flux is 80Wm-2 on this cartoon, where it was 78Wm-2 on the last cartoon. The radiation to free space is 40Wm-2 on this cartoon, the same as on the last cartoon.

The most referenced estimates of Earth's Energy Budget, the Kiehl and Trenberth cartoons, are so bad that they are not even included in Wikipedia!

So can gravity cause a significant impact on climate? Yes, it most certainly can. The question is, how significant may it be? If it can be on the order of 0.2Wm-2, it is significant. 0.2Wm-2 over the Earth's surface area is approximately 1.0E+14 W in total. That is about the same amount of energy released to the atmosphere by geothermal, tidal change and wind friction, all of which are influenced by gravity and orbital forces.

Update: A PhD on one of the blogs mentioned that K&T is a useful cartoon, and that if I want to criticize K&T, I should offer a better cartoon. I started one. That was just the NASA energy budget with values based on more current observations. My estimate of down welling longwave radiation wasn't a big hit, though.

My problem, it seems, is separating background temperature from DWLR. If the Earth didn't have greenhouse gases, our atmosphere would be nearly isothermal and highly viscous. Standing on what now is the surface with your non-contact thermometer, you would measure the same temperature in all directions. That would not be back radiation, it would be "back ground" radiation. Once the greenhouse effect is turned on, the difference at the surface would be the impact of the down welling long wave radiation, about 160Wm-2. Since the origin of the DWLR would be above the surface, I estimated the maximum DWLR value, at the average radiant layer of GHGs, to be approximately 220Wm-2 near the tropopause, with approximately 100% of Eout, or 240Wm-2, the absolute maximum value that DWLR could have. If you correct for local temperature when measuring the sky at night, you may come up with about the same estimate.

Thursday, January 19, 2012

The Coming Ice Age? the Illusion

If a theory works, it works for all applications from all perspectives. This still does not prove a theory, just proves it is difficult to disprove the theory. Arrhenius' greenhouse gas equation is a result of his attempting to develop a theory that carbon dioxide causes the ice ages. His theory didn't make it, but it was revived to explain global warming. Warming caused by CO2 correlates with concentrations somewhat, but there are exceptions. So Arrhenius' equations are not close enough to be a solid theory. To understand why, the true magnitudes of the impacts on the atmosphere and oceans need to be better understood. As the magnitude of one impact decreases, other much smaller impacts by other physical variables become more significant. It becomes a very interesting puzzle, resembling chaotic changes due to unknown impacts. Chaos is explainable. The inertia of one process does not match the inertia of another. That causes the slower process to over run its ideal equilibrium for conditions set by the changing faster process. So if chaos is explainable, it may be calculable to a degree that it may be predictable.

A stumbling block to predicting chaotic patterns is that the data available is not perfect. It has its own chaotic nature. To make sense when comparing two chaotic data sets, a common initial state is required. The one biggest hurdle in climate science is that initial state, the Ice Ages.

The solar cycles have the greatest correlation with temperature over the Ice Ages, but the degree of change in solar impact is smaller than required to produce the estimated temperature change of the ice ages. Something is wrong if the chaotic nature of climate is to be determined to a degree that it is predictable. That out of range value is the Vostok ice cores.

One of the more significant orbital variables that changes the solar impact on Earth's climate is axial tilt. The Earth's tilt changes a few degrees over a period of roughly 41,000 years. As the angle of tilt changes, the wobble on the axis changes. Both of these orbital changes are internal to the Earth/Moon orbital system. Changes in solar output and changes in the distance from the sun during orbit are external factors. The internal factors are key to understanding climate.

The ice ages are characterized by ice, of course: huge glaciers that covered large portions of the northern hemisphere. That change in the mass distribution, and the altitude of the mass, changes the tilt and wobble of the Earth. As ice mass builds in the Canadian region and upper United States, the center of rotation shifts toward that center of mass. This region has the greatest impact on rotation because it has a higher average altitude and sufficient land surface area to support the mass.

At the southern polar region, the Antarctic continent is more nearly centered on the current center of rotation. Once the accumulated northern hemisphere ice mass is great enough, the southern pole shifts away from the center of the Antarctic continent. When the maximum tilt occurs, the precipitation patterns change. More ice builds on the Antarctic continent and more rain falls in the region of Siberia. Since Siberian rivers drain to the Arctic ocean, when those rivers are blocked by glacial ice, the Siberian region becomes a huge inland lake. That lake changes the precipitation pattern in the Northern hemisphere, increasing the rate of snow and ice accumulation in the Canadian/US region.

What happens at this point is that the regions that the Antarctic and the Arctic draw their moisture from change. As the Australian land mass becomes cooler with the shift, the average temperature of the oceans feeding the Southern pole decreases. This changes the average temperature and CO2 concentration recorded in the Vostok ice cores. The question becomes, how indicative of global average temperature are the ice core records?

During interglacial periods, the ice cores indicate that CO2 concentration was about 280PPM and that temperatures were about 8 degrees warmer. The concentration of CO2 can be calculated to be approximately controlled by a water temperature of 300K. That is an example, not something that should be taken as a solid value. During a glacial period, the CO2 concentration is approximately 190PPM. That can be calculated to be associated with an average water temperature of 280K, also an approximation. So during an interglacial, the moisture provided to the Antarctic could be supplied more by the mid latitudes, and during a glacial period, more by the lower latitudes. That agrees with the shift of the southern pole from the center of the Antarctic land mass to the Eastern edge, where more moisture from cooler oceans would be included in the average precipitation.

If this is the case, then the Vostok ice cores are telling a different story than commonly reasoned: the average temperature of the Antarctic is not changing as much; the average temperature of the oceans providing the moisture is changing. A subtle but distinct difference.

At a maximum glacial period, the inertial masses of the Antarctic and the Arctic regions are competing. A large portion of the stabilizing mass in the Arctic region is the water contained in the Siberian region. Once the maximum inertial forces of the Northern and Southern poles coincide, the likelihood increases that the glacial dams blocking the outflow of the Great Siberian Lake reach a breaking point. Once that happens, the draining of the region changes the mass balance, adding to the tilt and wobble of the Earth's rotation. This decreases the probability of the Great Siberian Lake reforming sufficiently to restore the pseudo-stable axial rotation.

This possibility leads to new possibilities. At the maximum tilt, the geomagnetic field would shift slowly as the internal dynamo tries to find its new equilibrium. The draining of the Great Siberian Lake alters the internal dynamics, forcing the internal dynamo to seek a new equilibrium. If the rate of change is large enough at the maximum axial tilt, the shift can cause a geomagnetic reversal.

The main question involving CO2 is how much impact it has. Without including positive feedbacks, the impact from 190PPM to 420PPM is approximately 2.5 C. The decrease in CO2 caused by ocean absorption with such a small change in temperature is much smaller. So the CO2 concentration change in the Vostok ice cores is unlikely to be due to a change in global temperature; it is more likely a change in biological CO2 utilization and glacial sequestering.
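
For scale, the commonly used 5.35*ln(C/C0) forcing approximation over that range gives:

```python
# The standard no-feedback bookkeeping for the glacial-to-modern CO2
# range, using the common 5.35*ln(C/C0) forcing approximation.
import math
dF = 5.35 * math.log(420 / 190)   # ~4.2 Wm-2
print(dF)
# Multiply by a sensitivity factor to get temperature: the no-feedback
# Planck response is roughly 0.3 K per Wm-2, while the ~2.5 C quoted
# above implies a factor closer to 0.6 K per Wm-2.
```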

If this theory is correct, that internal variability determines the ice ages, then the impact of man on climate is significant, and significantly different than theorized. Man has more impact on the rotational tilt of the Earth by removing accumulating snow and ice than on the atmospheric physics. The balance of both should be considered.

Saturday, January 14, 2012

The Never Ending Debate

There are as many theories as there are mechanisms in climate science. So the debate will never end. It would be nice to have a starting point to at least work from, so I may be able to better understand some of the more exotic theories.

I start with a ball in space with a surface temperature of 288K and see what happens. Not much to work with here, so there should be little to debate. Space has a temperature of about 3K. The tropopause has a temperature of about 213K. If you consider that the average minimum temperature is less than the starting temperature, there is an average rate of cooling by all thermodynamic means. The average minimum surface temperature of the Earth is... it doesn't exist.

It has been used by botanists to determine change in plant growth with global warming, but it is not a big glossy iconic banner of the climate science doomsayers. The nocturnal global average temperature is rising faster than the average global temperature. How much faster doesn't seem to be all that important to some, but it is the strongest "signature" of radiant impacts on surface temperature. The global average maximum temperature is not rising at the same rate. That would be the strongest "signature" of the Earth's response to radiantly enhanced global warming. We only get to see the average, not the average of the averages. This is an interesting omission, due to the complexity of determining just the average. Not a simple problem, it seems.

A way around that, not great but somewhat informative, is to use the average cooling rates. This is tricky, since the exact rates are complex, but we do have estimates from before NASA got kicked out of the Earth Energy Budget business thanks to currently not so trustworthy alarmists. NASA had a budget that indicated that conductive (a combination of conductive and convective) was 7%, latent (possibly missing the sensible portion of latent cooling) was 23%, and radiant in total was 21% of all of the energy absorbed by the surface from solar. Assuming these are reasonable proportions of the nocturnal emission, that works out to 13.7% for conduction, 45.1% for latent and 41.1% for radiant. These values were for a surface temperature at the time of 288K producing 390Wm-2 of outgoing long wave radiation, which should be approximately the total energy leaving the surface, on average. As nocturnal flux, then: 53.3Wm-2 for conduction, 175.9Wm-2 for latent and 160.9Wm-2 for radiant. If you look at it this way, you see that all three play a significant role in surface cooling.
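
A sketch of that bookkeeping:

```python
# Rescaling the old NASA daily percentages (of absorbed solar) into
# nocturnal shares of the 390Wm-2 surface emission. Small differences
# from the rounded numbers quoted above are just rounding.
shares = {"conductive": 7, "latent": 23, "radiant": 21}  # percent of solar
total = sum(shares.values())                             # 51 percent
surface_flux = 390.0                                     # Wm-2 at ~288K
for name, pct in shares.items():
    frac = pct / total
    print(f"{name}: {100 * frac:.1f}% -> {surface_flux * frac:.1f} Wm-2")
# conductive: 13.7% -> 53.5 Wm-2, latent: 45.1% -> 175.9 Wm-2,
# radiant: 41.2% -> 160.6 Wm-2
```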

The temperature of the tropopause, approximately 213K or -60C, radiates 117Wm-2 by the S-B equation anyone reading this should know and love. So if we subtract that from 390Wm-2, we have 273Wm-2, the amount of surface energy that interacts with the atmosphere up to the tropopause. 117Wm-2 would be the amount that could be considered the "tropopause greenhouse emission rate". This is a value just for determining the magnitude of the individual outgoing fluxes in the troposphere.

The big question is the radiant part of this puzzle, so the greenhouse effect of the tropopause, applied only to nocturnal outgoing long wave radiation, would be 117/160.9 = 0.727, an emissivity of 0.727. Yes, there are other impacts in the atmosphere, but we live on the surface.

Typically, the top of the atmosphere emissivity is used to determine the impact of a 3.7Wm-2 increase in forcing for a doubling of CO2. I personally believe that is happy horse hockey. But the math used is a series that reduces to F(surface) = 1/(1-emissivity), which would yield 1/(1-0.727) = 3.66, so the 3.7Wm-2 would be felt as 3.66*3.7 = 13.56Wm-2 at the surface, for my tropopause greenhouse effect estimate.

So my estimate indicates more impact of forcing would be felt at the surface. But there is one other thing to consider: as the portion of the outgoing radiation reflected increases, the other fluxes, conductive, latent and the non-interactive radiant, would increase. Energy will find its path of least resistance.

NASA kindly included the percentage of outgoing radiation that interacts with the atmosphere, 15%, which using the same ratio as before indicates that 29.4% of the nocturnal outgoing would interact with the atmosphere; the remainder would be atmospheric window spectra. Some of this interaction is likely above the tropopause, but this is an estimate.

Latent and the atmospheric window fluxes should be unarguably cooling at the surface. NASA indicates that 6% of the 51% emitted from the surface is atmospheric window radiation, which becomes 11.8% of the surface flux, or 46Wm-2, in my example. If the surface flux were to increase by 13.56Wm-2 due to the doubling, or 13.56/390 = 0.035 (3.5%), then latent and window radiation would increase by 3.5%: (175.9+46) = 221.9Wm-2 would increase to 229.7Wm-2, or by 7.7Wm-2. The net warming due to the doubling would be 13.56-7.7 = 5.86Wm-2. This is just an estimate, don't go crazy yet.

The surface temperature at 395.86Wm-2 would be 289.06K, or 1.06C of warming felt at the surface. So how does that compare to the actual measurement since the preindustrial period? Since it is assumed that the impact is a natural log function solely of the change in concentration of CO2, and we are now at 390PPM versus the assumed 280PPM initially, ln(390/280) = 0.33 versus ln(2) = 0.69 for a doubling, so 0.33/0.69 = 0.48, or 48% of the way home. 48% of 1.06 is 0.51 degrees. This is a little less than the current warming, by about 0.2 degrees, depending on the time period you assume provides the average for the preindustrial period.
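
For anyone who wants to poke at it, here is the whole back-of-envelope chain in one place; all the numbers come straight from the text above, and it is an estimate, not a prediction.

```python
# The tropopause-frame estimate, end to end.
import math

emissivity = 117 / 160.9            # tropopause / nocturnal radiant, ~0.727
gain = 1 / (1 - emissivity)         # ~3.66 series amplification
surface_forcing = gain * 3.7        # ~13.56 Wm-2 for a CO2 doubling

# Negative feedback: latent + window fluxes scale with the surface flux.
negative = (175.9 + 46) * (surface_forcing / 390)  # ~7.7 Wm-2
net = surface_forcing - negative                   # ~5.9 Wm-2 net

sigma = 5.67e-8
T_new = ((390 + net) / sigma) ** 0.25
print(T_new - 288.0)                # ~1.06 C per doubling

progress = math.log(390 / 280) / math.log(2)       # ~48% of a doubling
print(progress * (T_new - 288.0))   # ~0.51 C expected so far
```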

There are a lot of assumptions in this example, as there are a lot of assumptions in all the theories. CO2 does have a radiant impact; the question is how much of a radiant impact it has on the surface, globally. I contend that this is just as reasonable an estimate as any other given the complexity, even though, using the same basic methods used to estimate the much greater impact some suggest, only the tropopause is considered as a thermodynamic boundary.

What this does do much better is give a reasonable perspective on the negative feedbacks that should be expected, all things being equal of course. There are a lot of things I could have done to make this appear to be a more valid estimate. Those would be futile given the uncertainty involved; there is only so much a turd can be polished.

Oh, by the way, the tropopause gets little respect. It is a major thermodynamic boundary that is not changing as estimated. While there are a lot of things that are complex, this is one that would seem to be the most critical to understand.

Tuesday, January 10, 2012

Non Equilibrium Thermodynamics and Climate

Dr. Judith Curry has an interesting post on Non Equilibrium Thermodynamics. This is right up my alley, since I have been trying to explain why the potential warming caused by CO2 is half of that estimated. Maximum/minimum entropy is the controlling range of the atmospheric effect. Perfect insulation would be minimum entropy and maximum cooling would be maximum entropy. So this should at least get a few people on the same page.

The main reason I am considered a whack job is the conductivity impact of CO2 on the atmosphere. As I have mentioned before, the thermal coefficient of conductivity for CO2 is non linear, peaking at -20 C. The potential impact that has on climate is obvious to me, but not so obvious to others for some reason.

The paper does address conductivity somewhat, but only as mass transfer and mixing. CO2 has some impact there, but the main conductive impact is at thermal boundary layers, mainly the ocean/atmosphere boundary but also at the latent/radiant layer as well.

CO2 enhances conductivity at the ocean/atmosphere boundary layer in really two ways. First, actual conductivity, or collision with warmer molecules: CO2 has twice the sensible heat capacity of standard air at -20C and about 5 times at 20C. Second, by photon absorption and collisional transfer to other gas molecules.

Of course, CO2 is just another molecule between boundary layers, although it can transport more heat than the other molecules, save water vapor with its latent heat.

The tougher part to explain is the latent/radiant boundary layer enhancement. Here CO2 can only direct half of its absorbed energy generally downward via emission, but all of its absorbed energy via collision, just as at the surface boundary layer. Both transfers would heat the surrounding gases, increasing convection. Should that heat involve a water phase change, there is the major enhancement. This is where I need better Maximum Local Emissivity Variance information.

With the Non Equilibrium Thermodynamics principles in mind, perhaps some may notice that CO2 does have a significant impact on both maximum and minimum entropy.

I am still at a loss on how to explain the 65Wm-2 sink, which I have not yet found a paper that addresses. Should that get resolved, then I may be able to get the modified Kimoto equation somewhat accepted, since it explains pretty much all of this crap.

Blending

I started this blog because, on what used to be my fun blog, Our Hydrogen Future, there were too many external influences on the topics I had begun to discuss, hydrogen as a transportation fuel and as an energy storage means. A hydrogen economy is pretty complex. So I wanted to separate the physics from the politics and bring more of the physics portion to this blog. There was too much blending of issues, which detracted from the issue, hydrogen.

Here, I can't get too far away from politics either. Politics is a major factor in every part of our lives, even science. The more complex the issue the more politics will be involved at some point. That is not good, bad or particularly unusual, it is just life.

This post on blending could be on either blog. The advantage of hydrogen is that it allows the blending of various technologies using one of the basic building blocks of the universe. Any form of energy can be used to manufacture hydrogen, at differing efficiencies, but generally improving overall efficiency when hydrogen is not the primary objective but the catch all energy pigeon hole.

The fear of catastrophic global warming due to greenhouse gas emissions was an excellent motivation to look to hydrogen for future use, even if depleting conventional fuel resources and declining energy security were not.

It seems people don't think that way. For some reason, people have become much more binary, yes/no or good/bad, in their thinking. Action will only result from one of three situations in that case: yes wins, no wins, or some blend or compromise develops. I am a centrist, or blender, by nature. It is not difficult for me to read the writing on the wall and know that the truth is near the middle, at least initially.

The EPA MATS is a perfect example of "no" winning. A very caring and concerned group of people that want to protect the world selected one potential danger, coal, and attempted to sentence it to death with regulation. Their binary world believes in perfection, and that by eliminating every perceived threat, one at a time, the perfect world will evolve. If a perfect energy source existed, they would be on the right path to realize their vision of perfection without upsetting the energy applecart. One of the darlings of the "kill coal" movement is biomass. Biomass is not without its ugly realities. So the EPA MATS will kill more biomass production, as a percentage of total production, than it will coal. I would think that is an undesired consequence.

The "yes" crowd, in the eyes of the "no" crowd, only want more damaging coal to be used because the "yes" crowd are selfish, ignorant and mean people. The "yes" crowd may be all of that or they may not be; it does not really matter, because the reality is that if they have their way, the middle, blending, would be missed.

From an engineering point of view, efficiency is the name of the game: making more with what you have available. So as an engineer, I would not eliminate any tool for improving efficiency, and that includes nasty coal.

Climate science has the same yes, no or good, bad conflict. The middle ground is lost on most other than the engineers and a few other groups. We know about blending.

From an engineering perspective, there are short term "solutions" that may lead to longer term "solutions", which may lead to even longer term "solutions". Solutions in quotes, because engineers know that there are likely no "true" "solutions" for the ultimate future, because no one is that smart. We just know that improving efficiency can lead to more improvements. That is what we do: attempt to optimize efficiency within the constraints of the real world, physics, finance and political limitations. In other words, we are used to dealing with overly optimistic and overly pessimistic assholes.

Sunday, January 8, 2012

Venus Greenhouse Effect with Geothermal Boost

It is a lazy Sunday. Since I get a little flack over thermal conductivity needing to be considered in Earth atmospheric physics, I thought I might revisit Venus, the granddaddy of greenhouse effects.

Size wise, Venus and Earth are close; atmosphere wise, they are two different worlds. Earth has a total solar irradiance of about 1367Wm-2 and Venus about 2614Wm-2, per the NASA Planetary Fact Sheet.

Neglecting the difference in rates of rotation, since both are spheres, the average surface irradiance would be 1/4 of that, or 341Wm-2 for Earth and 653Wm-2 for Venus. Without any reflection or greenhouse effect, the average surface temperature would be about 278K for Earth and about 327K for Venus, using S-B for a perfect black body. If both had the perfect greenhouse effect, I contend that the bond albedo would be 1, or no energy escapes the atmosphere. Since both planets receive sunlight on only one side, perfect insulation would mean that they would lose no heat, and that the most heat either could gain would be the average solar irradiance. Earth's maximum average temperature would be 556K and Venus' would be 654K. This, by the way, is being a bit generous.

The maximum average irradiance of the sunlit side of the planets, 1/2 of the total, would be about 683 Wm-2 for Earth and 1307 Wm-2 for Venus. Their maximum average S-B black body temperatures would be 331 K for Earth and 389 K for Venus with no atmospheric effect; doubled for perfect insulation, 662 K for Earth and 779 K for Venus.

Using what I think are the more realistic estimates: Earth radiates about 240 Wm-2 to space, a black body temperature of about 255 K, so 556-255=301 K is its maximum GHE. Venus radiates 65 Wm-2, a black body temperature of 184 K, so 654-184=470 K is its maximum GHE. For you doubting Thomases, the absolute maximum for Venus would be 779-184=595 K.

The average surface temperature of Venus is about 737 K; both estimates fall significantly short of the actual average surface temperature. For this reason, I say Venus' surface temperature is enhanced by geothermal energy from its slowly cooling core. If the GHE supplies the impossible 595 K, geothermal supplies the remaining 737-595=142 K; if it supplies the barely possible 470 K, geothermal supplies 737-470=267 K.
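
For anyone who wants to check the arithmetic, here is a minimal Python sketch of the numbers above. It assumes only the Stefan-Boltzmann law; the step of doubling the black body temperature for perfect insulation is my contention from the paragraphs above, not textbook physics.

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def t_bb(flux):
        # Invert S-B: black body temperature for a given flux
        return (flux / SIGMA) ** 0.25

    tsi_earth, tsi_venus = 1367.0, 2614.0  # NASA fact sheet TSI values

    # Average irradiance (TSI/4) and sunlit-side average (TSI/2)
    print(t_bb(tsi_earth / 4), t_bb(tsi_venus / 4))  # ~278 K, ~327 K
    print(t_bb(tsi_earth / 2), t_bb(tsi_venus / 2))  # ~331 K, ~390 K

    # Maximum GHE: doubled (perfectly insulated) temperature minus the
    # black body temperature implied by what each planet emits to space
    print(2 * t_bb(tsi_earth / 4) - t_bb(240.0))  # ~301 K for Earth
    print(2 * t_bb(tsi_venus / 4) - t_bb(65.0))   # ~471 K for Venus
    print(2 * t_bb(tsi_venus / 2) - t_bb(65.0))   # ~595 K, absolute maximum

    # Geothermal enhancement: 737 K actual surface minus the GHE estimates
    print(737.0 - 595.0, 737.0 - 470.0)  # 142 K and 267 K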

James Hansen is the authority on Venus. The only possible challenge I have to his Venus greenhouse theory is the limit of the radiant impact of CO2. Given the available solar energy, I contend that the maximum GHE impact on Venus is 470 K felt near its surface. Since geothermal energy is likely adding to Venus' surface temperature, I suspect that the concentration, pressure and temperature at the surface make the atmosphere there highly thermally conductive.

This leads me toward two thoughts: there is a limit to CO2 forcing, determinable from Venus, that is significantly less than Hansen estimates, and the impact of CO2's enhancement of atmospheric thermal conductivity on Earth is not negligible.

Now, there is something else interesting about Venus. If it truly is experiencing the maximum GHE, it still radiates 65 Wm-2, a black body temperature of 184 K. 184 K is -89 C, which happens to be the approximate temperature minimum of both the Antarctic and the tropopause. This is where people believe that I have entered the Crackpot Zone. See, I also think that Earth has that same limit on its maximum greenhouse effect.

Since Earth emits about 240 Wm-2 at approximately 255 K as a black body, the lowest it could emit, if I am on the right track, is 65 Wm-2. Unfortunately, determining exactly why that limit exists for both Venus and Earth is not all that easy. On Earth I would expect the geomagnetic field to play a role. Venus doesn't appear to have a geomagnetic field of any magnitude. Both are influenced by the Sun's magnetic field and the solar wind. Venus has no defense from the solar wind, while Earth's magnetic field somewhat offsets the solar field.

In any case, with a 65 Wm-2 limit, albedo would have to increase to maintain the TOA energy balance. Unlike Venus, Earth still has water in significant quantity. Where that water can have the most impact is in the atmosphere. So water will limit Earth's GHE, not CO2. Since water vapor cannot exist in any significant quantity above the tropopause, the impact of CO2 above the tropopause would be nearly negligible in the quantities that can be produced on Earth. Unlike Venus, Earth's greenhouse effect limit is the tropopause and its odd limit of approximately 65 Wm-2.

Just in case anyone thinks I am running down a rabbit hole: I may be, and it appears to be solar in origin.

Wednesday, January 4, 2012

Pressure, Density and Global Warming

There is a growing group of scientists attempting to pin the lack-of-warming tail on atmospheric pressure. They are right but wrong, because they assume that pressure is the horse and surface temperature the cart. Local temperature and pressure, and the potential temperature/energy they define, make up the wagon.

If the surface emits photons at 270 K in the spectrum of a greenhouse gas, the energy absorbed by that greenhouse gas depends on the temperature of the gas. Simple, right?

Well, the total energy of the gas absorption depends on the pressure of the gas, which depends on the density, i.e. the thermal mass, of the gas layer.

Think about the thermosphere. It has a high temperature and no significant heat, because it has no significant thermal mass. So the average potential temperature of the average radiant layer of the gas has to be considered.

Potential temperature is commonly used in meteorology. It relates the temperature of a parcel of air at one pressure to the temperature it would have at another. CO2 absorbs photons of the right wavelength and energy to fill its absorption spectrum. The total energy of the absorption spectrum is approximated by its Planck envelope, which is related to the fourth power of temperature. As the gas layer grows colder, the energy it absorbs decreases rapidly, by the fourth power of the temperature.
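
For reference, the textbook potential temperature relationship for dry air is theta = T*(P0/P)^(R/cp). A minimal sketch, with a purely hypothetical parcel for illustration:

    P0 = 1000.0    # reference pressure, hPa
    R_CP = 0.2854  # R/cp for dry air

    def potential_temperature(T, p):
        # Temperature the parcel would have if brought adiabatically to P0
        return T * (P0 / p) ** R_CP

    # A parcel at 250 K and 500 hPa is equivalent to ~305 K surface air
    print(potential_temperature(250.0, 500.0))  # ~304.7 K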

This is fairly well known in climate science and is expected to be observable as greater warming in the upper part of the troposphere. The tropospheric radiant layer should be about 1.2 degrees warmer than the surface emitting the outgoing longwave radiation absorbed by the greenhouse gases in that layer. According to the theory of global warming, that should produce about 1.0 degree of warming at the surface. Only one problem: the surface is not emitting the photons absorbed by the average radiant layer.

Because the near-surface atmosphere in most regions is effectively saturated by the abundance of greenhouse gases, due to the higher density and pressure of the lower atmosphere, energy absorbed by the surface GHGs is transferred by collision, warming the air mass. The warmed air mass is less dense, so it rises to a new, lower pressure level, cooling adiabatically as it goes. That air mass, or parcel of air, contains the same energy in a larger volume. If it were forced back to the surface, the temperature felt at the surface would be directly proportional to the energy of the parcel. But that is an adiabatic process: no energy added or removed, the only thing that changes is the pressure. When that parcel at the lower pressure emits photons, it is no longer adiabatic; something changed. It gave up some of its heat to something outside of that parcel.

The photon emitted from the parcel has an energy dependent on the temperature of the parcel at that altitude. If that parcel is higher in altitude, meaning lower in temperature, the energy emitted is reduced by the fourth power of the temperature, from the temperature at absorption to the temperature at emission. At the surface, the mean free path is extremely short, so this is not a significant factor. As altitude increases the mean free path increases, so it begins to become significant.
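
The fourth-power falloff is easy to see numerically. The two temperatures here are hypothetical, chosen just to show the scale of the effect:

    SIGMA = 5.67e-8  # W m^-2 K^-4

    def flux(T):
        # S-B emission for a black body at temperature T
        return SIGMA * T ** 4

    # Emission from a 230 K layer versus absorption near a 280 K surface
    print(flux(230.0) / flux(280.0))  # ~0.46, roughly a 54 percent drop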

When that greenhouse gas is not a gas, but a liquid or solid form of water, latent heat leapfrogs the mean free path. Now it gets interesting.

The potential energy of the packet of water or ice crystals is much greater than the energy of the gases at the lower pressure. The gases at altitude can only absorb in their spectrum, which has wavelength AND energy constraints. The gas can absorb a photon at the proper wavelength with greater energy, but can only emit photons based on its energy envelope, on average constrained by the fourth power of its temperature.

While this is all very well known, the equations used by climate scientists do not appear to include the impact of the average radiant layer of the greenhouse gases, which rises in altitude with warming. The Planck response limits the amount of warming until the entire radiant layer gains sufficient energy to return enough energy to be felt at the surface.

Natural convection diffuses the absorbed energy vertically and horizontally, increasing the rate of cooling in the average radiant layer. While temperature can be used in the laws of thermodynamics, it can only be used when the energy and the work done by that energy are consistent with the change in temperature. That is not the case in our complex climate system.

Energy is fungible; the work done is not. So the potential temperature of the climate system, which includes the energy of that parcel of atmosphere, must be considered.

Tuesday, January 3, 2012

Doubling of Carbon Dioxide Does What?

Viewing the typical conversations on the climate science blogs, I was struck by the humorous logic used by the doomsayers. Since RealClimate is the ultimate source for all things blog climate science, let's see what their logic implies.

Since CO2 lags warming in the Vostok ice cores, RealClimate stated that about half of the warming from the glacial to interglacial periods is due to CO2 increasing in concentration. RealClimate are fans of the Arrhenius equation for global warming, where the increase in CO2 has a natural log relationship with temperature. So some value times ln(C1/C0) is the increase in temperature due to CO2.

The glacial periods had an average concentration of 190 ppm CO2 that increased to about 280 ppm by the pre-industrial part of the interglacial. ln(280/190) equals 0.39, so 0.39 times some value would be about half of the warming from glacial to interglacial due to CO2. Since the pre-industrial period, CO2 has increased to about 390 ppm. ln(390/280) equals 0.33, which has caused a maximum of 0.8 degrees C of warming. Let's be generous, call that one whole degree and forget about the little ice age. So if 0.33 caused one degree, the multiplier needed for determining the impact of CO2 concentration change using Arrhenius' formula would be 1/0.33, or 3.0 allowing generous uncertainty.
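
In Python, backing out that multiplier looks like this; the one-degree attribution is the generous assumption from the paragraph above:

    import math

    print(math.log(280.0 / 190.0))     # ~0.39, glacial to pre-industrial
    print(math.log(390.0 / 280.0))     # ~0.33, pre-industrial to present
    k = 1.0 / math.log(390.0 / 280.0)  # attribute a generous 1 C to that rise
    print(k)                           # ~3.0, the multiplier used below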

So if we doubled from our present concentration to 780 ppm, 3.0*ln(780/390) equals 2.08, so warming would increase by about 2.1 degrees. Obviously, CO2 would require a lot of help to generate more than 3 degrees of warming from pre-industrial conditions.

From the glacial to now, CO2 got a lot of help, since RealClimate says that only about half of the warming is due to CO2. From the glacial until now, 3.0*ln(390/190) equals 2.15 degrees of warming. Now we have a few options:

If I used half of the actual increase of about 0.7 degrees, instead of one full degree, for pre-industrial to now, that would make the impact of CO2 pretty small. So let's use the full observed industrial-age warming of 0.7 degrees. The glacial to interglacial warming was about half due to CO2, so let's say that during the glacial period the Earth was twice 2.15, or 4.3 degrees, cooler. Thanks to man screwing things up, now is 4.3 plus 0.7, or 5 degrees, warmer than it was during the last glacial period. So from then to now the factor would be 5, not 3, in the Arrhenius equation. 5*ln(780/190) equals 7.06 degrees warmer at 780 ppm than at 190 ppm during the last glacial period, including a lot of help, at least half. Subtracting the 5 degrees already realized, at 780 ppm the Earth would be 2.06 degrees warmer than it is today, with a lot of help.

So if I use the Arrhenius equation and the estimate by RealClimate from the last glacial, there are only 2.06 degrees more warming "in the pipeline" if CO2 peaks at 780 ppm. Let's say that man is completely stupid and we increase CO2 to 1000 ppm. Then 5.0*ln(1000/190) equals 8.30, or 3.3 degrees more possible warming, including the same help that climate received from the last glacial until now. Since climate didn't get that much help from 280 ppm until now, even assuming the little ice age was the average temperature, do ya think that more than 3 degrees for an increase to 560 ppm might be a little bit overestimated?
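
The whole chain of scenarios fits in a few lines. The factors 3.0 and 5.0, and the 5 degree glacial-to-now offset, are the assumptions laid out above:

    import math

    print(3.0 * math.log(780.0 / 390.0))         # ~2.08 C for a doubling from today
    print(3.0 * math.log(390.0 / 190.0))         # ~2.16 C, glacial to present
    print(5.0 * math.log(780.0 / 190.0) - 5.0)   # ~2.06 C above today at 780 ppm
    print(5.0 * math.log(1000.0 / 190.0) - 5.0)  # ~3.30 C above today at 1000 ppm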

That is the controversy: not whether CO2 may warm the Earth, but how much.

Monday, January 2, 2012

Unified Climate Theory and Atmospheric Mass?

There are a couple of things floating around the blogosphere that seem to be getting effect and cause mixed up: pressure and density in the atmosphere, and the greenhouse effect.

Pressure, density and composition of an atmosphere are important parts of the atmospheric effect, of course. With no atmosphere there would be no effect, and the more atmosphere, the greater the effect. But greenhouse effect and atmospheric effect are not identical.

Basic heat transfer is conductive, convective and radiant, plus, in Earth's case, latent riding along with convective.

Conductive transfer in gases is based on the collisions of the gas molecules. More collisions, more transfer of heat. A denser atmosphere means more molecules are available for collision, so more heat can be transferred. Pressure, mainly the pressure gradient once you have density, controls convection, the rate at which warmer air rises relative to cooler air. More density implies higher viscosity, which means a slower rate of convection.

Density and composition impact radiant transfer. If the atmospheric composition includes radiantly absorbing gases within the Planck envelope of the black body temperature of the surface, greater density of those gases maximizes absorption at those wavelengths.

Latent transfer leapfrogs regions of the atmosphere as convection moves the accumulated latent heat from one temperature/pressure to another, where the latent energy can be released or gained.

That is ridiculously simple because that part is ridiculously simple.

Radiant emission and absorption are limited by the absorption spectrum of the combined atmospheric gases. If CO2 absorbs 15 percent of the radiation from the surface, it can only impact 15 percent of the outgoing radiation, all other things left constant :) This is where it is no longer so simple.

In a dense portion of the atmosphere, molecules excited by absorption have little chance of emitting in their spectrum. They collide with other molecules, so there is conductive transfer of that energy, which may broaden the spectrum of that packet of energy. In other words, spectral broadening depends on the conductive transfer potential.

Venus has a dense atmosphere of CO2 at high pressure. At low altitude, the path length of radiant transfer is basically nil. The lowest millimeter of the atmosphere can absorb outgoing radiation, but conductive transfer rules in the highly viscous surface atmosphere. With conductive transfer ruling, the lowest portion of the atmosphere is effectively iso-conductive: radiant transfer is small versus conductive transfer. The number of collisions serves to broaden the absorption spectrum of CO2 to the point that all available lines of absorption are filled. The CO2 at the surface is nearly a perfect mirror for all wavelengths. On the whole, Venus reflects 60 percent of the visible light and absorbs nearly 100 percent of the longwave radiation. In a way, this is a runaway atmospheric effect, not necessarily a runaway greenhouse effect.

The greenhouse effect is amplification of incoming solar radiation after absorption by the black or gray body. The atmospheric effect is the retention of any energy that increases the surface temperature above the temperature implied by the evident outgoing longwave. If Venus were only at maximum greenhouse effect, its surface temperature would be approximately 440 K, but its surface is nearly 770 K. The additional 300 or so degrees are due to the internal core temperature being felt at the surface through nearly perfect atmospheric insulation.

This can be incorrectly thought of as a greenhouse effect. It is incorrect because not enough solar radiation reaches the lower atmosphere to be absorbed and then amplified by the greenhouse gas, CO2. In the higher atmosphere, where the temperature approaches 440 K, the greenhouse effect begins. Call it a matter of semantics, but Earth illustrates the reason for the terminology.

On Earth there is the addition of latent heat, because of water. Considering that Earth has a surface black body temperature of 288 K, it would radiate 390 Wm-2. Approximately 80 Wm-2 of that value leaves the surface as latent heat, so the effective surface radiant flux is 310 Wm-2. Convection accounts for approximately 25 Wm-2 of surface cooling. Of the remaining 285 Wm-2, approximately 40 Wm-2 does not interact with greenhouse gases above the troposphere; this is the atmospheric window. After removing that 40 Wm-2, 245 Wm-2 is emitted from the surface that may interact with greenhouse gases at varying levels in the atmosphere, mainly near the surface where the density of the atmospheric molecules is greatest. At the surface, conductive, convective and latent transfer all interact to produce surface cooling at a relatively steady rate.
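
The bookkeeping in that paragraph is simple enough to write down; all the round numbers are the ones used above:

    # Surface energy budget, Wm-2
    surface_emission = 390.0  # black body at 288 K
    latent = 80.0             # evaporation
    convection = 25.0         # convective cooling
    window = 40.0             # atmospheric window, straight to space

    interacting = surface_emission - latent - convection - window
    print(interacting)  # 245 Wm-2 left to interact with greenhouse gases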

The physics does not end with the heat transfer. The concentration of the gases in the atmosphere varies with the temperature of the atmosphere and oceans. Each gas has a vapor pressure, so each attempts to maintain an equilibrium with the ocean's concentration at that temperature. Gas molecules, with their energy, flow in and out of the oceans constantly as temperatures and pressures change.

When air warms it can hold more water vapor. As the greenhouse gases absorb radiation, they transfer that energy by collision, conductive transfer, spreading the warming. The air can also contact the surface, absorbing energy via collision. This reduces the vapor pressure for water vapor, which gains the energy to evaporate mainly by conductive means. While radiant energy from the sun provides the energy for evaporation, downwelling or return radiant energy from the greenhouse gases cannot provide the majority of the energy for evaporation. Why?

The same greenhouse gases that absorb surface radiation in their spectra block the return of that radiation in sufficient quantity from more than a meter above the surface. Because collisions broaden the greenhouse spectra and collisions decrease with altitude as pressure decreases, the emission spectrum of the atmosphere changes with density and pressure.

If you compare Earth and Venus, the atmosphere of Earth is less viscous, allowing more rapid cooling of the surface by convection, while the limited rate of convection on Venus leaves conductive transfer as the more rapid local cooling. Conduction moves heat in all directions just as radiant transfer does; on Venus, conductive transfer is simply more rapid.

So atmospheric pressure is important, but all the factors need to be considered, or a pressure theory is no better than the radiant-only greenhouse theory.

Dr. Roy Spencer has a post on the same subject, Why Atmospheric Pressure Cannot Explain the Elevated Surface Temperature of the Earth. In his post he notes that weather tends to cool the surface, and that without weather, convection and surface evaporation, the surface might be 70 C instead of 33 C warmer. That agrees pretty well with my estimate of what the no-greenhouse Earth surface would be.

With no greenhouse gases and an albedo of 0.30, at least half of that 0.30 would be due to the atmosphere: clouds and solar scattering by oxygen and nitrogen. If the distribution of atmospheric versus surface reflection and absorption were the same for the no-greenhouse-gas Earth, the surface absorption would be approximately 175 Wm-2, which corresponds to a temperature of about 235 K, 52 degrees C lower than the present approximate temperature. Greenhouse gases account for approximately 52 degrees of warming, countered by 20 degrees of latent cooling and roughly 12 degrees of conductive/convective cooling. All three are in balance, so while CO2 will increase the radiant warming some degree, it will also increase the conductive/convective cooling over a longer time scale. Water evaporation will initially increase with the additional CO2 warming, attempting to maintain the balance. The caveat: reduced solar surface absorption reduces the impact of the greenhouse gases by removing a portion of their source of energy, which in turn would reduce water vapor evaporation.
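
Again as a check, a couple of lines reproduce that estimate; the 175 Wm-2 surface absorption is the assumption from the paragraph above:

    SIGMA = 5.67e-8                        # W m^-2 K^-4
    absorbed = 175.0                       # assumed no-GHG surface absorption, Wm-2
    t_no_ghg = (absorbed / SIGMA) ** 0.25
    print(t_no_ghg)                        # ~236 K
    print(288.0 - t_no_ghg)                # ~52 C cooler than the present 288 K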

Water vapor cools the surface as it evaporates, but it is the primary greenhouse gas and also the major source of surface and atmospheric albedo. Changing weather patterns with reduced solar will determine how much cooler the average surface temperature may become with a prolonged solar minimum. If the cooling portion of the water vapor feedback is dominant, the conductive cooling will not decrease as much as the radiant, and there will be more cooling, possibly another little ice age if the solar minimum is long enough.

So there does not yet appear to be a unified theory of climate, which is not surprising with such a complex system.