Playing with atmospheric physics is fine, but really not as entertaining as relativistic physics. On the hydrogen blog I started the Phonon Versus Photon thing. Doing the Oh! Antarctica and Carbon Dioxide the Not So Well Mixed Gas posts is going to require some proof of the Relativistic Heat Conduction theory.
I suck at proofs! So this will be a major challenge. I have toyed with a new model of a photon where angular momentum change corresponds to a small but finite change in rest mass. Real mass, regardless of how small, is not high on the theoretical hit parade for photons. Relativistic mass changes in a mixed gas environment would help explain the radiant to conductive relationship, as relative velocities and directions would increase the likelihood of a photon finding the right energy hole in a molecule. Nitrogen has a close vibrational frequency which can be used in lasers, but that transition is highly improbable at atmospheric temperatures and pressures.
Then a few hundred Petajoules/second with peta^n collisions and possibly peta^m absorptions could make things happen. The most likely region to provide some observational data is Antarctica, which has the crappiest data quality. With the new polar orbit satellite, that could change.
One thing that is very interesting in the Antarctic is the surface flux readings. What appears to be interference from the southern magnetic pole could be related to the weak magnetic properties of oxygen at low temperatures. Liquid oxygen has known magnetic properties, so very cold O2 enhancing conduction is much more believable than the nitrogen amplification. Unfortunately, most of this is probably obvious to the people working on this data, so I would be pissing up a rope if I start at the wrong point. It would be a much easier approach, though, which could help avoid the revised photon model.
Anywho, with some reasonable indication that RHC is acceptable, I may be able to move to the 4C boundary and the conclusion of The Coming Ice Age.
New Computer Fund
Saturday, October 29, 2011
I'm a Little Photon Short and Stout
While Vice Admiral Lucia ponders what possible role thermodynamics might possibly have in the most complex fluid dynamics problem ever posed, I am watching my Gators not lose as quickly as normal to the dreaded Georgia Bulldogs. Losing to UGA sucks! They are just as redneck as I am and love to rub it in. The little photon ditty popped into my head on a fourth and 19 touchdown pass that caused a UGA groan I heard all the way down here in the Keys.
If I were a photon, would it matter where I am? It certainly would matter to the Gator photon. If he were at the top of the troposphere he would look down and see all kinds of ugly UGA water vapor molecules wanting to kick his butt. Being a tough Gator photon, I am sure he would do his best to kick some Dawg butt, but you have to be realistic if you want to remain a good Gator photon. Discretion being the better part of valor and all. The Gator photon would juke left and juke right all the while knowing dropping back to regroup is the better option. It ain't easy for a good Gator photon to get back to the surface.
Some Gator photons, much like myself, may just bash into a UGA molecule just to get him hot under the collar. That may make him jump offsides, and then the rest of the Gator photons could kick butt. Those UGA molecules would start upper level convecting all over themselves and forget they are supposed to be defending the dry adiabatic lapse rate (DALR) line. Now tugging on the DALR line from up top is not as effective as pushing from below. But the Gator photons will have to wait until halftime so they can defend the DALR line.
Friday, October 28, 2011
Would a no Greenhouse Gas Earth Have an Atmosphere?
This is something I would pass on to a student that partied too much before an exam so he (or she) could earn back a portion of the grade point average they just lowered and somewhat get back into my good graces. I used to get assigned all kinds of interesting problems that had nothing to do with any course I was taking. Actually, I spent most of my educational life attempting to make up for having more fun than allowed at the wrong time and often the wrong place.
So, even though I don't care to, I am going to pretend I care and attempt to set up the problem. Using basic Wikipedia stuff for Earth.
Surface area of the Earth: 510,072,000 km2 or 5.1e14 m2
Velocity, equatorial rotational: 465 m/s
Mass of Atmosphere: 5.15e18 kg
Composition: 80% N2 20% O2 obviously not exact.
Gravity: 10m/s2
Surface pressure 1bar
Thermal properties: N2 http://www.engineeringtoolbox.com/air-properties-d_156.html Specific heat: 1040 J/kgK Thermal Conductivity: 0.024 W/mC Specific Volume: 0.872 m3/kg Gas Constant: 297 J/kgK
Note: The thermal conductivity of N2 is not zero. Heat will be transferred from the surface to the nitrogen in the atmosphere. Nitrogen has a specific heat of 1040 J/kgK. Nitrogen is not a very effective absorber or emitter of infrared radiation. Basic take away: it can gain heat, but it cannot as easily lose it to space. The specific volume is 0.872 m3/kg, the gas constant is 297 J/kgK, the temperature is 254.5 K and the pressure is 1 bar. But first!
5.15e18 kg divided by 5.1e14 m2 yields about 10,000 kg/m2. The density of air at 1 bar and -50C (223K) is about 1.5 kg/m3, so roughly 6,700 m3 for each 10,000 kg/m2 column, or a roughly 6,700 meter high atmosphere with no significant heat transfer. Yes, that is air, not just N2.
With an average surface temperature of -18C, P1V1/T1 = P2V2/T2 yields V2 = P1V1T2/(P2T1). With P1 = P2, V2 = V1*(254.5/223) = 1.14V1, and 1.14*6,700 is roughly 7,600 meters.
So if the radiant loss is not greater than the conductive gain, there would be an atmosphere roughly 7,600 meters high.
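If you want to check my arithmetic, here is a minimal Python sketch of the same back of the envelope estimate, using nothing but the round numbers listed above and the same constant-density simplification:

```python
# A minimal sketch of the back of the envelope estimate above, using only the
# round numbers already listed (constant density, no heat transfer, as assumed).
mass_atm = 5.15e18    # kg, mass of the atmosphere
area = 5.1e14         # m^2, surface area of the Earth
rho_223 = 1.5         # kg/m^3, approximate air density at 1 bar and 223 K

column_mass = mass_atm / area            # mass sitting over each square meter
height_223 = column_mass / rho_223       # column height if density stayed at the 223 K value
height_2545 = height_223 * 254.5 / 223   # rescale the volume (height) to 254.5 K at constant pressure

print("column mass [kg/m^2]:", round(column_mass))    # ~10,100
print("height at 223 K [m]:", round(height_223))      # ~6,700
print("height at 254.5 K [m]:", round(height_2545))   # ~7,700, roughly the 7,600 above after rounding
```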
Oxygen absorbs strongly in the UV spectrum. There would be virtually no change in the O2/UV relationship on a no GHG Earth. That energy would be absorbed lower due to the lower top of the atmosphere. That energy would cause upper level convection, tending to decrease density at the top of the no GHG atmosphere. That is the creation of the tropopause.
So the Earth would have an atmosphere at least 7,600 meters high, with a tropopause. Its temperature would be roughly 254.5K at the surface with an unknown lapse rate at this point. This is all it takes, but I may dig out the numbers on the weak emission of long wave and the O2 absorption of UV to better estimate the altitude.
It is Not the Destination, it is the Journey
Some people wonder why I am attempting to solve what is a pretty complex problem without resorting to high powered computer applications and listing dozens of references. Part of the reason is I remember a comment one climate scientist made, "Our undergraduates solve for climate sensitivity with back of the envelope calculations."
That's fine, if they can do it so can I. To me it is just an entertaining puzzle. Something to do to while away some time. So my goal is to show how to solve for climate sensitivity, ACCURATELY, with back of the envelope calculations.
One of the somewhat difficult calculations to do long hand is the S-B relationship. F ~ 5.67e-8(T^4) is a little complicated. So I simplify like this:
Why 5.67?
5.67e-8 is the main constant in the Stefan-Boltzmann equation. I say main, because emissivity has to be considered, and the average value used, 0.926, varies with, well, the energy per degree with respect to the radiant properties of the source. One of the things we need to know is the combined emissivity of the various sources and the transmittance of the media between them.
For a CO2 doubling we can assume an object in the atmosphere is an S-B energy source. If it emits energy F at T1, then its difference in energy emitted at T2 would be,
dF/dT = [5.67e-8(T2)^4 - 5.67e-8(T1)^4]/(T2-T1),
=5.67e-8(T2^4-T1^4)/(T2-T1)
Instead of finishing a formal derivation, let's just select a reasonable T2=255K and T1=254K; then,
5.67e-8(255^4-254^4)/(255-254) = 5.67e-8(42.28e8-41.62e8)/1 = 5.67*(0.66)/1. Note that the e-8 and the e+8 cancel, which simplifies the result to 5.67*0.66, or about 3.7 = dF/dT @ 254K, which I round up to 4 F/T @ 254K for back of the envelope work.
Using the same relationship, dF/dT @ 288K = 5.44, or 5.44 F/T at 288K.
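If you would rather let the computer do the long hand part, here is a minimal Python sketch of the same finite difference, assuming a perfect black body (emissivity = 1) and the same temperature pairs used above:

```python
# A minimal sketch of the finite difference above, assuming a perfect black body
# (emissivity = 1); only the 254/255 K and 288/289 K pairs from the text are used.
SIGMA = 5.67e-8   # W/m^2K^4, Stefan-Boltzmann constant

def dF_dT(T1, T2):
    """Average change in black body flux per degree between T1 and T2."""
    return SIGMA * (T2**4 - T1**4) / (T2 - T1)

print(dF_dT(254, 255))   # ~3.7 W/m^2 per K, rounded to 4 in the text
print(dF_dT(288, 289))   # ~5.4 W/m^2 per K
print(dF_dT(288, 289) / dF_dT(254, 255))   # ~1.45, about 1.36 if the rounded 4 is used
```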
So it should be obvious that the change in flux F is dependent on the temperature of the body and the emissivity of the body. For a point source of CO2 forcing, if the source temperature is ~255K, 4 is a good approximation of the change in flux with respect to a small change in temperature, and for a point source at 288K, 5.44 is a good approximation for the change in flux with respect to a change in temperature.
You can simply graph this relationship on appropriate paper, interpolate between the ranges or plug the whole shebang into your computer if that blows wind up your skirt.
The main point is that the surface to TOA relationship is 5.44/4, or about 1.36 (closer to 1.45 if the unrounded 3.7 is used), and for a TOA source with respect to the surface, 4/5.44 or 0.74.
The emissivity is one of the larger questions anyway, so by using the perfect black body relationships, I have a back of the envelope type equation that is accurate and very flexible. "Simplicity is elegance" as my old professor from Bell Labs was fond of saying.
Since the Arrhenius equation is based on the Stefan-Boltzmann relationship;
The doubling of CO2 will produce an increase in forcing, or resistance to OLR, based on the Arrhenius equation equal to Delta F = Alpha*ln(Cf/Co) = Alpha*0.69. The constant alpha is based on the Stefan-Boltzmann relationship for black body radiation. Since I am using the perfect black body approximation I will use 5.67 for alpha; generally, 5.67 times 0.926, or ~5.25, is used. That makes Delta F roughly 5.67*0.69 = 3.9 Wm-2. Since CO2 is assigned 33% of the greenhouse forcing, or 0.33*152 = 50.16 Wm-2, a doubling of CO2 would increase that to about 54.1, which with 4.53 Wm-2 per degree would be 54.1/4.53 = 11.9 degrees versus 50.16/4.53 = 11.07, or roughly 0.9 degrees of warming at the surface.
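As a sanity check, here is a minimal Python sketch of that bookkeeping, assuming the black body alpha of 5.67 and the 4.53 Wm-2 per degree figure derived in The 330 Watt Weirdness post below; the variable names are just mine for illustration:

```python
# A minimal sketch of the Arrhenius-style doubling estimate above, under the
# perfect black body assumption (alpha = 5.67) used in the text.
import math

alpha = 5.67              # W/m^2, black body approximation (5.25 is the commonly used value)
ghe_flux = 152.0          # W/m^2, total greenhouse flux (390 - 238)
co2_share = 0.33          # fraction of the greenhouse effect assigned to CO2
flux_per_degree = 4.53    # W/m^2 per degree (152 / 33.5)

dF = alpha * math.log(2)           # ~3.9 W/m^2 for a doubling
co2_base = co2_share * ghe_flux    # ~50.2 W/m^2
co2_doubled = co2_base + dF        # ~54.1 W/m^2

print(dF)                                                          # ~3.9
print(co2_doubled / flux_per_degree - co2_base / flux_per_degree)  # ~0.87 degrees at the surface
```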
Howsomever, that "at the surface" depends on where the CO2 forcing takes place.
Most guys can remember 0.69, and 5.67 is now burned into my memory.
A New to Me Computer
I am currently in serious negotiations to acquire a slightly used, full size laptop computer. It is in fine shape, it is only missing one key, the U. I hardly ever use U, so I can work around that.
It seems a demonstration of how to make buttermilk southern fried chicken, buttermilk biscuits, basil dirty mashed potatoes with caramelized corn and butter bean southern succotash has some value. Since I likes eatin' near 'bout as much as cipherin', it may be a good deal for the both of us.
While it has not been confirmed, Dr. Richard Lindzen of MIT has been nominated as an official Redneck Physicist with all privileges due such a lofty appointment.
The 330 Watt Weirdness
First, I want to apologize for the typographical errors. The little netbook I am using locks up at every opportunity and I have to retrieve autosaved versions and attempt to remember what I previously corrected. Not fun. Hence the donations for a new computer request.
Climate Sensitivity is defined as the amount of warming that the Earth would experience with a doubling of the amount of CO2 in the atmosphere. Nothing we can do about that, it is the definition.
The Greenhouse gas effect would logically be the amount of warming due to greenhouse gases in the atmosphere. In order to determine the defined Climate Sensitivity, we need to know how much of the atmospheric effect is due to greenhouse gases and how much of that is due to CO2.
The Earth is estimated to be 33 degrees C warmer than it would be without an atmosphere. Is that the greenhouse effect? I don’t think so. Part of that 33C is due to radiant interaction of outgoing long wave radiation and some is not.
In determining that the Earth without greenhouse gases would be 33C colder, it was assumed that 30 percent of the incoming sunlight was reflected, just as thirty percent is reflected on average today. Of that 30%, most is due to clouds, snow and ice, all water in its phases, and more is reflected by liquid water on the surface of the Earth. Clouds, ice and snow are part of the atmospheric effect that may be impacted by a change in CO2, but are not directly caused by CO2. The assumption goes that their reflective effects are fully considered, that they cause the colder temperatures that CO2, water vapor and other radiant absorptive gases compensate for in warming the Earth to its current average of 288 K.
Without this reflective property of water, the Earth would absorb approximately 96% of the 340 Wm-2 average energy we receive from the sun on an annual basis. 326 Wm-2 absorbed at steady state would result in a temperature of approximately 275 K. Using the same ratio of surface and atmospheric albedo as today, 26% reflected by clouds and 4% reflected from the surface, we have 238 Wm-2 absorbed by the surface, and at steady state that would result in an average surface temperature of 254.5 K, via S-B with emissivity = 1. Clouds and atmosphere depress the surface temperature by roughly 20.5 degrees. Without greenhouse gases there would still be an atmosphere, and clouds are required for approximately 87 percent of the reflected sunlight.
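For anyone checking the numbers, here is a minimal Python sketch of those black body temperatures, assuming emissivity = 1 and the 340 Wm-2 average solar input used above:

```python
# A minimal sketch of the black body temperatures above, assuming emissivity = 1
# and the 340 Wm-2 average solar input used in the text.
SIGMA = 5.67e-8   # W/m^2K^4

def T_eff(flux):
    """Black body temperature for a given absorbed (and emitted) flux."""
    return (flux / SIGMA) ** 0.25

solar = 340.0
print(T_eff(0.96 * solar))                         # ~275 K, only ~4% reflected (no clouds, ice or snow)
print(T_eff(0.70 * solar))                         # ~254.5 K with today's 30% albedo
print(T_eff(0.96 * solar) - T_eff(0.70 * solar))   # ~21 K, the roughly 20.5 degree depression above
```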
Why would there be an atmosphere? Because the Earth is not at absolute zero and there is a wealth of nitrogen and oxygen available with thermal conductive and convective properties. Not including this simple physical reality makes it impossible to determine how much impact CO2 has on surface temperature. By definition, that is what is required to be determined. And by definition it has to be determined at the surface.
CO2 is estimated to cause between 5 and 30 percent of the greenhouse effect, with water vapor responsible for the majority of the rest. Since we have a model now, let's allow water vapor to become a greenhouse gas and not add any CO2 or methane. Since the atmosphere depressed the temperature by 20.5 degrees, let's say that water vapor interaction with OLR returns it to 275 K. We can assign the rest, 13 C, to non-condensable greenhouse gases. We still have the full atmospheric effect, including the greenhouse gas effect, of 33.5 degrees, slightly more than the classic 33C published in the literature, with 390 - 238 = 152 Wm-2 the total amount of additional flux created by the greenhouse effect. Dividing 152/33.5 yields 4.53 Wm-2 per degree of warming.
The measured emissivity of the atmosphere from the very top of the atmosphere (TOA) is 0.61, meaning 61% of the surface radiation, calculated at 390 Wm-2 for a perfect black body, is measured in space by satellites. 61 percent of 390 Wm-2 is 238 Wm-2. So these values correspond to both the conditions at the surface and at the TOA. Note: 0.64 is a commonly used value for TOA emissivity, and then the values used for solar in and OLR also vary.
4.53 Wm-2K-1 can be re-written as its inverse, 1/(4.53 Wm-2K-1) or 0.22 Km2W-1, for convenience. This is sometimes called the Planck response or Planck parameter. This does not include any adjustments and assumes perfect blackbody relationships between temperature and radiant energy flux.
Since we are assigning water vapor two thirds of this value and CO2 plus other greenhouse gases the remaining third, this can be re-organized as Pr = 0.67Wv + 0.33CO2 = 0.22 Km2W-1.
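Here is a minimal Python sketch of that flux-per-degree and Planck parameter bookkeeping, using only the values quoted in the last few paragraphs:

```python
# A minimal sketch of the flux-per-degree and Planck parameter bookkeeping above,
# using only the values quoted in the text.
surface_flux = 390.0    # W/m^2, black body flux at 288 K
toa_flux = 238.0        # W/m^2, outgoing flux at the top of the atmosphere
ghe_degrees = 33.5      # K, the atmospheric/greenhouse warming used here

ghe_flux = surface_flux - toa_flux          # 152 W/m^2
flux_per_degree = ghe_flux / ghe_degrees    # ~4.53 W/m^2 per K
planck_param = 1.0 / flux_per_degree        # ~0.22 K m^2/W

print(toa_flux / surface_flux)                     # ~0.61, the effective TOA emissivity
print(flux_per_degree, planck_param)
print(0.67 * planck_param, 0.33 * planck_param)    # the water vapor / ncGHG split of Pr
```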
The doubling of CO2 will produce an increase in forcing, or resistance to OLR, based on the Arrhenius equation equal to Delta F = Alpha*ln(Cf/Co) = Alpha*0.69. The constant alpha is based on the Stefan-Boltzmann relationship for black body radiation. Since I am using the perfect black body approximation I will use 5.67 for alpha; generally, 5.67 times 0.926, or ~5.25, is used. That makes Delta F roughly 5.67*0.69 = 3.9 Wm-2. Since CO2 is assigned 33% of the greenhouse forcing, or 0.33*152 = 50.16 Wm-2, a doubling of CO2 would increase that to about 54.1, which with 4.53 Wm-2 per degree would be 54.1/4.53 = 11.9 degrees versus 50.16/4.53 = 11.07, or roughly 0.9 degrees of warming at the surface.
The only reason I did all this is to show that the S-B relationship for both water vapor and non-condensable greenhouse gases is needed when determining the Planck response and the impact of doubled CO2. The Earth is of course not a perfect black body.
With the equation for Pr and the initial value of 0.22, we should be able to determine the range of error possible in the estimate of a 33% impact of non-condensable GHGs. If the non-condensable GHGs are responsible for 50% of the greenhouse effect, the small increase in forcing due to doubling would have a smaller impact. If the ncGHGs are responsible for only 5% of the greenhouse effect, they would have a larger impact. Since the lowest estimate is about 5%, then 95% of 0.22, or 0.21, is due to water vapor and only 0.01 is due to ncGHGs; quadrupling or eliminating that value makes little change to the impact of a doubling of CO2 as measured from the surface using perfect black body approximations.
Why? Because the impact of CO2 is greatest where it does not have to compete with water vapor as a greenhouse gas.
This is why the K&T over estimation of DWLR is so humorous. If a doubling of CO2 is going to increase DWLR by the 3.7 Wm-2 estimated by the IPCC, the increase would be 3.7/330 or 1.1%. If we use the entire 33.5 degrees, 1.1% of 33.5 is 0.38 degrees. After allowing for estimates of emissivity, Dr. Richard Lindzen determined that per the K&T data a doubling of CO2 would cause 0.5 degrees of warming. Dr. K. Kimoto estimated 0.5 degrees of warming. Anyone that wants to estimate lower than expected warming can use the K&T literature, because it is wrong. I find that absolutely hilarious. It is amazing how many contortions the defenders of the cartoon will go through to "prove" it is right when it implicitly states there is little warming possible due to a doubling of CO2.
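The percentage game is simple enough to check; here is a minimal sketch, taking the K&T style 330 Wm-2 DWLR value and the IPCC 3.7 Wm-2 doubling figure at face value:

```python
# A minimal sketch of the percentage comparison above, taking the K&T style
# 330 Wm-2 DWLR value and the IPCC 3.7 Wm-2 doubling figure at face value.
dwlr = 330.0          # W/m^2
d_forcing = 3.7       # W/m^2 per CO2 doubling
atm_effect = 33.5     # K

fraction = d_forcing / dwlr              # ~0.011, i.e. ~1.1%
print(fraction, fraction * atm_effect)   # ~0.38 degrees if scaled against the whole 33.5 K
```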
The truth is that a doubling of CO2 will cause more warming than indicated in the K&T cartoons. How much though is a little more complicated. To do that you have to start with an accurate model that is a little more involved than the cartoons.
Why 5.67?
5.67e-8 is the main constant in the Stefan-Boltzmann equation. I say main, because emissivity has to be considered, and the average value used, 0.926, varies with, well, the energy per degree with respect to the radiant properties of the source. One of the things we need to know is the combined emissivity of the various sources and the transmittance of the media between them.
For a CO2 doubling we can assume an object in the atmosphere is an S-B energy source. If it emits energy F at T1, then its difference in energy emitted at T2 would be,
dF/dT = [5.67e-8(T2)^4 - 5.67e-8(T1)^4]/(T2-T1),
=5.67e-8(T2^4-T1^4)/(T2-T1)
Instead of finishing a formal derivation, let's just select a reasonable T2=255K and T1=254K; then,
5.67e-8(255^4-254^4)/(255-254) = 5.67e-8(42.28e8-41.62e8)/1 = 5.67*(0.66)/1. Note that the e-8 and the e+8 cancel, which simplifies the result to 5.67*0.66, or about 3.7 = dF/dT @ 254K, which rounds to 4 for back of the envelope work.
Using the same relationship, dF/dT@288K=5.44
So it should be obvious that the change in flux F is dependent on the temperature of the body and the emissivity of the body. For a point source of CO2 forcing, if the source temperature is ~255K, 4 is a good approximation of the change in flux with respect to a small change in temperature, and for a point source at 288K, 5.44 is a good approximation for the change in flux with respect to a change in temperature.
You can simply graph this relationship on appropriate paper, interpolate between the ranges or plug the whole shebang into your computer if that blows wind up your skirt.
The main point is that the surface to TOA relationship is 5.44/4, or about 1.36 (closer to 1.45 if the unrounded 3.7 is used), and for a TOA source with respect to the surface, 4/5.44 or 0.74.
In our next installment, we will discuss the impact of the tropopause temperature variations from ~-50C to -100C on what should be a simple calculation.
Thursday, October 27, 2011
Do the Photon Experiment at Home
That's right! Amaze your friends with the do it yourself home photon experiment.
What you need: a small bead, string, a plastic glass and a bunch of other colored beads, or just dirt if you are cheap.
Tie the bead to the string. Place the bead in the bottom of the glass, then fill the glass with loose beads, or dirt for you thrifty folks. Now slowly pull the bead up through the dirt or beads.
Now drill a small hole in the plastic glass, put the end of the string through the hole leaving the bead attached to the string outside of the glass. Carefully, fill the glass with loose beads or dirt. Now pull the bead back down through the glass to the bottom using the string running through the hole.
Which way was easier? Did it get easier or harder as you pulled? Why did the string break when you pulled too hard to bring the bead back down to the bottom?
Now you have an idea what a photon sees from the surface looking up and from up top looking down to the surface. If you actually spring for the real beads, use a larger bucket instead of a glass and run the string through the bead so it comes out of both the bottom and the top; then you can measure the difference in force required. Plot the force required at different depths in each direction. You are now an official Redneck Science student.
Fun?
Now locate the bead with the string on top and add a weight to the string out the bottom. Vibrate the bucket and watch the bead sink. The faster the vibrations, the faster it sinks.
Wednesday, October 26, 2011
How's That Choice of a Temperature Inversion as a Frame of Reference working for You?
Dr. Kimoto is paying the Blackboard a visit. I am of course not allowed to play, though Lucia did give me a token crackpot thread to play in :)
Dr. Kimoto is having the same issues as I with people basically asking why he doesn't first start his explanation by disproving perpetual motion. Instead of listening and using a reasonable frame of reference, the gang is still in metaphysical world wondering why they have trouble understanding.
With great personal sacrifice, I spent full minutes constructing this high quality graphic of the individual energy fluxes from the surface to illustrate the atmospheric effect and its counter balancing energies.
After pausing to catch my breath, I modified the first drawing to show the combined atmospheric effect and counter balancing energy. It was draining, and something I do not recommend you try at home without adult supervision or refreshments.
By assuming that an undetermined varying value is a "proper" thermodynamic frame of reference, climate science has entered bizarro world. Stay tuned.
Tuesday, October 25, 2011
What the Heck Is Downwelling Long Wave Radiation?
First, if I ruled the world, it would not be called Downwelling longwave radiation. Atmospheric potential energy is a much more apt term. Life is what it is though, so here goes.
Anything that contains heat emits radiant energy. Since just about everything is above absolute zero, even space, everything emits radiant energy. How energetic and at what wavelength that radiant energy is depends on the temperature of the object and the fundamentals of black body radiation.
So in our atmosphere, where is this downwelling infrared radiation? It can be anywhere. Pick a spot and a number and we can work it out. For it to be a meaningful number, the DWLR has to have a source and an intensity. As an energy flow it has to obey the laws of thermodynamics.
The atmosphere as a whole has a temperature greater than absolute zero and contains energy. The energy it absorbs from the surface is the origin of the DWLR and that energy is from the sun.
If we model the DWLR as solely from the surface, then its value is only the value the atmosphere absorbs that originated at the surface. If we model DWLR as the average energy of the atmosphere, then the energy absorbed from the surface and the energy absorbed directly by the atmosphere from the sun are considered the source of the DWLR. The simplest is the energy of the atmosphere.
The point source of the DWLR, or energy of the atmosphere, would be the point in the atmosphere where the total energy of the atmosphere can be approximated as a point or layer. Since the energy into the Earth/atmosphere is less than the total energy from the surface and the atmosphere, the difference between the energy from the surface and the energy out at the top of all the atmosphere is by definition the atmospheric effect. That amount of energy is then the maximum value of the DWLR after allowing for the efficiency of energy flow from its source to the surface, and that source would be the same location as the average accumulation of energy in the atmosphere from the outgoing surface and atmospheric energy flux.
That is a little complicated, but if DWLR is real, it must be properly defined.
If the only source of the DWLR was the surface, then the DWLR would be located where the average of the energy absorbed from the surface could be approximated. Since the absorbed solar is included in the atmospheric energy, the average location of the absorbed energy would be higher than if only the surface were considered.
If we integrated the energy of the atmosphere from the surface to the TOA, we would find that the average energy content of the atmosphere is close to the surface, where the atmospheric pressure produces the highest air density.
Without energy exchanged by radiation, the average energy of the atmosphere would be at the altitude where the dry adiabatic lapse rate resulted in a temperature equal to the temperature of the Earth's surface without an atmospheric effect. Since the Earth is supposed to be 33 degrees C warmer than it would be without an atmospheric effect, the point of no effective DWLR would be 33 C colder than the surface.
There is energy exchanged between the outgoing surface flux and incoming solar energy in the atmosphere. The amount changes with latitude, time of day and season. So only an average value has any physical meaning if we are to determine what impact the DWLR has on surface temperature.
So what makes sense has to be considered when selecting an equivalent point source and intensity of DWLR.
The ideal intensity is simple: with an average surface flux of 390 Wm-2 and an average TOA flux of 240, the ideal value of surface generated DWLR would be 150 Wm-2. Since the sun adds energy directly to the atmosphere, approximately 60 Wm-2 on average, the approximate value of the intensity of the DWLR would be 210 Wm-2. Trying to determine the exact value is a little more complicated.
Theoretically, if the DWLR were a point source of energy, its flow would not be ideal. It would require more energy at a distance to produce an equivalent effect. The difference between the surface generated DWLR portion, 150 Wm-2, and the total DWLR, 210 Wm-2, accounts for the loss of energy during transfer from the surface to the point source of the DWLR.
This is theoretical, so it has to make sense to be accepted. This value of 210 Wm-2 in the atmosphere corresponds to an effective temperature of about 247 K, or -26 C. At the point in the atmosphere where the heat of compression would cause a parcel of air at that temperature to rise to 288 K, the average surface temperature, that would be the source of the DWLR.
The altitude of this point varies with the surface temperature and local atmospheric conditions. So it is an effective reference value to determine if energy is being added to the atmosphere or transferred from the atmosphere to the surface.
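Here is a minimal Python sketch of that construction. The 9.8 K/km dry adiabatic lapse rate is my assumed value, not a number given above; the fluxes are the ideal values from the last few paragraphs:

```python
# A minimal sketch of the DWLR "source" construction above. The DALR value is an
# assumption for illustration; the fluxes are the ideal values quoted in the text.
SIGMA = 5.67e-8    # W/m^2K^4
DALR = 9.8e-3      # K/m, dry adiabatic lapse rate (assumed)

surface_dwlr = 390.0 - 240.0    # W/m^2, ideal surface generated portion
solar_atm = 60.0                # W/m^2, absorbed directly from the sun
total_dwlr = surface_dwlr + solar_atm

T_source = (total_dwlr / SIGMA) ** 0.25    # effective source temperature, ~247 K
altitude = (288.0 - T_source) / DALR       # ~4.2 km, where the DALR ties it back to 288 K
print(T_source, altitude)
```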
Most of the energy transferred to the surface is in the form of pressure. Warm air, cooled by convection along the lapse rate, descends to the surface in another location, either adding to or reducing the surface temperature. Near the poles, the average effect is warmer surface temperatures, and near the equator the average effect is cooler surface temperatures. The rising and falling air adds energy to the atmosphere in the form of pressure differentials, creating atmospheric circulation patterns that distribute heat gained in the atmosphere to other locations.
Rarely does the DWLR have any direct effect on the surface. Because water in the atmosphere has a radiant spectrum that is not totally blocked by greenhouse gases, some DWLR in the atmospheric window does impact surface temperature directly. If it were not for this rather small amount of the total energy in the atmosphere being radiant and impacting the surface, the term Downwelling Longwave Radiation would not exist. It would be replaced with barometric pressure.
Downwelling longwave does physically exist and it has some impact on the surface. It also has an impact on water and water vapor in the atmosphere. DWLR warms water in all its phases and, being warmed, the water reduces local atmospheric density, creating convection. This convection cools locally and adds to the heat transfer through pressure of the upwelling thermals and the downwelling cooler, denser air after adiabatic cooling. The net result is pressure and temperature migration to the polar regions.
When the water vapor, and to a lesser extent the greenhouse gases closer to the surface, are warmed by DWLR, an increase in surface temperature occurs. This renews the convection cycle, transferring more energy.
So DWLR is the potential energy of the atmosphere in terms of radiant energy that has to obey the laws of physics. Most of the DWLR is converted locally to convective and conductive energy flux, some to latent energy flux, and much of the energy is converted into changes in atmospheric pressure which tend to spread the equatorial warmth poleward.
So defined, DWLR is a useful tool for studying the physics of the atmosphere. Any other definition only leads to confusion.
Where Did Climate Science Go Wrong?
Climate Science, specifically the Theory of Global Warming, is based on assumptions. If those assumptions are valid, the supporting work will be valid, if not?
The first assumption is that our world without greenhouse gases would be 33 degrees cooler. That assumption is based on the assumption that the Earth would reflect just as much sunlight as it does today. That is a valid assumption provided the surface absorbed all the sunlight, or if the top of the atmosphere temperature is used consistently to determine warming. If the top of the atmosphere is used, then the assumption can be made that the atmosphere was so shallow that the surface and atmosphere could be at the same temperature.
That’s what, three assumptions supporting one main assumption?
There is more to heat transfer than photons. If the Earth had nitrogen and oxygen in the same quantities as today, conduction and convection assure that the Earth would have an atmosphere even without greenhouse gases. With oxygen in the atmosphere and ultraviolet energy emitted by the sun, the Earth would be assured of having a tropopause, the temperature inversion in the lower atmosphere. That would mean the surface would be at least 2 degrees cooler than the top of the atmosphere if the tropopause is assumed to be that top.
The Earth also has a wealth of water; water, temperature, equatorial sunlight and atmosphere ensure there would be latent cooling as water evaporated and was carried aloft by convection. That could produce nearly 10 degrees cooler temperatures at the surface than at the top of the atmosphere, roughly 10 of the 33 degrees, or about a 30% error in the Greenhouse Effect portion of the overall atmospheric effect.
Assumption two, the greenhouse effect will uniformly warm the surface with most impact at the poles and upper troposphere. Unfortunately, this is also an incorrect assumption.
In the tropics, additional carbon dioxide has minimal effect on the surface due to the high concentration of water vapor. The carbon dioxide and water vapor radiant effects are more strongly felt above this water vapor, where their downward impact warms the water and clouds below. This increases upper level convection, promoting cooling which partially offsets the warming potential of the Greenhouse Effect. In the northern polar region, where the air is less saturated with water vapor, the additional carbon dioxide enhances warming as predicted. In the southern polar region, where temperatures are much lower, water vapor does not increase significantly with the radiant forcing of additional CO2, reducing the impact of the Greenhouse Effect.
Trying to match observations with these faulty assumptions has led Climate Science down a path of attempting to “Force” data to match expectations instead of matching theory to observations. Not sound scientific practice.
Is the Earth warming due to Anthropogenic Greenhouse Gases? Yes, but it will also cool because of them. The additional greenhouse gases amplify natural variability in some regions and not others. More, and proper, research needs to be done.
redneckphysics.blogspot.com
Monday, October 24, 2011
Oh! Antarctica!
For those of you into light reading, you may have noticed that the polar flux readings are wild and crazy, particularly in the Antarctic. I firmly believe that understanding what is going on way down under is key to figuring out what changes CO2 would make to the atmospheric effect. The numbers available, well, they ask more questions than they answer.
Simplifying the down under looks to be possible by considering the changes in thermal conductivity and kinematic viscosity in the coldest of climates. Basically, the conductive flux in the Antarctic is nearly twice the global average, and CO2 has a small but apparently significant impact on the thermal conductivity. Very interesting. This tends to indicate that flux changes in and above the tropopause have much greater impacts on surface cooling in the Antarctic region. That is nothing new, only that the conductive change still appears to be the main known unknown, and that the Kimoto Equation, using surface and mid-tropospheric temperatures, may provide a simplified method of analyzing impacts that appear to be becoming more obvious each day. For those playing with the Equation, ~0.61Fc and ~0.71Fr are the new Antarctic approximations, with Fl being unknown but much smaller because of the cold, dry environment, using -24.7 C and 600 mb as the tropospheric reference to determine the change in atmospheric effect.
Since the thermal conductivity and kinematic viscosity can be approximated from known surface temperature and pressure, the approximations for the equation should be fairly close. It will still require some tweaking, but it should be possible to estimate surface flux values with reasonable accuracy.
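Here is a minimal sketch of that approximation, using Sutherland-type correlations for dry air with commonly quoted textbook reference constants and the ideal gas law for density. The ~224 K surface temperature and the -24.7 C @ 600 mb reference come from the paragraphs above; the 680 mb Antarctic surface pressure is just an assumed plateau value, so treat the printed numbers as rough.

```python
# Sketch: approximate thermal conductivity and kinematic viscosity of dry air
# from surface temperature and pressure. Sutherland-type correlations with
# commonly quoted reference constants; the exact numbers are approximate.
R_AIR = 287.05        # specific gas constant of dry air, J/(kg K)

def dynamic_viscosity(T):
    """Sutherland's law for dry air, Pa*s."""
    mu0, T0, S = 1.716e-5, 273.15, 110.4
    return mu0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

def thermal_conductivity(T):
    """Sutherland-type correlation for dry air, W/(m K)."""
    k0, T0, S = 0.0241, 273.15, 194.0
    return k0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

def kinematic_viscosity(T, p):
    """mu / rho, with density from the ideal gas law; p in Pa."""
    rho = p / (R_AIR * T)
    return dynamic_viscosity(T) / rho

# Antarctic surface (~224 K from the post; 680 mb is an assumed plateau pressure)
# versus the -24.7 C @ 600 mb mid-tropospheric reference used above.
for label, T, p in [("Antarctic surface", 224.0, 68000.0),
                    ("Reference (-24.7 C, 600 mb)", 248.45, 60000.0)]:
    print(f"{label}: k = {thermal_conductivity(T):.4f} W/(m K), "
          f"nu = {kinematic_viscosity(T, p):.2e} m^2/s")
```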
Basic Reasoning: Thermal Flux Boundaries
Since the Kimoto Equation is a touch novel, somewhat novel approaches may be required to estimate the significance of the physical properties that impact the flow of thermal energy. Boundary layers are well known and mainly applied to conductive and convective heat transfer, with radiant transfer a bit of an afterthought or considered separately.
Looking at the Antarctic data available, the thermal conductivity and convectivity need to be considered separately from the latent flux, i.e., Fc is proportional to F(i) plus F(v), for (i) conductive and (v) convective or viscous flux. This allows the application to be expanded to denser gases and liquids. The latent flux associated with phase change also includes a sensible component, Fl = Fl(l) plus Fl(s). Then, if the atmosphere being analyzed does not include significant phase change, the Fl term can be approximated as zero.
This is beneficial in the Antarctic, where water phase change at the surface is minimal. However, phase change of both water and carbon dioxide cannot be ruled out in higher layers, where microscopic-scale changes are possible at extremely low temperatures with photon interaction, or at extremely high pressures and near-constant temperatures and densities in the deep oceans, the 4C boundary for example.
The equation should be adaptable to all layers and boundary conditions.
This allows some minor trickery to be used. By considering a parallel circuit, so to speak, only one flux at a time can be used to determine an approximate maximum value for that portion of the total flux under local conditions. In the Antarctic, using the simplified form aFc+bFl+eFr with a surface temperature of ~224 K (-49 C), maximum values for a and e can be determined assuming b is small. Then the approximate values of a and e can be compared to the properties of air at local temperature and pressure against a standard reference, -24.7 C at 600 mb, the global average potential temperature and pressure of the atmospheric effect.
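To make the bookkeeping concrete, here is a rough sketch of that one-flux-at-a-time bound. The ~224 K surface temperature and the dry assumption (b near zero) come from the paragraph above; the dF/dT and flux magnitudes are placeholder numbers for illustration only, not Antarctic measurements.

```python
# Sketch of the "parallel circuit" trick for dF/dT = 4(a*Fc + b*Fl + e*Fr)/T:
# let one flux channel carry the whole change at a time to bound its coefficient.
# The dF/dT and flux values below are placeholders, not measured Antarctic data.
T_SURF = 224.0   # Antarctic surface temperature, K (from the post)
DF_DT = 2.0      # assumed total dF/dT at the surface, W/(m^2 K) -- placeholder
FC = 40.0        # assumed conductive/convective flux, W/m^2 -- placeholder
FR = 130.0       # assumed net radiant flux, W/m^2 -- placeholder
# The latent flux Fl is taken as ~0 in the cold, dry Antarctic case (b ~ 0).

def max_coefficient(total_dF_dT, flux, temperature):
    """Upper bound on a coefficient if its flux alone carried the whole change."""
    return total_dF_dT * temperature / (4.0 * flux)

a_max = max_coefficient(DF_DT, FC, T_SURF)
e_max = max_coefficient(DF_DT, FR, T_SURF)
print(f"a_max ~ {a_max:.2f}, e_max ~ {e_max:.2f}")
# These bounds are then compared to the air properties at local temperature and
# pressure against the -24.7 C @ 600 mb reference described above.
```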
As noted previously, this standard reference is subject to change by approximately +/- 4 C due to the uncertainty in the Virgin Earth surface temperature and tropopause altitude. That adds to the complexity of the solution, but it is not insurmountable.
Simple Versus Too Simple
Making the complex simple to understand is the goal of science, of any discipline really. That goal often requires compromises, where one portion of the overall concept is explained by analogy to a commonly understood concept.
Physics uses many basic analogies, such as Carnot engines, equilibrium and adiabatic processes, as foundations even though none may ever exist. They are convenient models of perfection for comparison.
In atmospheric physics, the dry adiabatic lapse rate, where temperature changes with pressure with no gain or loss to the system, is an example of an equilibrium state with perfect energy transfer, a Carnot engine. Perfection does not exist in nature; it can only be approached.
The dry adiabatic lapse rate in Earth’s atmosphere is the combination of the surface temperature, the composition of the gases in the atmosphere, the molecular weight of the gases, the thermal properties of the gases, the gravitational constant and radiant energy interaction with the changing density and composition of gases compressed by gravity. A rather complicated process we on the surface take for granted.
If you are in favor of electrical analogies, the adiabatic lapse rate is an inductive load with a steady-state current. Small changes in current are damped by the properties of the inductor, and rapid changes produce huge changes in the potential energy, or electromotive force, realized across the inductive load.
The electromotive force is provided not by a single source but by several: a conductive battery, a latent battery, a gravitational battery and a radiant battery are the more significant power sources.
The radiant battery is both solar and black body, with cells poorly designed for the task but adequate in steady-state conditions. In steady state, the potential can be determined at different points in the atmospheric circuitry and the total accurately calculated from one connection to the next. That is, if we know the voltage and current into a black box and the current and voltage out of that black box, we can determine, to a point, what circuitry is in the box. With more than one condition, we can better describe the inner circuitry.
The currents from the electromotive sources at the surface are in parallel: Fc, Fl, Fr and F?, for conductive, latent and radiant flux, with the question mark for the ever-present uncertainty. Each of the batteries providing these currents, or fluxes, has cells, Fra, Frb, Frc ... Frn, for example. The subscript letters can be individual wavelengths, associated energies, or combinations of wavelengths and energies that impact portions of the atmosphere.
This is the simplicity of the Kimoto equation, dF/dT = 4(aFc + bFl + cFr + ... + F?)/T, which is derived from Stefan's equation, Fi/Fo = alpha(Ti)^4/alpha(To)^4; that is, the change in energy flux of a body is proportional to the change in temperature of the body at initial temperature T. All the coefficients, a, b, ... n, represent changes to the flux through the atmospheric inductor, or impedance.
Proper use of this simple equation requires proper consideration of the flux values and the ever-present uncertainty.
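A quick numerical check of that derivation: differentiating Stefan's law F = sigma*T^4 gives dF/dT = 4*sigma*T^3, which is the same as 4F/T, and the Kimoto form simply splits F into weighted parallel flux terms. The flux partition and the 288 K example temperature below are purely illustrative.

```python
# Check that dF/dT = 4F/T follows from Stefan's law, and show the flux split.
SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)
T = 288.0                        # example surface temperature, K (illustrative)

F = SIGMA * T ** 4               # Stefan's law
dF_dT_exact = 4.0 * SIGMA * T ** 3
dF_dT_kimoto = 4.0 * F / T       # the 4F/T form used in the Kimoto equation
print(f"4*sigma*T^3 = {dF_dT_exact:.3f}, 4F/T = {dF_dT_kimoto:.3f}  W/(m^2 K)")

# Splitting F into parallel "currents" (values and weights are illustrative only):
fluxes  = {"Fc": 20.0, "Fl": 80.0, "Fr": 290.0}     # W/m^2, placeholders
weights = {"Fc": 1.0,  "Fl": 1.0,  "Fr": 1.0}       # a, b, c coefficients
dF_dT_split = 4.0 * sum(weights[k] * fluxes[k] for k in fluxes) / T
print(f"4*(a*Fc + b*Fl + c*Fr)/T = {dF_dT_split:.3f} W/(m^2 K)")
# With unit coefficients and fluxes summing to sigma*T^4, the two forms agree;
# the coefficients then carry all the atmospheric "impedance" adjustments.
```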
Sunday, October 23, 2011
It Can't Be That Simple
I am working on a little more detailed post, but I have to remind a few people on other blogs that it is not that simple. Some think it's just the unsuspected conductive impact, some just the sun, and others that the mainstream scientists just screwed up. It is not just one thing and it is not just that simple.
The apparent conductive impact in the Antarctic is a part of the puzzle, and an unexpected part certainly. The impact of changes at the ends of the solar spectrum is not as unexpected, given the new information on UV, but the near infrared is just as important. It is the change in the ratio of solar energy absorbed by the atmosphere versus the surface, not just the change in one or the other. What would be a minor impact is amplified, but not enough on its own to drive climate change beyond roughly 0.15 degrees. It is not just the slightly greater than expected non-uniformity in CO2 mixing, but that plays a role as well. Synchronization with natural variability is also involved.
So there is not much that is simple about the system, just interesting, larger than expected variations in known issues that still require a boost, just not as large a one.
While I am working on organizing things, the post "Therefore Water Absorbs Almost No Sunlight" may be interesting as far as the solar UV and the shorter shortwave are concerned. That is only a small portion of the total solar absorbed, but it is absorbed in a very important layer, below the 30 meter thermocline and to a point below the 100 meter thermocline. Most of that post was based on Woods Hole research.
Putting things together starts with Another Shot at Explaining the Atmospheric Effect. Most of the difference I am getting in the sensitivity to CO2 is due to the initial estimate of the total radiant portion of the atmospheric effect being overestimated by 1 to 2 degrees with the initial albedo assumption of 30% at the surface. That appears to not have been a very good assumption.
For the insomniacs in the crowd, The Energy Budget of the Polar Atmosphere in MERRA is a nice light read. There are pretty significant discrepancies that more than cover the conductive issue I am trying to quantify. The devil is in the details, but adapting the equation should shed some light on the issue. Poor satellite coverage is not helping at all. There is a significant lead of surface change over downwelling change, though, which is interesting.
Saturday, October 22, 2011
The New Blog
I have been out of work; it is the off season in the Florida Keys and the economy does suck a bit. So I have been occupying my time dabbling in physics. Not the hard stuff, yet. Just the atmospheric physics stuff. You know, like is the globe warming, where and why, kinda stuff.
It is pretty interesting, but my other blog is loaded with all kinda things that aren't really physics. So I am starting this one to sort out the more scientific stuff. Being totally mercenary, I have installed a donate button up top for a new computer. Mine passed recently; hard drives are not too reliable in the humidity of the Florida Keys. So if that scares you off, ta ta, don't let the blog hit you in the butt.
One of the projects, the conductive impact of carbon dioxide on Antarctic climate, aka Another Shot at Explaining the Atmospheric Effect, is on my other, alternate energy, blog. That blog seems to rub people the wrong way. I just happen to think Hydrogen is cool and makes sense if the price is right. Most science blog reader types are opinionated buttholes, I am no exception BTW, but I would rather Hydrogen not be a distraction. Being a redneck, in the eyes of the intellectual community, is distraction enough.
While most of the work I am doing doesn't require much more than paper and pencil, there are a few databases that require a little more than a borrowed netbook. Since I need to compile a fairly large database of surface temperatures and pressures to compare to mid-tropospheric temperatures and pressures, a new computer would be nice. Not that I really need it, it would just speed things up.
Now before you decide whether I am a warmer, skeptic, denier, rejectionist, or any other overused label, I am curious, that's about it, just curious. Classical physics accomplished quite a bit before the new modeling craze and I am sure classical physics still kicks butt. However, it is hard to overcome the virtual surreality created by computer modeling, but I do believe I am close.
Now, the picture of the climate future with more carbon dioxide is not all that clear. There are a lot of feedbacks with various time delays, which are poorly illustrated in What the Heck is the 4C Boundary. At the risk of scaring potential readers, and hopefully donors, off, the various thermal boundaries are key to resolving as much of the feedback picture as possible. A rather unproven, at atmospheric temperatures and pressures, application of Relativistic Heat Conduction appears to have a great deal of potential in that respect.
The largest hurdle is mainstream climate science's poor visualization of the overall problem. They, the mainstreamers, seem to have completely lost their frame of reference and, as a result, are up that smelly creek without a paddle.
If you care to follow, or even participate in, resolving a very interesting puzzle, climate change appears to be the first in what may be a series of fun times.
So I am off to revise some older posts and move them over to their new home. I will add a link to the old blog until things are better organized.
Computers are a valuable tool, provided the problem is properly described and converted into code or input. As thermal boundary layers appear to be the largest cause of uncertainty, existing theories like Prandtl's boundary layer theory, Poisson's equations and Stefan's law are reasonable building blocks for the foundation of the program. Since my initial observations indicate the Antarctic is a strong driver of climate stability, using average initial values based on the Antarctic should reduce conversion error. When compared to tropical initial values, any discrepancies should be exposed early. The results so far, based on average global values, appear acceptable; fine tuning to global extremes would be my logical next step.
So here is the first outline of the problem:
Four major vertical boundary layers, the 4C boundary, the surface/air boundary, the tropopause and the top of the atmosphere (TOA), with three major horizontal zones, the Arctic region, the tropical region and the Antarctic region.
This is the simplest model of a complex fluid dynamics problem.
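For what it is worth, here is a minimal sketch of that outline as a starting data structure for the program. The layer and zone names come straight from the outline above; the reference temperature and pressure fields are empty placeholders to be filled from the compiled surface and mid-tropospheric database, and nothing here is meant as more than scaffolding.

```python
# Sketch of the problem outline as a starting data structure for the program.
# Layer and zone names follow the outline above; the reference fields are
# placeholders to be filled in from the temperature/pressure database.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Boundary:
    name: str
    reference_temperature_K: Optional[float] = None   # to be filled from data
    reference_pressure_mb: Optional[float] = None     # to be filled from data

@dataclass
class Zone:
    name: str
    boundaries: List[Boundary] = field(default_factory=list)

VERTICAL_BOUNDARIES = ["4C", "surface/air", "tropopause", "top of the atmosphere (TOA)"]
HORIZONTAL_ZONES = ["Arctic region", "Tropical region", "Antarctic region"]

model = [Zone(zone, [Boundary(layer) for layer in VERTICAL_BOUNDARIES])
         for zone in HORIZONTAL_ZONES]

for zone in model:
    print(zone.name, "->", [b.name for b in zone.boundaries])
```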
Thursday, October 20, 2011
Silly Quotes
Stolen from somebody:
Redneck Philosophy. "I have been doin' so much, with so little, for so long, I can do dang near anything with nothin'!"
"People like to feed their pet theories. I don't keep pets, they tend to die on ya'."
"Einstein said the Universe is simple. He was a friggin' genius though."
"I would rather not do complicated, let's keep it simple stupid!"
"The puzzle is only finished when all the pieces fit without using the hammer."
"Smart folks can be dumb too!"
"I never metaphysics I didn't like, some just are more fun and actually work."
Keys teeshirt: "Down on vacation, back on probation."
These may be original, "K&T attempted to do an Energy Imbalance, and succeeded."
"Non-Ergodic systems aren't really chaotic, they are just misunderstood."
"The more horsepower you have, the more trouble you can get into. That's redneck for entropy."