New Computer Fund
Sunday, September 28, 2014
What is the "Right" Solar Reconstruction?
I have played around with solar data a few times and found some interesting things, but nothing very persuasive, considering no one is in any kind of agreement on how solar really should be handled. My biggest issue is that solar has its strongest long-term impact in the near-tropical oceans, roughly 40S to 40N, and its short-term impact in the NH land areas from around 25N to 60N. There is also a "year" issue. Since a solar year is not the same as an Earth year, and months are a pretty Earth-specific time frame, you get a better correlation between solar and the tropical ENSO region using a 27 month lag, which would likely be better still with some fraction of a month. Using daily data is a huge issue for my laptop, but it would be the best way to go. Any time you try to determine correlations between averaged data sets there will always be some issue that could be extremely critical if you are talking with the serious statistical guys.
Solar "TSI" is a bit of an issue also. You really should use spectral intensities which once you add that to daily or some number of hour records gets to be a huge database for the laptop tinkerer. On top of that, solar reconstructions require different scaling factors due to issue with different data types and lower atmosphere solar has a lot of background "noise" or maybe signal that could be related to atmospheric optical depth. It is one big ass can of worms.
Still, it is hard to avoid tinkering with solar correlations despite the issues. I did the above chart just for grins. The maximum correlation I could find between the solar "TSI" anomaly and the "tropical" ocean SST (30S-30N is a bit more than just tropical) is 56%, which is nothing to write home about. It is probably just enough to inspire more looking, but not enough to convince anyone of anything.
The 4 year lag used to tweak the correlation is about half of the settling time Steve Schwartz estimated for the bulk ocean layer. To me that would suggest there is a fast response plus a lagged response of about equal intensity. I have combined different TSI lags in the past and can get just about any correlation I like, up to about 87% or so, depending on which surface temperature data set and area I pick. That gives me way too many degrees of freedom to trust. All is not lost though.
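For the tinkerers, here is a minimal sketch of the kind of lag scan behind numbers like that 56%, assuming two equal-length monthly anomaly arrays; the toy series at the bottom are made up purely to show the scan recovering a known lag, not any real TSI or SST product.

```python
import numpy as np

def lag_correlation(tsi, sst, max_lag=60):
    """Scan monthly lags and return the one with the strongest
    TSI-leads-SST correlation.  Both inputs are assumed to be
    equal-length monthly anomaly arrays."""
    best_lag, best_r = 0, np.corrcoef(tsi, sst)[0, 1]
    for lag in range(1, max_lag + 1):
        r = np.corrcoef(tsi[:-lag], sst[lag:])[0, 1]
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r
    return best_lag, best_r

# toy check: a noisy "SST" that lags a ~11 year sine "solar cycle" by 27 months
rng = np.random.default_rng(0)
months = np.arange(600)
tsi = np.sin(2 * np.pi * months / 132)
sst = np.roll(tsi, 27) + 0.5 * rng.standard_normal(600)  # wrap-around at the start is ignored
print(lag_correlation(tsi, sst))  # should report a lag near 27
```

With averaged monthly data the scan only resolves whole-month lags, which is exactly the fraction-of-a-month problem mentioned above.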
Since there are quite a few published papers noting links between ENSO and reconstructed TSI, the average tinkering Joe cannot be classified as a complete whack job, because he has "professional" company. The trick is coming up with a compelling rationalization .. er, explanation, of why you used what in your "analysis".
I am sticking to my multiple lag theory because ocean heat transport would support a number of lags, and sub-surface insolation highlights the tropicalesque ocean influence on long-term climate. So instead of using land and ocean temperature data, ocean only should be the focus, with a possible hybrid temperature data set of ocean plus 8-year-averaged land. That would basically incorporate the ocean settling lag, making the "global" temperature series a bit less noisy without having to cherry pick an averaging period.
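A rough sketch of how that hybrid might be built, assuming monthly ocean and land anomaly columns in a pandas DataFrame; the column names and the 70/30 weighting are placeholders of my choosing, not a recommendation.

```python
import pandas as pd

def hybrid_series(df, ocean_col="ocean", land_col="land", window=96):
    """Blend an ocean anomaly series with an 8-year (96 month)
    rolling-mean land series, roughly area weighted 70/30.
    Column names are placeholders for whatever data set is used."""
    land_smooth = df[land_col].rolling(window, center=True,
                                       min_periods=window // 2).mean()
    return 0.7 * df[ocean_col] + 0.3 * land_smooth
```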
This is what MAY be my choice of data sets for comparison to solar. Note that I used a pre-CO2 baseline, since ACO2 supposedly didn't kick in until ~1950. I used two different scales just for this comparison to show that my cherry picked, er, selected regions tend to agree. I used the GISS 1250 km interpolated data, though the 250 km data would normally be my choice; since the highest latitudes aren't included there isn't much difference. This chart is just to show that using only the 30S-30N tropical ocean, I am not losing touch with global temperatures as much as some might think. The tropical ocean is the major source of global energy after all.
There is a good chance this is as far as this will ever go unless I bite the bullet and download daily data. In case you are wondering, the excluded high latitudes contribute about 18.5 Wm-2 (north) and 13 Wm-2 (south) to the "global" effective energy. The rough average effective energy of the 60S-60N region is ~410 Wm-2, so the exclusion would add about 7.5% to the energy uncertainty versus about 12.5% to the temperature anomaly uncertainty. One of my pet peeves about manufacturing temperature data in the polar regions is that it artificially doubles the impact in some cases.
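Those percentages are just arithmetic on the quoted numbers, easy to check:

```python
north, south = 18.5, 13.0   # excluded high-latitude contributions, Wm-2
core = 410.0                # rough effective energy of the 60S-60N region, Wm-2
print((north + south) / core * 100)  # ~7.7%, close to the ~7.5% quoted above
```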
Thursday, September 25, 2014
Reservoirs, Sinks and Weird Systems
Now that Lewis and Curry (L&C) have a new climate sensitivity paper out based on the boring old HADCRUT4 temperature data set, the krigers are concerned that the ~0.064 K difference between the kriged data sets (BEST and C&W) and HADCRUT4 might make a significant difference in the L&C results. The above drawing should illustrate the difference.
Up top you have a closed system where the average temperature more accurately represents the thermodynamic temperature of the reservoirs, hot and cold. Below you have an "Idunno" system where the average temperature could depend on the boundary selection. If you leave a big enough gap so that each average temperature represents the average energy of its reservoir, you reduce the size, i.e. heat content, of the reservoirs. If you get the total heat capacity right, you can blur or smear the average temperature so that it is less likely to represent the average energy. In other words, there would be some unknown amount of internal sinking.
Thermodynamics allows the luxury/burden of selecting reservoirs, frames of reference, that are most likely in something very close to thermodynamic equilibrium. Then all the laws of thermodynamics apply, provided your frames of reference are close enough to equilibrium and/or steady state that there isn't a lot of unknown energy transfer.
In my opinion, if you don't know for sure, pick another or several other frames of reference so you can compare and contrast the results of the various frames. That is perfectly logical to me, but then who am I really?
If I can only get the results I want in one frame, then I might be wrong. To defend my choice, I would have to cleverly make up excuses for everything that happens that should not happen. Remind you of anything?
Now if the krigers do publish some new results, let's see which crew has more excuses :)
Sunday, September 21, 2014
"Believers and "Hoaxers" will likely Attack Steve E. Koonin's Position on Climate Change
Steve E. Koonin, a theoretical physicist of some standing in the community, has an essay in the Wall Street Journal on climate change aka global warming, entitled Climate Science Is Not Settled. It could have been called Climate Science Is Not Settled, Nor Is It a Hoax. Then it would be easier to understand why it will catch flak from both extremes of the climate change debate. If you are trying to figure out which factions are clueless in the debate, just look for the Koonin bashers. In the essay Koonin lists three challenging fundamentals.
The first: "Even though human influences could have serious consequences for the climate, they are physically small in relation to the climate system as a whole. For example, human additions to carbon dioxide in the atmosphere by the middle of the 21st century are expected to directly shift the atmosphere's natural greenhouse effect by only 1% to 2%. Since the climate system is highly variable on its own, that smallness sets a very high bar for confidently projecting the consequences of human influences."
If you consider that the "normal" greenhouse effect produces a lower atmospheric average temperature of about 4 C, the effective energy of a "normal" GHE would be about 334 Wm-2. A 1% increase would be 3.4 Wm-2 and a 2% increase would be 6.8 Wm-2. That is roughly the range of impact based only on the CO2 portion of the anthropogenic changes to the atmosphere. In terms of temperature, the "average" change would be 0.7 C for the 1% impact and 1.4 C for the 2% impact. Since this is based only on the CO2 change, these would be "no feedback" estimates for the greenhouse effect.
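For anyone who wants to check that arithmetic, a minimal Stefan-Boltzmann sketch; the 277.15 K input is just the ~4 C figure above.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, Wm-2 K-4

T = 277.15                               # ~4 C, the "normal" GHE temperature above
F = SIGMA * T ** 4                       # effective energy at that temperature
dT_per_W = 1.0 / (4.0 * SIGMA * T ** 3)  # no-feedback slope, K per Wm-2

print(round(F, 1))                    # ~334.5 Wm-2
print(round(0.01 * F * dT_per_W, 2))  # 1% shift -> ~0.69 C
print(round(0.02 * F * dT_per_W, 2))  # 2% shift -> ~1.39 C
```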
The second: "A second challenge to 'knowing' future climate is today's poor understanding of the oceans. The oceans, which change over decades and centuries, hold most of the climate's heat and strongly influence the atmosphere. Unfortunately, precise, comprehensive observations of the oceans are available only for the past few decades; the reliable record is still far too short to adequately understand how the oceans will change and how that will affect climate."
There is currently some controversy surrounding the lower than anticipated rise in "average" global surface temperatures. This hiatus, pause, slowdown or hiccup has been attributed to a variety of potential "causes", but the most recognized is a change in the rate of ocean heat uptake. Since the "average" energy of the global oceans would be related to their "average" temperature, which is about 4 C, the no-feedback impact on the global oceans should be about the same as the no-feedback impact on the lower troposphere, i.e. on the global average Down Welling Longwave Radiation (DWLR), roughly estimated at 334 Wm-2. Challenges one and two are likely linked.
The third: "A third fundamental challenge arises from feedbacks that can dramatically amplify or mute the climate's response to human and natural influences. One important feedback, which is thought to approximately double the direct heating effect of carbon dioxide, involves water vapor, clouds and temperature."
Atmospheric water vapor and clouds are directly related to the ocean and lower troposphere temperatures, absolute temperatures not anomalies, so water vapor and cloud "feedback" would be related to any cause of temperature change, not just changes "caused" by CO2. The third challenge is directly related to the first two challenges.
If you follow the "believers" of dangerous Anthropogenic Global Warming which has been repackage with terms like "climate change" or "climate disruption", they will most likely point out fallacies in Koonin's essay that are related to "beliefs" that all warming and all feedbacks are due to anthropogenic "causes". If you follow the "hoaxers", they will argue that the "physics" violates some law of thermodynamics or that there is no direct, indisputable measurement of CO2 impact.
Koonin's essay should piss both extremes off equally, which is, in my mind, a great scientific and social evaluation of the issue. So anyone who vehemently disagrees with Koonin is likely a whack job or has a political ax to grind.
Those less agenda driven will notice that there are three "states" that would need to be in thermodynamic equilibrium for "standard" physics to easily apply: the ocean temperature would have to be in equilibrium with the lower atmospheric temperature, while atmospheric water vapor and cloud conditions would have to be in equilibrium with both of the other two.
"If a body A is in thermal equilibrium with two other bodies, B and C, then B and C are in thermal equilibrium with one another" is a simple way to state the zeroth law of thermodynamics. That would be the only "law" of thermodynamics that might be violated in the climate change debate. What it boils down to is that you have to know the "normal" condition of the atmosphere, oceans and cloud cover if you are going to determine the impact of any change in any of the "initial" conditions. If you pick a variety of "initial" conditions and get a variety of answers that are inconsistent, then you didn't meet the Zeroth Law equilibrium requirements or your theory is wrong. The smaller the range of inconsistencies, the less wrong you are likely to be. The first estimate, the 1% to 2% "no feedback" or all-other-things-remaining-equal estimate of 0.7 to 1.4 C, is the one to beat.
Tuesday, September 16, 2014
Those Pesky Clouds
There is a new article out on those pesky Arctic clouds that climate models and Earth Energy Budget estimates don't even come close to getting right. This is the Abstract:
Abstract
This study demonstrates that absorbed solar radiation (ASR) at the top of the atmosphere in early summer (May–July) plays a precursory role in determining the Arctic sea ice concentration (SIC) in late summer (August–October). The monthly ASR anomalies are obtained over the Arctic Ocean (65°N–90°N) from the Clouds and the Earth's Radiant Energy System during 2000–2013. The ASR changes primarily with cloud variation. We found that the ASR anomaly in early summer is significantly correlated with the SIC anomaly in late summer (correlation coefficient, r ≈ −0.8 with a lag of 1 to 4 months). The region exhibiting high (low) ASR anomalies and low (high) SIC anomalies varies yearly. The possible reason is that the solar heat input to ice is most effectively affected by the cloud shielding effect under the maximum TOA solar radiation in June and amplified by the ice-albedo feedback. This intimate delayed ASR-SIC relationship is not represented in most of current climate models. Rather, the models tend to over-emphasize internal sea ice processes in summer.
Ever since I noticed the Earth Energy Budget screw up, I have focused on mixed phase clouds, the "atmospheric window" and the very simple approximation of incident solar radiation. Liquid topped mixed phase clouds produce a radiant "ground plane" of sorts that really should be treated as a different "surface". The models and the "experts" mention that sea ice melt would decrease the albedo of the polar oceans, allowing greater ocean heat uptake. That is true, but direct solar isn't really the issue in the Arctic due to the low solar angle of incidence. Thanks to the atmospheric lens effect and the liquid water surface of the mixed phase clouds, how the lensing, mixed phase cloud area and sea ice area all interact to increase or decrease Arctic ocean heat content is one hell of a nifty puzzle. Older papers I have perused indicate around 18 Wm-2 of uncertainty, which is about what I estimated from the older K&T energy budgets. The thesis of soon-to-be if not already Dr. Barrett indicated that "global" liquid topped mixed phase clouds cover around 7.8 percent of the surface at any given time. For a quick estimate, that could produce around 15 Wm-2 +/- 10 of uncertainty, which as you can see is a pretty large WAG. It could be that mixed phase clouds account for the majority of the model error. Don't know of course, but it is in the ballpark.
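To be clear about how rough that WAG is, here is the whole calculation; the ~190 Wm-2 per-area radiative effect is my assumed round number chosen to illustrate the scale, not a measured value.

```python
coverage = 0.078         # Barrett's liquid-topped mixed phase cloud fraction
per_area_effect = 190.0  # assumed Wm-2 radiative impact where such clouds sit
print(coverage * per_area_effect)  # ~14.8 Wm-2, the middle of the 15 +/- 10 WAG
```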
The models wouldn't just rely on the crude estimates tossed around by the online groupies. They would attempt to use actual incident radiation from both the sun and the clouds, provided they get the clouds right. Mixed phase clouds they obviously don't get right, but it looks like the other types of clouds might not be as far off as some suspect.
I haven't sprung to read the article, but once I can find a free copy I probably will. Until then, Watts Up With That, The Hockey Schtick and a few other more skeptical sites have reports that I really can't confirm. Anywho, the interesting thing to me is that the models might be repairable. The question, though, is whether the modelers will actually read and incorporate these newer developments or just continue with their old song and dance. Since that might involve eating some crow, I wouldn't hold my breath.
Does "Surface" Temperature Confuse the Hell Out of People?
Pretty much. This chart compares the difference between the GISS LOTI product and the RSS lower troposphere product. The GISS product interpolates across large regions of the poles that have very limited coverage, and the RSS product doesn't include a small portion of the poles due to orbital/imaging limits. The most obvious issue is that the northern hemisphere comparison is diverging significantly while the southern hemisphere agrees very well. Most of the difference is likely due to polar interpolation of very low temperatures that have much less related energy than the mid and lower latitude temperatures. This goes back to the zeroth law issue: the greater the temperature range and variation of the data being averaged, the less likely the average temperature will faithfully represent the average energy.
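The effect is easy to demonstrate with made-up numbers: because emission goes as T^4, the energy of the average temperature is not the same as the average energy once the temperature spread gets large.

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, Wm-2 K-4

# made-up polar-to-tropical sample temperatures, kelvin
T = np.array([243.0, 273.0, 288.0, 300.0])

energy_of_mean_T = SIGMA * T.mean() ** 4  # convert the average temperature
mean_energy = (SIGMA * T ** 4).mean()     # average the converted temperatures

print(energy_of_mean_T, mean_energy)  # ~329 vs ~341 Wm-2; the spread creates the gap
```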
This has been mentioned so often that it is one of those issues people refuse to consider because it has been "settled". Settled means it has to become so glaringly obvious that the geniuses cannot possibly defend their simplistic dismissals. So if you bring it up too soon, you can expect a psychological analysis of your online history, and likely a few more serious insults if you are in a position of scientific authority.
Saturday, September 13, 2014
Perspective
What if there were no thermometers only Wattmeters on Earth? Instead of people inventing temperature measuring devices, they jumped right to energy. Then we would have bank signs like 405 W 12:01 pm. When you see 315 W 6:00 pm, you know to protect your plants and pets, it's gonna get low energy tonight. We would keep on keeping on with our linear little lives and physics would be a cakewalk in high school. Just about everything would appear linear relative to our exponentially skewed perception.
That is a graph from the KNMI Climate Explorer website. A while back I went to a lot of trouble downloading the Reynolds Optimally Interpolated v2 data by 5 degree bands of ocean. Then I converted each band into "effective" energy using the Stefan-Boltzmann relationship to plot a "global" SST "energy" profile. Today I decided to do something a little different and determine an "average" global energy using the actual ocean area of each 5 degree band, and I came up with about 410 Wm-2. That's great for me, but let's say I have an out-of-planet visitor who would really like to see the temperature instead of the energy. He is from one of the slower planets on the far side of the Milky Way. So I plot him out a chart of temperature using my energy data.
It looks about the same, but it isn't quite. My average temperature is about 18.8 C and his is about 18.3 C. He gets a bit confused, but I tell him temperature really isn't a big thing around here because it is really all about the energy. "But there are much larger changes in the temperature!" he says. "So?"
If I didn't include the surface area adjustment, the average energy would have been around 390 Wm-2, which is close to the "average" energy of the surface. Land and sea ice don't store much energy, so they vary all over the place. The area between latitudes 60S and 60N is close to 87% of the total surface and has most of the energy. That would be the dog, the polar regions the tail.
If I crop the tails a bit, his temperature and my energy converted to temperature look about the same. Mine still is a bit less variable, but the averages are in the same ballpark. The tails (poles) just add to the confusion in the conversion process.
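For anyone who wants to try the banded conversion, a minimal sketch assuming you already have a mean SST for each latitude band; band areas on a sphere are proportional to the difference in sines of the bounding latitudes, and the band values below are made up.

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, Wm-2 K-4

def band_weights(edges_deg):
    """Relative surface area of latitude bands on a sphere,
    proportional to the difference in sines of the band edges."""
    return np.diff(np.sin(np.radians(np.asarray(edges_deg))))

def mean_effective_energy(band_sst_c, edges_deg):
    """Area-weighted mean of sigma*T^4 over the latitude bands."""
    w = band_weights(edges_deg)
    T = np.asarray(band_sst_c) + 273.15
    return np.sum(w * SIGMA * T ** 4) / np.sum(w)

# toy usage with three coarse made-up bands: 60S-20S, 20S-20N, 20N-60N
print(mean_effective_energy([16.0, 27.0, 17.0], [-60, -20, 20, 60]))
```

Weighting sigma*T^4 band by band and then averaging is not the same as converting the averaged temperature, which is the whole point of the wattmeter exercise. And since the oceans are only about 70% of the surface, the quoted 410 Wm-2 scales to roughly 0.71*410 = 291 Wm-2 over the full "surface", as in the comparison further down.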
That plot is the "global" lower troposphere temperature in Kelvin. the average temperature is about 271 or about -2C degrees or about 309 W m-2 "effective" energy. Peak to valley it varies more than three degrees. Start to finish, not so much. The interesting thing about the RSS MSU method is they start with and energy indication, brightness I believe, then convert that energy indication into temperature. It they produced an "effective" energy product, then you would have a direct comparison to changes in "forcing" without all the conversions.
Since the average "effective" energy of the oceans is about 410 Wm-2 but the oceans only cover about 70% of the "surface", at the lower troposphere which is suppose to cover 100% of the "surface" the ocean energy would be reduced to 0.71*410=291 Wm-2 which is little less than the "effective" energy of the RSS data, that really doesn't quite cover the globe, nor does the Reynolds data, plus you always have that silly sea ice issue of course.
In a wattmeter world, though, I doubt we would have as much of a "global" warming problem.
Friday, September 12, 2014
Fight Night - Round 2
When clouds form they can become ice clouds or water clouds, which should depend mainly on temperature/altitude. There would be a transition region where the clouds could be mixed phase. Depending on how much of the clouds are mixed phase and how long they remain in a mixed phase, the transition could be negligible or not. Then, whether or not some additional treatment other than classical cloud condensation/nucleation models is needed really depends on how much mixed phase clouds impact model precision.
I like referencing masters and doctoral theses because the candidates generally explain things in greater depth.
"This sensitivity of climate models to mixed-phase cloud specification has been confirmed by other papers; Senior and Mitchell (1993) used 4 different cloud schemes in a GCM and this gave values for the climate sensitivity parameter, λ, between 0.45 and 1.29 ◦C (W m−2)−1. It was also reported that the simulation with the cloud water scheme had a negative cloud feedback as temperature increased as less cloud water was in the ice phase and therefore did not precipitate. This effect was further increased when an interactive radiation scheme was included that treated liquid and ice cloud separately (Senior and Mitchell, 1993). Also, by changing the range of temperature in which mixed-phase clouds can exist significantly changed the radiation budget (Gregory and Morris, 1996). GCMs are also sensitive to the cloud altitude and how the liquid and ice are assumed to be mixed within the cloud as the cloud albedo is very dependent on the phase of the cloud condensate (Sun and Shine, 1994, 1995)."
From the Thesis of Andrew Barrett at the University of Reading in the UK. It would appear that the impact of mixed phase clouds is significant enough to consider.
Mixed phase clouds tend to be liquid topped, producing a supercooled liquid water radiant surface at a colder temperature while the ice particles fall below the liquid top. Unexpectedly, these mixed phase clouds can persist for many hours, and potentially days, at colder temperatures than "classical" treatments would indicate.
This issue, by the way, is the reason this blog exists. When I first noted the discrepancies in the Kiehl and Trenberth Earth Energy Budgets, the missing energy was due to clouds reducing the 40 Wm-2 atmospheric window to about 22 Wm-2, an 18 Wm-2 error compared to a potential 3.7 Wm-2 "forcing". It should be so obvious it cannot be ignored, but never underestimate how stubborn academics can be.
webster, the king of stubborn, writes this, "In the text, they applied the theory of Bose-Einstein statistics to model the condensation (8.2.3) and freezing (8.3.2) nucleation rates of water vapor. Their theory competes with the classically described mechanism that occurs in the creation of clouds, which is a nucleation of water vapor into water droplets (low clouds such as cumulus) or ice crystals (high altitude clouds such as cirrus)." That is in his appropriately titled Crackpots Etc. post. He must think that ignoring the problem of mixed phase clouds and slinging insults will make them disappear.
Thursday, September 11, 2014
Fight Night - Max and Boltz versus the Bose and 'Steiner - Be there or be indistinguishable
A mind is a terrible thing to waste, they say. With that in mind, I wander around a few "sciency" blogs, websites and fora in an attempt to knock some of the cobwebs out of the corners of my mind. Sometimes the visits lead me to believe that there may not be any intelligent life left on Earth. One recent example is a wild, completely irrelevant discussion of a suggestion in a new textbook. The suggestion was that Bose-Einstein statistics might be useful when attempting to describe cloud formation at low temperatures. Clouds tend to be a beautiful PITA at times since they appear to have their own personal drummer. On Earth most clouds are made of water in its various phases, solid, liquid and gas, with the solid and liquid being the parts we generally see. When water vapor becomes liquid, it is considered to condense. When water vapor becomes ice, it is considered to nucleate. When ice changes directly to water vapor, it is considered to sublimate. When water is well below its freezing point it is considered supercooled, and when water vapor is present in a greater amount than the saturation pressure allows, it is considered supersaturated. Predicting the path that water vapor will take in a cloud based on just temperature and/or energy gets to be a bit difficult. If you can't predict which path it will follow, you may be able to determine a probability that it will follow a particular path, which gives you some idea how likely you are to pick the right path. If the way you try to predict tends to be useless, or close to useless, you might want to consider a different approach.
To better understand why you might want to try a different approach, it would be nice to understand a few of the existing approaches. So I went looking for a fairly simple description of the applicable statistics and found this:
For a single one-particle state with energy $E$ and chemical potential $\mu$, the probability that the state holds $n$ particles is

$$ p_n \propto e^{-n(E-\mu)/kT}, $$

where I used the Boltzmann distribution. However, for fermions the Pauli exclusion principle only allows $n = 0$ or $n = 1$, so the probabilities that the number of particles in the given one-particle state is equal to 0 or 1 must satisfy

$$ p_0 + p_1 = 1, \qquad \frac{p_1}{p_0} = e^{-(E-\mu)/kT}. $$

These conditions are obviously solved by

$$ p_1 = \frac{1}{e^{(E-\mu)/kT} + 1}, \qquad p_0 = 1 - p_1, $$

which implies that the expectation value of $n$ is the Fermi-Dirac distribution,

$$ \langle n \rangle = p_1 = \frac{1}{e^{(E-\mu)/kT} + 1}. $$

The calculation for bosons is analogous except that the Pauli exclusion principle doesn't restrict $n$, so $n = 0, 1, 2, \ldots$ and

$$ \sum_{n=0}^{\infty} p_n = 1, \qquad \frac{p_{n+1}}{p_n} = x \equiv e^{-(E-\mu)/kT}. $$

These conditions are solved by

$$ p_n = (1 - x)\, x^n. $$

Note that the ratio of the adjacent probabilities is always the same factor $x$. The expectation value is

$$ \langle n \rangle = \sum_{n=0}^{\infty} n\, p_n $$

because the number of particles, an integer, must be weighted by the probability of each such possibility. The denominator is still inherited from the denominator of the normalizing geometric series, $\sum_n x^n = 1/(1-x)$. Don't forget that

$$ \sum_{n=0}^{\infty} n\, x^n = \frac{x}{(1-x)^2}. $$

This result's denominator has a second power. One of the copies gets cancelled with the denominator before, and the result is therefore

$$ \langle n \rangle = \frac{x}{1-x} = \frac{1}{e^{(E-\mu)/kT} - 1}, $$

which is the Bose-Einstein distribution.
I made the whole explanation a link to the comment of Luboš Motl, who could be described as a quantum physics junkie's go-to dealer.
Given the tools Luboš describes, how could one possibly use Bose-Einstein statistics for water vapor?
One simple example would be to consider the limits of T as they relate to water. At -43 C, water is supercooled to the point that it will change to ice, but there is a fairly large time window of hours. So if you replace T with (T - T(-43C)), you would be indicating that the probability that water will be in its solid state is 100% at -43 C. However, since that isn't really true because of the time window, you could pick a lower temperature that makes the probability more likely for a smaller time window, say minus 50 C for example. That could be useful even though it is not completely true.
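A toy sketch of that shifted-temperature idea, nothing more; the -43 C shift is the replacement described above, and the energy scale is a made-up number for illustration, not any established cloud parameterization.

```python
import numpy as np

def be_occupancy(E_over_k, T):
    """Bose-Einstein mean occupation 1/(exp(E/kT) - 1), with the
    energy expressed in kelvin (E/k) and chemical potential set to 0."""
    return 1.0 / np.expm1(E_over_k / T)

T_FREEZE = 230.15  # -43 C in kelvin, the homogeneous freezing point used above
E_OVER_K = 100.0   # made-up energy scale in kelvin, purely for illustration

for T in (265.0, 250.0, 240.0, 232.0):
    T_eff = T - T_FREEZE                     # the (T - T(-43C)) replacement
    print(T, be_occupancy(E_OVER_K, T_eff))  # occupancy collapses toward -43 C
```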
The suggestion boils down to this: if the "accepted" methods don't provide enough accuracy, consider options like Bose-Einstein statistics. That created a tempest in a teacup. Personally, I found the suggestion interesting, and the controversy inspired me to review tidbits lurking behind some of the cobwebs.