New Computer Fund

Sunday, November 13, 2011

Building a Better Model

Just about everything I have looked at recently is because I think I can build a better atmospheric model. Two dimensional modules, or kernels, in the models have their place, but unless the areas used in the two dimensional representations somewhat match the actual three dimensional relationships, they will fall apart early. Assigning the right areas may not be that complicated as long as there are reasonable checks to determine the expected degree of error.

Balancing forces, which to me seems likely, would be part of the system of checks. In local equilibrium, that term that frightens so many, forces would be balanced in all directions.

For example, some degree of surface warming would be balanced by some degree of atmospheric warming, which would balance the degree of apparent TOA emission.

Using the assumption that the Earth is 30 degrees warmer due to our atmosphere, 15 degrees of that warming could be balanced by 15 degrees of atmospheric warming, balanced in turn by 15 degrees of emission at the initial application of the thermal energies. One would feed back on the other depending on the properties of the media, which would change with the changes in energy flux. Since the surface can be assumed to have a one dimensional impact, i.e. it would not radiate to space other than upwardly, it would be my choice of a frame of reference in the thermodynamic sense of the term.

While this choice of 30 degrees is arbitrary, properly considering the properties of the media, air decreasing in density and space having a relatively constant transmittance, the final values should approach reality.

The atmospheric initial value of 15 degrees would be felt in all directions. In a flat plate, the horizontal components would balance. As the plate curved to match a segment of the spherical shell it represents, the horizontal components would begin to need consideration. Where that point falls would depend on the allowable magnitude of error. By allowing the area of the atmospheric segment to increase proportionally with the separation of that segment from its corresponding surface segment, the utility of the model should be extended somewhat.

Since the initial transmittances are unknown, an approximation in one layer would approach a value in another layer, causing the next layer to approach its resultant value. Only when all layers are properly balanced is the accurate value of the transmittance discovered.

For an initial approximation of 50% transmittance, 7.5 K would be felt at the surface due to the initial 15 degrees in the atmosphere. Half of that would be returned to the atmospheric layer, which would pass half and return half of that impact from the surface. This would be consistent with the Planck relationship. (add details as link)
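As I read it, that exchange is a geometric series: the surface feels 7.5 K, sends half of that back up, the layer passes half out and returns half of what it keeps, and so on until the increments vanish. A minimal sketch of that reading (the half-up, half-back split is my interpretation, not a settled result):

```python
def surface_impact(atmos_initial=15.0, transmit=0.5, tol=1e-9):
    """Sum the back-and-forth exchange between surface and one layer.

    Assumptions: the layer starts at atmos_initial (15 K here), each pass
    delivers transmit of its value downward, the surface returns half of
    what it receives, and the layer sends back its non-transmitted half.
    """
    term = atmos_initial * transmit  # first downward share: 7.5 K
    total = 0.0
    while term > tol:
        total += term
        # surface returns half; layer returns the (1 - transmit) share of that
        term *= 0.5 * (1.0 - transmit)
    return total

print(surface_impact())  # approaches 7.5 / (1 - 0.25) = 10.0
```

With both splits at one half, each round trip shrinks by a factor of 0.25, so the series converges quickly rather than running away.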

In order for this 15 C increase in atmospheric temperature to result in a final 30 C impact on the surface, the transmittance would be approximately 0.99652825. If I rounded that value to 0.9965, the final temperature would be 30.1684, an error of approximately 0.2 degrees.

Since the estimates of the atmospheric effect vary somewhat, we can instead use the current estimate of 288 K at the surface and the S-B equivalent temperature at the TOA of 254.5 K at 238 Wm-2, resulting in 33.5 C of atmospheric effect, of which 16.75 would be one half. Using the same approximation of transmittance, 0.9965, the resultant surface temperature would be 33.689 degrees instead of the more accurate 33.50001 using the full estimated 0.99652825. And people wonder why there are so many significant digits in the Planck constant?
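The Stefan-Boltzmann figures above are easy to spot-check. A quick sketch, using only the numbers already quoted (the 5.67e-8 constant is the standard S-B value):

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)

surface_T = 288.0  # K, current surface estimate
toa_flux = 238.0   # W/m^2, outgoing flux at the TOA

# Equivalent blackbody temperature for the TOA flux: T = (F / sigma)^(1/4)
toa_T = (toa_flux / SIGMA) ** 0.25
print(round(toa_T, 1))              # ~254.5 K
print(round(surface_T - toa_T, 1))  # ~33.5 C of atmospheric effect
```

So 254.5 K at 238 Wm-2 and the 33.5 degree difference both check out to within the rounding shown.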

Since temperatures were used and not flux values, that approximation, 0.99652825, would be the emissivity of the surface to a point in the atmosphere that could be approximated as the average source of the atmospheric effect.

If you are following along at home, perhaps you would like to use flux values?

Update: No, this is not the emissivity from the 15 C source in the atmosphere to the surface. This is what the surface emissivity would be to balance a 15 C source in the atmosphere if the emissivity between layers were constant. It is not constant, but the surface emissivity has to be considered in order to accurately calculate the effective emissivity between points in the atmosphere. I am having issues with the blog comments at the moment.

So where does this leave us? First, the surface has an emissivity less than one. With an atmosphere, the return of the atmospheric absorption would have to follow the rules: no more than half of the absorbed radiation can be returned to the source, assuming flat plates as the source. For a spherical source, no more than one quarter could be returned. And for interaction between atmospheric layers, each layer can return no more than half of its absorbed energy to the source layer.
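Taken at face value, those return caps bound how much repeated exchange can amplify a source: if at most a fraction f comes back each round trip, the series source + returned + re-returned + ... sums to 1/(1-f). A sketch of that bound (the flat-plate and spherical fractions are the ones stated above; the geometric-series framing is mine):

```python
def max_amplification(return_fraction):
    """Geometric-series limit of 1 + f + f^2 + ... = 1 / (1 - f)."""
    return 1.0 / (1.0 - return_fraction)

print(max_amplification(0.5))   # flat plate cap of 1/2: at most 2x
print(max_amplification(0.25))  # spherical cap of 1/4: at most ~1.33x
```

The 2x flat-plate limit is why the doubling from 15 to 30 degrees sits right at the edge of what a single half-returning layer can do.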

Possibilities? Since the temperature thought experiment produced results remarkably similar to the actual average surface emissivity, a model based on temperature relationships between layers, the atmospheric R values for example, should properly consider both conductive and radiant flux impacts. That may require a more in-depth proof, but it agrees well with the modified Kimoto equation results.

This is a spreadsheet I call the Dumb Model. It still needs work, but since pooh pooh occurs, I am putting it online just to save what I have so far.

Of course, uploading to Google Documents changed the formats, and I have caught one error so far in the translation from xls to Google.

Once I make sure those and my own silly mistakes are corrected, I can balance the layers (this night only, by the way) to determine as exactly as possible what the tropopause average temperature should be. How? The effective radiating flux of the tropopause less the TOA flux should exactly equal the atmospheric window flux at that point in the atmosphere, which should be exactly equal to the atmospheric radiant effect flux. I am not sure if that is a bold statement or obvious, but I would expect it to be bold for those locked into the conventional wisdom. :)
