Kevin,
It should be perfectly obvious that I reject the technique of extrapolating hindcasts to the future.
Let me suggest you re-read some of your own posts:
Linearity is used to simplify very complex processes (so that this sort of thing can be done at all). But the sun-ocean-atmosphere system is extremely complex. Until GCMs are able to show some skill (averaged temperature and precipitation) on a regional scale at shorter time periods, there is no reason to believe they have skill at longer time periods.
As I mentioned in a previous post, "model the past, then apply to the future" financial models (which forecast a far simpler system than sun-ocean-atmosphere) have yet to show skill in forecast mode. Last November 16th, this "mathematical" forecast was published:
www.wallstreetwindow.com/drupal/node/1125 . It forecast that the Dow was going to reach 16,000 this year. Unless something very surprising happens, another financial model will have bitten the dust -- big time. I can show literally dozens and dozens of these. Because these forecasts can be easily verified, they demonstrate, to me, that the technique of "pastcast verification" is fatally flawed.
Think about this: If you were in your 60s and walked into the hospital with chest pains, would you want the emergency room to measure your blood pressure, your CBC (complete blood count), your pulse, and your blood oxygen and apply them to your prognosis? Would you then want them to take similar readings at hourly intervals after they apply treatment to determine whether you are, indeed, responding properly?
Or would you rather have them treat you based on a model derived from studying heart attacks in 60-year-olds over the past 20 years, without learning your "initial conditions"? I suspect I know the answer.
You might wish to take a look at this:
www.forecastingprinciples.com/Public_Policy/WarmAudit31.pdf
as well as this...
http://climatesci.colorado.edu/2006...ate-science-to-canadian-policymakers-part-ii/
which includes the following passage:
“So people who make the statement that we can’t predict the weather even, let’s say, ten days in advance, so how can we possibly predict the climate a century in advance are talking about apples and oranges.” [testimony by Dr. Ian Rutherford]
This is a clear misrepresentation of weather and climate modeling. Climate models include weather processes as a subset of the model. Even in the context of claiming that “Climate is the statistics of weather where you do a lot of averaging over time”, Dr. Rutherford is not correct. When we talk about the weather today, we still use statistics such as the daily average temperature. With multi-decadal mean temperatures, we are just referring to a different (longer) statistical averaging time.
Moreover, to characterize climate as a boundary value problem ignores peer-reviewed papers which illustrate that climate prediction is very much an initial value problem. Just one example is
Claussen, M., C. Kubatzki, V. Brovkin, A. Ganopolski, P. Hoelzmann, H.-J. Pachur, Simulation of an abrupt change in Saharan vegetation in the mid-Holocene, Geophys. Res. Lett., 26(14), 2037-2040, 10.1029/1999GL900494, 1999.
Further examples are discussed in
Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38.
and
Pielke, R.A., 1998: Climate prediction as an initial value problem. Bull. Amer. Meteor. Soc., 79, 2743-2746.
Meteorologists who came before you and me started with barotropic models, then baroclinic models, and so on, always verifying against the real atmosphere. As the graphics from the Climate Science article show, there has been gradual improvement in those forecasts. These are measured results. Measuring the results of forecasts in a standardized manner is science. One researcher is then able to build on the work of another and make a more accurate model, and the process repeats.
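To make "measured results" concrete, here is a minimal sketch of the standard MSE-based skill score used in forecast verification, which scores a forecast against a simple reference such as climatology. The numbers, names, and data below are mine, purely illustrative, and not from any operational verification system:

```python
def mse(forecasts, observations):
    """Mean squared error between paired forecasts and observations."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / len(observations)

def skill_score(forecasts, reference, observations):
    """MSE-based skill score: 1 = perfect forecast, 0 = no better than
    the reference, negative = worse than the reference."""
    return 1.0 - mse(forecasts, observations) / mse(reference, observations)

# Hypothetical verification sample: observed temperatures, model forecasts,
# and a constant climatological reference (all values invented).
obs = [10.2, 11.5, 9.8, 12.0, 10.7]
model = [10.0, 11.9, 9.5, 11.6, 10.9]
climatology = [10.8] * 5

print(round(skill_score(model, climatology, obs), 3))  # prints 0.852
```

Tracking a number like this across successive model versions, against independent data, is exactly the "building block" process I have in mind: if the new version scores worse than the old one, you know it, regardless of whose physics are "better."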
We don't know which, if any, of the GCMs are any good because this type of systematic evaluation is not made, as both the Forecasting Principles paper and the Climate Science article demonstrate. The model developers just have "faith" that their tweaks produce positive results.
I was exposed, during the 1970s, to some modellers who were (true story) shocked that the verification statistics on the initial LFM II were worse than those of the LFM I. When presented with actual verification stats, their reply was that the verification couldn't be correct because the LFM II's "physics are better"! We keep tweaking the physics of the GCMs, but we don't know if we are making them more or less skillful because there are no forecasts to validate them against. There is none of the "building block" scientific process that has made numerical modeling for weather forecasting so successful.
I like the scientific process a whole lot. There may be a time when GCMs can make skillful, validated forecasts after meteorologically realistic initialization (as Trenberth calls for; read his whole piece). At that time, I will become intensely interested in the results 1, 10, and 20 years into the future.