
UK Met Office maps of 2070-2100 weather

The UK Met Office Hadley Centre has published a page showing expected temperatures about 75 years from now based on the best models of ocean-atmosphere interactions and given emissions trends. Speculative, of course, but interesting.

http://www.metoffice.com/research/hadleycentre/models/modeldata.html

If you want heat and agony, though, there's no need to go out for fine dining when burgers are already on the grill:

http://drought.unl.edu/dm/monitor.html

What is this, like year 3 of the great Texas drought?

Tim
 
A simple question: If these models are so good, why can't they produce skillful forecasts at 70 days? And, if they cannot consistently produce skillful 70 day forecasts, why do we give them any credibility at 70 years?
 
Not such a simple question, I think. The implicit assumption is that short-term variability is greater than long-term variability. That's pretty clearly true of weather vs. short-term climate per your (serious) comment.

But when you get into the realm of these long-range atmo-ocean coupled models, that's a whole other ball game. And I think they recognize it with rather wide significance windows. They try to refine them with backcasting. I'd say, though, that it's pretty hard to get a good handle on some phenomena that have just started to become significant.

But what is this saying? I think it's saying that the actualized magnitude of things they don't have a good handle on is likely greater than the things they do. So we may either slow-cook, parboil, or fry -- TBD. IMO in no way does this uncertainty diminish the importance of the work.
 
Global Warming....think of the alternative!

Do you think these models take into account the potential global warming spoilers? Like a shutdown of the Gulf Stream, or a large volcanic eruption like Tambora/Krakatoa?
 
Actually, I think it is simple.

Unless something has changed recently, none of the climate models accurately reproduced the mid-20th Century cooling without being "tuned" to do so. If they don't accurately reproduce past weather without fudging, why do we think they will accurately forecast the future?

If a model can forecast the weather 70 years into the future, it should be able to forecast the weather seven months into the future. Since most everyone agrees they cannot, why do we want to bet $1,000,000,000,000 that they are correct? See: http://science.infoshop.org/index.php?name=News&file=article&sid=255 Consider all the ways that money could be spent that are CERTAIN to have a positive return on investment.

Fact: The sun has burned "hotter" (see Max Planck Institute studies) over the last 20 years than at any time in millennia. We have no theory for predicting (let alone a methodology for doing so) how solar energy will change in the next 5 years. We don't understand (let alone model) the radiative properties of clouds. We can't forecast volcanoes. The rapid ocean cooling of the last year was not forecast in advance.

Remember: NOAA and Gray thought 2006 was going to be a bad hurricane season. That forecast was for a period 30 days to 200 days into the future (the length of the hurricane season). Those forecasts were busts. I state this not to criticize those forecasters. I point this out to illustrate what I believe is the folly of forecasting the weather 36,000 days into the future.

Mike
 
Fudging is a good word to use. One of the greatest physicists of the 20th century decided to fudge (include) a number to make sure things looked like he wanted them to look, and regretted that mistake later. Not the same as this climate "issue" of course, but it does apply to one, or a group of people, making things look the way they want them to look.

Like Mike Smith wrote, no person can predict future "solar heat output" with any degree of accuracy. The same thinking also applies to accurate weather prediction beyond a certain number of days. Seasonal outlook accuracy is little better than a five-year-old flipping a coin and being told to forecast warm if heads, cold if tails. Now the UK Met Office is going to tell all of us how things will be 36,000 days in the future... get real! North Korea could nuke the planet before then - forgive me for drifting, but then again, I'm no fan of Nostradamus either :)

Expect a positive charge, get a positive charge. Expect to see global heat, get global heat. I realize that is a terrible perversion of a wonderfully accurate theory. Who is to say that that isn't precisely what UK MET has done?
 
Mike,

Forecasting the number and intensity of hurricanes is VERY different than forecasting average temperatures decades down the road. These global climate models do not model (by design and intention) 'weather' at specific locations in hopes of 'predicting the weather on August 5th, 2085'. There is a lot of parameterization involved, but the goal is to model average conditions. High-frequency variability doesn't matter much like it does in the day-to-day forecasting that we're most familiar with. In short-term forecasts (a few days), there can be a lot of variability (both in space and time). If we average everything out over a relatively large area and a relatively long time, however, this small-scale variability disappears (it "averages out"). In the long term, global energy balance drives the climate and global weather patterns, and it'd be quite an event for a single, day-long local weather event to significantly affect the climate of an entire region.
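To put a number on the "averages out" point, here's a minimal sketch (purely illustrative -- the trend, noise level, and 365-day years are assumptions, not model output): a small warming signal that is invisible in any single day's "weather" shows up clearly once you average over years and compare decades.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative numbers (assumptions, not data): a 0.02 deg C/year warming
# signal buried in day-to-day weather noise with a 3 deg C standard deviation.
years = 50
daily_noise = rng.normal(0.0, 3.0, size=(years, 365))
slow_signal = 0.02 * np.arange(years)[:, None]       # deg C, grows with time
daily = slow_signal + daily_noise                     # "weather" = signal + noise

# A single day tells you almost nothing about the signal...
print("day-to-day std:             %5.2f deg C" % daily.std())

# ...but averaging a year of days shrinks the noise by roughly 1/sqrt(365)...
noise_in_annual_means = daily_noise.mean(axis=1)
print("noise left in annual means: %5.2f deg C" % noise_in_annual_means.std(ddof=1))

# ...so the slow signal stands out when decades are compared.
annual_means = daily.mean(axis=1)
print("first decade vs last decade: %.2f -> %.2f deg C"
      % (annual_means[:10].mean(), annual_means[-10:].mean()))
```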

Yes, there is a lot of politics behind the global warming campaign, and the fact that some in Hollywood are pushing it as if it's a fad does not mean that it isn't happening. Actually, I should rephrase that -- few argue that global warming ISN'T occurring. The vast majority of data show that the globe is warming. Now, the biggest point of contention is whether human activity is having a noticeable and appreciable impact on global warming. I recently saw a study from NCAR scientists that noted that, per their work, the world could not have warmed as quickly and as much as it has by natural influences alone. Say what you will, but the majority of global collaborative studies have concluded that global warming is being aided by anthropogenic factors. There are far more studies supporting the idea that humans are significantly influencing global warming than studies showing otherwise. Of course, that alone doesn't prove that humans are influencing the global climate, but I certainly think it's a "better than not" situation.


The response to global warming is highly non-linear, so an increase in average global temperature does not mean that everyone will see warmer weather. Likewise, it does not mean that the world will continue to warm. The ocean / thermohaline circulation is quite complex, and there are a lot of secondary effects that may produce results directly opposite of those that originally seem obvious. Heck, at some point, the world may get so warm that it triggers some other highly non-linear process that causes a rapid cool-down. Or, we could just hope for one or two massive eruptions from a supervolcano that could cast us into an ice age (as has happened in the past).
 
I would like to point out that my own seasonal remarks from above refer particularly to climatic temperature “predictions.” I was not trying to suggest in any way tropical cyclone predictions are similar in accuracy.
 
Jeff,

Thank you for your response, but you are not telling me anything regarding the models I did not know. Please re-read my response. At no point did I express an opinion regarding whether global warming is real or whether humans are driving climate change. My comments were directed to what appears to be an unwarranted belief in the skill of these ultra long range forecasts -- even though they have been less than terrific in "pastcast" mode.

A challenge: Let's have the most sophisticated model start making daily forecasts for the mean conditions from day 150 to day 180 for several specific cities and start verifying them for a year or two. Not the daily weather, but the 30 day mean temperature and precipitation departure from normal for each location. If your premise is correct that it "averages out" then the mean forecasts will be consistently skillful. Want to make any bets as to the outcome?
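For what it's worth, the bet described above would be easy to score once the forecasts were collected. A rough sketch of the bookkeeping (my own illustration, not anyone's operational verification code; the numbers are made up): compare the mean squared error of the day 150-180 anomaly forecasts against a "no skill" climatology baseline that always forecasts a zero departure from normal.

```python
import numpy as np

def skill_vs_climatology(forecast_anom, observed_anom):
    """Mean-squared-error skill score against a climatology baseline.

    Inputs are 30-day-mean departures from the climatological normal.
    Climatology always forecasts a zero departure, so its MSE is just the
    mean square of the observed departures.  Score > 0 beats climatology;
    score <= 0 means no skill.
    """
    f = np.asarray(forecast_anom, dtype=float)
    o = np.asarray(observed_anom, dtype=float)
    return 1.0 - np.mean((f - o) ** 2) / np.mean(o ** 2)

# Hypothetical verification sample: forecast vs. observed day 150-180 mean
# temperature departures (deg C) for a handful of cities.
forecast = [+1.2, -0.5, +0.8, +0.3, -1.0]
observed = [-0.4, +0.9, +0.2, -1.1, +0.6]
print("skill score vs climatology: %+.2f" % skill_vs_climatology(forecast, observed))
```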

You say of NCAR's work: "the world could not have warmed as quickly and as much as it has by natural influences alone." Given the uncertainties I just listed (solar output, radiative forcings, the unpredicted ocean cooling, among many others), how could anyone say "could not"?!

Various global warming scientists have forecast warming, cooling, drought and flood (I can document this comment), so essentially, at least one of their forecasts will be correct no matter what happens. Vindication!

For the record, I am a human-induced global warming agnostic. I am perfectly willing to believe it when there is good science that indicates it. But 36,000-day forecasts that have not been proven skillful even in pastcast mode, let alone in FORECAST mode, are not at all convincing to me.

Again, thank you for the response.

Mike
 
Over time, any forecasting model worth its jock should work out to climatology (for temperature, precip, etc.) for any given location. It may miss events (resulting in bad forecasts in the short term), but over the long term it should be correct (for climate). The climate models aren't simply taking the WRF/GFS out to 100000000000 hours and saying yes, the climate will be warming.

As far as global warming goes, I'm not going to get into an in-depth discussion, but look at CO2 graphs over time. CO2 flux lines up nicely with warmer and colder periods. I think it is indisputable that a) we're increasing CO2 and b) humans have an impact on global processes (look at the ozone hole for example). Draw your own conclusions from that, but we're at least impacting the Earth system. I thought the plots from the UK looked pretty good for North Dakota... I think I'll stick around up here for awhile ;)

Aaron
 
Thanks for your response, Aaron.

Actually, that is exactly what the climate models are doing: because we don't know what the future state of the climate will be, we don't know whether they are accurately averaging to climate -- especially since climate naturally changes over time. And, because we don't verify these models in forecast mode, we have no idea whether they have any skill at short-term (let alone long-term) climate prediction.

I agree the graphs of CO2 and world temperature show a high correlation with each other, but the graphs I have seen show the temperature rising prior to the CO2 levels, which would imply that CO2 is not the cause.
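To make the "which leads which" question concrete, the usual first step with two time series is a lagged correlation. A minimal sketch, using synthetic placeholder series rather than ice-core data (the construction of the series and the built-in lag are assumptions for illustration only):

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """Correlate x[t] with y[t + lag] for lags in [-max_lag, max_lag].

    A peak at a positive lag suggests x leads y by that many steps; a peak
    at a negative lag suggests y leads x.  Correlation alone cannot settle
    causation either way, but it does pin down the timing.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# Synthetic placeholders (NOT real data): "co2" built to lag "temp" by 2 steps.
rng = np.random.default_rng(1)
temp = np.cumsum(rng.normal(size=200))
co2 = np.roll(temp, 2) + rng.normal(scale=0.5, size=200)  # wrap at the ends is negligible here
corrs = lagged_corr(temp, co2, max_lag=5)
best = max(corrs, key=corrs.get)
print("best lag: %+d steps (positive = temp leads), r = %.2f" % (best, corrs[best]))
```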

You mention the ozone hole. A great deal of money and inconvenience has been expended trying to fix it. Five years ago, there were various proclamations of "victory" -- chlorofluorocarbons had been banned. Well, it turned out not to be that simple. The ozone hole is worse than ever. See: www.theozonehole.com/ozonehole2006.htm . In the long run, it may turn out that the CFC ban was a good thing and the ozone hole will "heal." But, in view of the worsening ozone hole (in spite of the ban), it is questionable whether banning CFCs will turn out to have a favorable benefit-to-cost ratio or to have had any significant effect on the ozone hole at all.

The worsening of the ozone hole after spending all of the money and effort to ban CFCs should be a cautionary note for atmospheric scientists in the global warming arena. The sun-earth-atmosphere-ocean system is extraordinarily complex. I simply don't buy the idea that unverified models can make accurate 36,000-day forecasts or that we should spend one quadrillion dollars based on them.

Mike
 
Considering CFCs can last for ~100 years in the atmosphere, I wouldn't expect the ozone hole to simply go away after the banning.

[Chart: Antarctic ozone hole area over time -- oz_hole_area.jpg]

I'd say the rate of change has decreased.
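A toy decay calculation along the same lines (illustrative only -- it assumes a single ~100-year effective lifetime, which glosses over the different CFC species and their transport into the stratosphere) shows why the hole wouldn't be expected to close quickly even if the ban worked exactly as intended:

```python
import math

# Toy model: after emissions stop, the stratospheric CFC burden decays roughly
# exponentially with an assumed effective lifetime of ~100 years.
lifetime_years = 100.0

for years_after_ban in (10, 20, 50, 100):
    remaining = math.exp(-years_after_ban / lifetime_years)
    print("%3d years after emissions stop: ~%2.0f%% of the burden remains"
          % (years_after_ban, 100.0 * remaining))
# ~90% after 10 years, ~82% after 20, ~61% after 50, ~37% after 100.
```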
 
Mike,

I was a little vague in my last post regarding the natural forcings research -- the research only looked at variations in the sun's brightness. Here's the article on the UCAR website about the study:

“Our results imply that, over the past century, climate change due to human influences must far outweigh the effects of changes in the Sun's brightness,” says Wigley.

...

Brightness variations are the result of changes in the amount of the Sun’s surface covered by dark sunspots and by bright points called faculae. The sunspots act as thermal plugs, diverting heat from the solar surface, while the faculae act as thermal leaks, allowing heat from subsurface layers to escape more readily. During times of high solar activity, both the sunspots and faculae increase, but the effect of the faculae dominates, leading to an overall increase in brightness.
The new study looked at observations of solar brightness since 1978 and at indirect measures before then, in order to assess how sunspots and faculae affect the Sun’s brightness. Data collected from radiometers on U.S. and European spacecraft show that the Sun is about 0.07 percent brighter in years of peak sunspot activity, such as around 2000, than when spots are rare (as they are now, at the low end of the 11-year solar cycle). Variations of this magnitude are too small to have contributed appreciably to the accelerated global warming observed since the mid-1970s, according to the study, and there is no sign of a net increase in brightness over the period.



To assess the period before 1978, the authors used historical records of sunspot activity and examined radioisotopes produced in Earth's atmosphere and recorded in the Greenland and Antarctic ice sheets. During periods of high solar activity, the enhanced solar wind shields Earth from cosmic rays that produce the isotopes, thus giving scientists a record of the activity.


Changes in Solar Brightness Too Weak to Explain Global Warming
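To put the 0.07 percent figure quoted above into rough energetic perspective, here is a back-of-the-envelope comparison (my own arithmetic with round textbook values -- the solar constant, albedo, and the simplified CO2 forcing expression are standard approximations, not numbers from the study):

```python
import math

S0 = 1366.0      # W/m^2, approximate total solar irradiance ("solar constant")
albedo = 0.30    # approximate planetary albedo

# A 0.07% peak-to-trough change in solar output, spread over the sphere
# (factor of 1/4) and reduced by the planetary albedo:
delta_solar = 0.0007 * S0 * (1.0 - albedo) / 4.0

# Widely used simplified expression for CO2 forcing, 5.35 * ln(C/C0)
# (Myhre et al. 1998), for the rise from ~280 ppm to roughly 380 ppm today:
delta_co2 = 5.35 * math.log(380.0 / 280.0)

print("solar-cycle forcing:        ~%.2f W/m^2" % delta_solar)   # about 0.17
print("CO2 forcing, 280->380 ppm:  ~%.2f W/m^2" % delta_co2)     # about 1.6
```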




Then, there's this one:
Previous efforts to understand the causes of changes in SSTs have focused on temperature changes averaged over very large ocean areas, such as the entire Atlantic or Pacific basins. The new research specifically targets SST changes in much smaller hurricane formation regions.
For the period 1906-2005, the researchers found an 84 percent probability that human-induced factors—primarily an increase in greenhouse gas emissions—account for most of the observed rise in SSTs in the Atlantic and Pacific hurricane formation regions.
"The important conclusion is that the observed SST increases in these hurricane breeding grounds cannot be explained by natural processes alone," says Wigley. "The best explanation for these changes has to include a large human influence."


Human Activities Are Boosting Ocean Temperatures in Areas Where Hurricanes Form, New Study Finds
 
From the EPA's web site:

How do we know that natural sources are not responsible for ozone depletion?
While it is true that volcanoes and oceans release large amounts of chlorine, the chlorine from these sources is easily dissolved in water and washes out of the atmosphere in rain. In contrast, CFCs are not broken down in the lower atmosphere and do not dissolve in water. The chlorine in these human-made molecules does reach the stratosphere. Measurements show that the increase in stratospheric chlorine since 1985 matches the amount released from CFCs and other ozone-depleting substances produced and released by human activities.


What is being done about ozone depletion?
In 1978, the use of CFC propellants in spray cans was banned in the U.S. In the 1980s, the Antarctic "ozone hole" appeared and an international science assessment more strongly linked the release of CFCs and ozone depletion. It became evident that a stronger worldwide response was needed. In 1987, the Montreal Protocol was signed and the signatory nations committed themselves to a reduction in the use of CFCs and other ozone-depleting substances.



So, while the amount of CFC in the atmosphere has been going down, the ozone hole has reached record levels.


I don't think the writer of this article (among many others) thought the ozone hole would reach record levels in 2006:

Earth's ozone depletion is finally slowing
30 July 2003
NewScientist.com news service
Gaia Vince


Almost 30 years after it was first reported that pollutants were destroying the Earth's protective ozone layer, there is clear evidence that the global CFC ban has had an impact.

For the first time, it has been shown that the rate of ozone depletion in the upper stratosphere - 35 to 45 kilometres up - is slowing down. "This is the beginning of a recovery of the ozone layer," says Michael Newchurch, at the University of Alabama in Huntsville, who led the new research.


Protecting the earth's ozone layer is a noble goal. And, on balance, banning CFC's may turn out to be a good thing -- the jury is still out considering that most thought the record size of the ozone hole was reached in September, 2000 and it would shrink from there.

All I am saying is that the skeptics (Fred Singer for one) about human effects and solutions regarding the ozone layer have turned out to, at minimum, have had a valid point.

Jeff, the original study that NCAR relied upon came from the Max Planck solar institute in Germany and was released August 2, 2004, and NCAR accurately reported on it in the passage you cite. Interestingly, there is another Planck study, released October 28, 2004, that says the sun is burning at its brightest in 8,000 years -- a study that a lot of global warming advocates seem to have ignored. That study says that how much of the current warming of the earth is due to the hotter sun is an "open question." And, given that we do not understand the radiative transfer between clouds and the non-cloudy atmosphere, I would say that calling it an "open question" is putting it mildly.

The atmosphere is behaving in some ways that were unexpected five years ago (record ozone hole and cooler oceans). Given that global warming appears to be a far more complex problem than the ozone hole, healthy scientific skepticism seems in order.

That said, this is a great example of a civil and informative scientific discussion.

Mike
 
The sun is the "warmest" in 8,000 years. We both agree that radiative transfer is not well understood. We both agree the ocean interface is not well understood. And, I think we agree (correct me if I am wrong) that if we ran the climate models and asked them to forecast the mean conditions from Day 150 to Day 180 they would probably fail far more often than they would succeed (otherwise, why doesn't NOAA use them in their extended outlooks?).

Given the above, I don't understand the basis for the following:
"In the meantime, basic physics suggests anthropogenic climate change is certainly a valid claim, which is decreasingly becoming susceptible to being proved wrong."

Chris

I'm sincerely interested in how you come to the conclusion that anthropogenic climate change is "decreasingly ... susceptible to being proved wrong."

Look forward to your response.

Mike
 
Chris,

Thanks so much for a thoughtful response. You bring up some good points that I will try to answer.

"If the sun is the warmest it has been in 8,000 years, how does this explain the 'rapid' increase in global temperature over the last couple of decades?"

Quoting from the October, 2004, Planck study: solar activity has remained on a roughly constant (high) level since about 1980. The maximum in solar activity pretty much corresponds to the rise in temperatures as plotted by the IPCC.

"Do you think the article I linked above, which offers a pretty compelling explanation, is completely bogus, and then, if so, why?"

Very good question. That particular article escaped my attention, so I did a quick read. One of the things I noted is that this study, like so many others, relies on what I call "fudging." Starting on p. 1646, they discuss that the model did not handle the Arctic Oscillation well. So, "adjustments" are made. I realize this is commonly done in modeling, but is it good science in the context of predictive science?

Remember, we are attempting to validate a forecast of future conditions using the past. If we have to fudge to get the right answers, why do we believe the "unfudged" model makes an accurate forecast 70 or 100 years into the future? That does not scientifically follow in my view.

"The fact that a typical climate model wouldn't probably get an ENSO or an MJO (if it can even successfully simulate that) exactly right in 150 days implies that we should apply such skepticism in our interpretation of the output 150 years out, although as models begin to resolve convection (edit: and improve ocean coupling), these uncertainties decrease." Bingo! We all agree that the models can't get even average conditions right at 150 days, so we must be skeptical at 75 or 150 years. I don't doubt the models will get better. When they can get the 20th Century right without fudging and can make consistently accurate 6-month or one-year forecasts, call me -- I will be intensely interested in the results.

Finally, it should be pointed out that the British data show a leveling in world temperatures since about 2000, as opposed to the IPCC's steady rise. Why the difference? Is that difference significant? Maybe.

You have probably seen that the Russian Academy of Sciences recently predicted major global cooling starting in the next few years (starting 2012-2015 and peaking in 2055-2060) and then global warming in the early 22nd Century. They compare the magnitude of this cooling to "The Little Ice Age." The Russian Academy adds, "The Kyoto initiatives to save the planet should be put off until better times... The global temperature maximum has been reached on earth and the temperature will decline to a climatic minimum even without the Kyoto protocols," Abdusamatov said.

His contention that the global maximum has been reached is consistent with the British data. Bill Gray is also calling for major cooling to begin in the next few years. The recent, and unexpected, ocean cooling may be a sign they are correct (I have no opinion on this).

For one moment, humor me and let's assume the Russian Academy and Bill Gray are correct and major cooling is imminent. Now superimpose former Vice President Gore's rhetoric about warming being "settled science": "This is not a political issue. This is a moral issue -- it affects the survival of human civilization," Gore said in an hour-long speech at the New York University School of Law. "Put simply, it is wrong to destroy the habitability of our planet and ruin the prospects of every generation that follows ours."

Now, assume this major cooling starts in 2012, and by 2020 the canals in the Netherlands are frozen (as the Russians predict) and the U.S. is suffering through one bone-chilling winter after another. Do you think no one will remember that global warming was "settled science"? Do you think science in general and atmospheric science in particular will have one shred of credibility left? Then, what happens when there is a true atmospheric crisis that we can predict? Will we be the science that cried "wolf"??!

I am older (54) than most of the people who post on StormTrack. I do it because I see a lot of intelligent people and I hope some of my perspective might be beneficial. I readily admit I am a skeptic about long range forecasting models and long range forecasts in general. I was a professional meteorologist in the late 1970's when "global cooling" was settled science.

Let me leave you with two quotes from "Time" magazine...

1974: "Areas of Baffin Island in the Canadian Arctic, for example, were once totally free of any snow in summer; now they are covered year round."

2006: "Late last year, for example, researchers analyzed data from Canadian and European satellites and found that the Greenland ice sheet is not only melting, but doing so faster and faster, with 53 cubic miles draining away into the sea last year alone, compared to 23 cubic miles in 1996."

Do we really want to risk our credibility and literally all the money in the world ($1,000,000,000,000) on the current generation of meteorological models? Is that good science? Is that good policy? Could that money be invested in ways that are more likely to have a positive return?

All of us must reach our own conclusions. I just want those conclusions to be based on sober readings of the best science possible, with eyes open to the limits of contemporary predictive geoscience.

Respectfully,
Mike
 
Chris,

They are not talking about the thermohaline circulation, but changes in solar energy. See below. Have a good weekend.

Mike




Forget Kyoto— global cooling is on the way
[25.08.06 19:31]

Russian scientist predicts global cooling, a cooling of the Earth which could involve glaciation.

Global cooling could develop on Earth in 50 years and have serious consequences before it is replaced by a period of warming in the early 22nd century, a Russian scientist said Friday.


Global cooling - though never widely supported - is a theory postulating an overwhelming cooling of the Earth which could involve glaciation.

"On the basis of our [solar emission] research, we developed a scenario of a global cooling of the Earth's climate by the middle of this century and the beginning of a regular 200-year-long cycle of the climate's global warming at the start of the 22nd century," said the head of the space research sector of the Russian Academy of Sciences' astronomical observatory.

RIA Novosti reports that Khabibullo Abdusamatov said he and his colleagues had concluded that a period of global cooling similar to one seen in the late 17th century - when canals froze in the Netherlands and people had to leave their dwellings in Greenland - could start in 2012-2015 and reach its peak in 2055-2060.

He said he believed the future climate change would have very serious consequences and that authorities should start preparing for them today because "climate cooling is connected with changing temperatures, especially for northern countries."

"The Kyoto initiatives to save the planet from the greenhouse effect should be put off until better times," he said, referring to an international treaty on climate change targeting greenhouse gas emissions.

"The global temperature maximum has been reached on Earth, and Earth's global temperature will decline to a climatic minimum even without the Kyoto protocol," Abdusamatov said.
 
Just how large this influence is, is subject to further investigation. However, it is also clear that since about 1980, while the total solar radiation, its ultraviolet component, and the cosmic ray intensity all exhibit the 11-year solar periodicity, there has otherwise been no significant increase in their values. In contrast, the Earth has warmed up considerably within this time period. This means that the Sun is not the cause of the present global warming.
http://www.maxplanck.de/english/ill...tion/pressReleases/2004/pressRelease20040802/

That Russian claim seems to be at odds with other studies that have been published and reviewed. Even the IPCC took a look at this issue and decided that solar output would not explain the recent warming even if it were to be amplified by a feedback mechanism. It certainly does look like increased solar forcing has played a role, but the degree of that role is the question, and I think, based on what is currently understood, it would not account for the level of warming.

"Do we really want to risk our credibility and literally all the money in the world ($1,000,000,000,000)"

I have no idea where you came up with that number, but I'd be skeptical of any attempt to put a price tag on significantly curbing CO2 emissions. Investing in cleaner technologies and better energy sources isn't going to tear the world's economy apart. Even if that number were true, it represents less than half of a single year's US budget. That's an insignificant number compared to the climatic impacts that WILL result, and ARE resulting, from inaction.
 
Scott,

As I pointed out in Post #16, the results you quote are from the earlier study. The later study (which, as I pointed out, seems to be ignored by global warming proponents) says the effect of the "warmer" sun on the earth's climate over the last 20 years is an "open question." To my knowledge, no one questions that the drop in temperature leading to the Little Ice Age was caused by changes in solar energy, as was the subsequent warmup (which some believe we are still experiencing). If the sun could cause that much warming for 300+ years, why can't 20 years of the "warmest" sun in the last 8,000 years cause the incremental warming since about 1980?

I suspect that if we currently had the climate of the 17th Century (Little Ice Age) and a major warming was forecast there would be plenty of people who would be fearful of it. The change in climate from the 17th Century to early 21st Century was major -- yet somehow the world not only survived but prospered.

You seem to have a great deal of faith in unverified 70 year model forecasts. You also seem to have a great deal of faith in the IPCC process in spite of resignations from reputable meteorologists (Chris Landsea) and findings of bias (British House of Lords). The British data shows that global temperatures have been flat since 1998. Moreover, there are credible charges of bias in the IPCC's temperature statistics since that time. All of this can be found with just a Google search.

I point this out because of your comment that it is "an insignificant number compared to the climatic impacts that WILL result, and ARE resulting, from inaction." I just don't see that the current uncertainties lead to that conclusion.

Thanks for your thoughts.

Mike
 
I'm no expert on solar climatology, but I do know this: the accurate measurement of solar insolation impinging on the Earth is difficult and only really possible from space in the last twenty years or so. There is no way scientists could say positively that "the sun is the 'warmest' in 8,000 years", if they mean by that solar output.

In fact they can only be drawing inferences from measurements and natural evidence taken on the earth's surface. So the correct assertion is that "the most solar radiation in 8,000 years is reaching the earth's surface".

The amount of insolation reaching the surface is, of course, dependent on solar output; but it's also dependent on photochemical properties of the atmosphere that reflect and/or absorb solar energy before it reaches the surface. Atmospheric reflection and absorption are something like two orders of magnitude greater than the alleged solar variability.

Given this, IMHO the Russian hypothesis, if anything, is more cause for concern. It means that for whatever reasons there's more solar energy reaching the earth's surface than before, but those reasons are most likely related to atmospheric effects that may not reverse or may themselves be a consequence of anthropogenic warming. FWIW.
 
David,

Thank you for your thoughts.

You are, of course, correct that they have to use a form of proxy data to estimate solar flux back 8,000 years. When I was in dynamic meteorology class, circa 1973, when we were learning the equations of the atmosphere, lower case "c" was the "solar constant." It was thought then that the amount of solar energy reaching the earth was constant on the time scale of decades and centuries. We now know that is not the case.

That said, climatologists have to use proxy data to estimate the earth's climate prior to about 1880-1900. There is a lot of room for error with both types of proxy data.

This is my main point in this thread. A lot of "settled science" and definitive statements about the future state of the atmosphere are thrown about in the global warming debate that seem to have a rather tenuous scientific basis.

The classic scientific method says that hypotheses should be proven through careful experimentation with repeatable results. If you accept that definition of the scientific method, please consider the following....

1. The models cannot reproduce known conditions in the 20th Century without "fudging."
2. The models cannot make any kind of reliable forecast at 6 months or 12 months, which, to me, calls into question their ability to accurately forecast at 60 years.
3. We have no theory as to why the sun's energy varies, let alone a method of forecasting those changes.
4. We don't understand many atmospheric and atmospheric-oceanic feedback processes.

Given 1-4, how can we make definitive statements?

5. We want to exclude from Kyoto the two largest polluting nations (China and India; see last week's "Wall Street Journal" article on the level of pollution in those nations), yet somehow control greenhouse gasses at tremendous cost (trillions or even one quadrillion dollars, depending on who is doing the estimating). Even the proponents of Kyoto (see the 1998 IPCC document) believe it would lower earth's temperature by 0.03 deg C by 2050. I do not believe, even by 2050, we will be able to detect a change in worldwide temperature of 0.03 deg C, which makes the effect of the Kyoto process a scientific challenge to even measure, assuming it's adopted (see the sketch after point 6).

6. Medicine says, "First, do no harm." If the Russians and Dr. Gray are correct that the real danger is cooling due to solar influences and if proponents of global warming are correct (one to one relationship between temperature and greenhouse gasses), are we doing the wrong thing by spending money to eliminate the greenhouse gasses? Are those gasses an "insurance policy" against catastrophic cooling? I don't know. And, no one else does.
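On the 0.03 deg C point in item 5, here is a small Monte Carlo sketch of the detectability problem (the 0.1 deg C year-to-year scatter of the global mean is an assumed, illustrative number, and comparing two ten-year periods is a deliberate simplification):

```python
import numpy as np
from scipy import stats

# How often would a 0.03 deg C shift in the global mean be detected, given an
# assumed natural year-to-year scatter of ~0.1 deg C, comparing two decades?
rng = np.random.default_rng(42)
sigma, shift, years, trials = 0.10, 0.03, 10, 5000

detections = 0
for _ in range(trials):
    decade_without = rng.normal(0.0, sigma, years)
    decade_with = rng.normal(shift, sigma, years)
    _, p_value = stats.ttest_ind(decade_with, decade_without)
    detections += p_value < 0.05

print("shift detected in %.0f%% of trials" % (100.0 * detections / trials))
# With these assumed numbers, the shift is detected only a small fraction of the time.
```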

If one could clean up the atmosphere for $500,000 and guarantee no negative economic effects, it would be a slam-dunk decision regardless of solar forecasts, as it would almost certainly be a positive cost-benefit decision. That is not what we are talking about. We are talking HUGE sums of money and potentially serious economic disruption.

The CFC industry was an $8 billion (1980 dollars) enterprise prior to the ban so that we could "fix" the ozone hole. People lost their jobs. As of today, it is certainly not clear whether that $8 billion loss and the concurrent economic disruption will be a net positive as the ozone hole is larger than ever (see posts above). Doesn't this suggest we need to be very cautious about the strategy of overturning industries based on atmospheric processes that are not fully understood?

Can we really say, based on today's models, that investment/economic disruption at least 100 times that large will be a net positive? It seems an awfully big bet based on rather flimsy evidence.

Perhaps I have a different perspective, one of business, where we know there is a limit to resources and we have to make cost-benefit-based decisions. It is hard for me to see that either the science or economics tip in favor of making this bet.

Again, thanks for your thoughtful comments. I certainly would like to learn your response to these items.

Mike
 
Mike, there really isn't anything left to say to your points since Jeff, Aaron and Chris have all had excellent responses; I recommend re-reading those posts because it has already been covered. Anyways, I think we can agree to disagree here.



 
Good discussion! I'm a transportation modeler, which means that I've been involved in the details of "mode choice" models. These models attempt to forecast what transport modes travelers use when they travel, i.e. single-occupant auto, multi-occupant auto, bus, bicycle, light rail, park-and-ride, etc., etc. We do this to evaluate the "cost/benefit" of expensive capital projects and prioritize them. And then there're air quality conformity models.... :rolleyes:

To make matters short, there's not much someone can tell me about the baffling difficulty of projective forecasting. My models parameterize matters of social taste and preference that can be wildly divergent and whose characteristics are opaque to physical law. O.k., there're my creds, FWTW.

Point 1
Model calibration is necessary and unavoidable whenever there's any significant non-deterministic component in the model. Some components need it a lot, while others don't need much or any. Regardless, only in the case of very simple models does the responsible modeler calibrate the components to replicate the objective, i.e. the answer. Rather, components are separately calibrated to better replicate measured and (hopefully) well-understood parametric quantities.

Then and only then do you look at the model result vs. Truth. If you get an answer that's sufficiently close to the Truth, you heave a sigh and sneak out of town before an anomaly surfaces. When the inevitable anomaly does surface, you evaluate the model structure theoretically to figure out if there're any components you want to try changing -- remove, add, or re-formulate. Then you calibrate the components again, cross your calloused fingers, and re-validate ad nauseam.

What's done is not "fudging".
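A toy illustration of that workflow (entirely synthetic -- nothing to do with any real climate or travel-demand model): each component is calibrated against its own measured quantity, and only afterward is the assembled model compared against the "Truth" series, which it was never fitted to.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy two-component system: the "Truth" depends on two processes, x and z.
true_a, true_b = 2.0, -1.5
x = rng.uniform(0.0, 1.0, 200)
z = rng.uniform(0.0, 1.0, 200)
truth = true_a * x + true_b * z + rng.normal(0.0, 0.1, 200)

# Step 1: calibrate each component separately against its own measured,
# well-understood response (controlled runs where only one input varies).
x_only = rng.uniform(0.0, 1.0, 50)
a_hat = np.polyfit(x_only, true_a * x_only + rng.normal(0.0, 0.1, 50), 1)[0]
z_only = rng.uniform(0.0, 1.0, 50)
b_hat = np.polyfit(z_only, true_b * z_only + rng.normal(0.0, 0.1, 50), 1)[0]

# Step 2 (validation, not calibration): only now compare the assembled model
# against the Truth series, which was never used for fitting.
prediction = a_hat * x + b_hat * z
rmse = float(np.sqrt(np.mean((prediction - truth) ** 2)))
print("component estimates: a=%.2f, b=%.2f;  validation RMSE=%.2f" % (a_hat, b_hat, rmse))
```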

Point 2
Basic laws of statistics, especially the Laws of Large Numbers and the Central Limit Theorem, are pretty darn well understood in theory and application by those who design models.

It's entirely unsurprising that the long-term behavior of a variant phenomenon is better understood, statistically speaking, than its shorter-term behavior.

Point 3
I think they do indeed have theories regarding variations in Earth-incident solar output and models for forecasting same. Obvious ones are the Milankovitch cycles, the asymmetry of the Earth's orbit itself, and the noted solar magnetic cycle. I'm sure there are some astrophysical ones as well. Might perturbations come up that they haven't accounted for? See Point 1 above....

Point 4
No doubt at least somewhat true -- and thank goodness! That's what keeps creative scientists' paychecks coming in and value-added expert contractors such as yourself or myself(?) in business. If it weren't the case then weather forecasts would be built into a Casio watch and all the meteorologists would be scraping by selling chase videos, part-timing at Wal-Mart, or both! ;)

... As Scott and Chris have already said while I've been hacking my way through this post....
 