Greg,
Thanks for your response. Here are my comments...
Mike
These two statements are completely contradictory if one defines "the public at large" as being all people. Do you have a different definition?
Can you address the second part of my post?
Weather hazard information could be produced with very significant detail that contains information about what the forecaster is actually thinking, including uncertainties. Any highly detailed information can be aggregated into simpler and simpler formats to address various levels of user sophistication. So, if there are folks out there who prefer to know more than just a deterministic forecast, why deny that information to them? The key is providing an effective way to communicate uncertainty in forecasts, and right now we don't do a very good job of that. Some of the references you cited have come to similar conclusions. But does this mean we just throw in the towel and never advance?
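To make the aggregation idea concrete, here is a minimal Python sketch of one detailed probabilistic product being collapsed into simpler and simpler tiers. The tier names, thresholds, and probability values are all invented for illustration, not actual NWS products:

```python
# Minimal sketch of "tiered" hazard information derived from one
# probabilistic forecast. All numbers, thresholds, and product
# names are invented for illustration.

def detailed_product(prob: float, uncertainty: float) -> dict:
    """Tier 1: full detail -- what the forecaster is actually thinking."""
    return {"strike_probability": prob, "uncertainty": uncertainty}

def simplified_product(prob: float) -> str:
    """Tier 2: a coarse category derived from the same number."""
    if prob >= 0.15:
        return "EXTREME threat"
    if prob >= 0.05:
        return "HIGH threat"
    return "LOW threat"

def deterministic_product(prob: float) -> str:
    """Tier 3: today's yes/no warning, recovered as a simple threshold."""
    return "TORNADO WARNING" if prob >= 0.05 else "NO WARNING"

# The same underlying forecast serves every level of user sophistication.
p = 0.08
print(detailed_product(p, uncertainty=0.03))
print(simplified_product(p))
print(deterministic_product(p))
```

The point of the sketch is that the deterministic product is just the bottom tier of the detailed one, so nothing is taken away from users who want only a yes/no answer.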
I can easily agree with this statement, especially regarding tornadoes. Even in the most certain situations, each point within a tornado warning polygon stands only a very small chance of being directly hit by the tornado (much less than 15%). So, the question is: how do we frame the uncertainty information so that we get the appropriate response? That includes members of the "public at large" whose vulnerability, exposure, and response times to the hazard differ from the "average."
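The "much less than 15%" figure can be checked with back-of-the-envelope arithmetic. Here is a minimal Python sketch; the path and polygon dimensions below are illustrative assumptions, not measured values:

```python
# Rough point-strike probability inside a tornado warning polygon.
# All dimensions are illustrative assumptions.

path_length_mi = 10.0        # assumed tornado damage path length
path_width_mi = 0.25         # assumed damage path width
polygon_area_sq_mi = 400.0   # assumed warning polygon area

damage_area = path_length_mi * path_width_mi   # 2.5 sq mi swept by the tornado
point_strike_prob = damage_area / polygon_area_sq_mi

print(f"P(point hit | tornado occurs) ~ {point_strike_prob:.3%}")  # ~0.625%
# Even if a tornado is certain to occur, a given point in the polygon
# faces well under a 1% chance of a direct hit in this example.
```

Doubling or halving any of these assumed dimensions still leaves the per-point probability far below 15%, which is why 15% reads as a very high tornado probability.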
1. Public-at-large means "non-specialized users"; sorry, my definition was not clear. It is the role of the private sector to tailor products for specialized users, so NWS products for specialized users, besides being outside the NWS's mission, would be redundant.
2. "that contains information about what the forecaster is actually thinking,"
My comment on that is
the public does not care what the forecaster is thinking. In the late 1980's and early 1990's I worked with an industrial psychologist and we learned that the vast majority of meteorologists, like engineers, have
intrinsic personalities and professional outlooks. That tends to make us want to give our "customers" what we think they should have rather than what they want. Based on the work I did at that time plus my day-to-day work with customers, I just don't see any demand "to know what the forecaster is thinking." From a practical standpoint, it would increase WFO workload with little to no benefit.
3. "But does this mean we just throw in the towel and never advance?" Of course not. There is plenty of room to improve the
accuracy of weather forecasts which is where I believe the NWS should place its limited resources rather than spending this effort to crank our imperfect accuracy out in more and more complicated terms (as viewed from the public's point of view). Given the difficulty the public has with understanding daily PoP's, the idea they will understand probabilities of (rare) extreme events just doesn't add up to me. An example: The 2009 Christmas Eve blizzard in Oklahoma was a rare, and record, event. Given that the last blizzard of that magnitude in Oklahoma was in 1971 (and not even in the same part of Oklahoma), the idea that people will be able to reach back 30+ years and think, "Hmm, the probably on February 20, 1971, was 30% and today [December 23, 2009, in this example] is 40% means I should be ready for something extraordinary," just doesn't compute (at least to me).
You might reply, "We can educate them." If people don't understand rain PoPs after 40 years (and rain is far more frequent than blizzards and tornadoes), the chances of educating, and more importantly calibrating, the public (i.e., teaching that 15% is a very high tornado probability) are remote. It isn't worth the risk of confusion when we could put those resources into making forecasts and warnings more accurate.
Very interesting exchange from two different points of view. Thanks again, Greg.
Mike