
The Utility of Winter Storm Forecasts

  • Thread starter: Mike Smith
4 - 9 inches? Are you sure that was their forecast? If so - that's pretty useless. 4 inches is a bit of a nuisance. 9 inches is a major event.

I suppose it's safer to forecast "2 to 12 inches" if you want a perfect forecast, but does that offer value?

When I checked, the NWS had 4-8" forecasted (in the warning text) and Channel 13 said they were expecting 4-9", so I assumed it was taken from the NWS. This forecast was for central Iowa.
 
When I checked, the NWS had 4-8" forecasted (in the warning text)

I see. That's a general overview for the entire WSW area. The actual forecast from Wednesday morning for Des Moines said:

TOTAL SNOW ACCUMULATION 8 TO 11 INCHES.
That's a somewhat more realistic range. Looks like DSM's official total for the storm was 5.7 inches.

Channel 13 said they were expecting 4-9", so I assumed it was taken from the NWS.
Not likely. That would probably be Channel 13's forecast -- TV meteorologists make their own forecasts, which don't always match the NWS.
 
Well, I never thought about it that way. I just assumed all TV stations used the NWS for their forecasts, but I know now that is not true. Thanks for letting me know.
 
I also think people tend to look at "worst case scenario" when they see a forecast. If the forecast calls for 8-12 inches, and you get 7 inches... that's only an error of one inch, but I think many would see it as "they said we were going to get 12 inches, and we only got 7!"

I've noticed that too. Another thing is that when rain is predicted most people don't break out their rain gauge and check the amount. If the ground is wet, the forecast verified to most people. When it snows, the amount is right there for everyone to scrutinize.
 
Perhaps if the winter forecast information were given as a distribution of values and their expected probability of occurrence, then we might get more insight into the forecaster's thought process and allow users to make more informed preparedness decisions. Giving a deterministic forecast (e.g., 3-4") that verifies outside that range may seem like a bust to some. But a probabilistic forecast showing the range of forecast values and associated probabilities could be much more useful (e.g., 0-1" 10%, 1-2" 20%, 2-3" 30%, 3-4" 40%, 4-5" 30%, 5-6" 20%, and so on).
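To make the idea concrete, here is a minimal sketch of how such a bin-based distribution could be represented and queried in Python. The bins and weights are the illustrative numbers from the example above (normalized here, since as written they total more than 100%); this is not an actual forecast product.

```python
# A minimal sketch of a bin-based probabilistic snowfall forecast.
# Bins (inches) and weights follow the illustrative example above;
# the weights are normalized so the distribution sums to 100%.

bins = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
weights = [10, 20, 30, 40, 30, 20]
probs = [w / sum(weights) for w in weights]

def prob_at_least(threshold):
    """P(accumulation >= threshold), summing bins at or above it."""
    return sum(p for (lo, _), p in zip(bins, probs) if lo >= threshold)

for (lo, hi), p in zip(bins, probs):
    print(f'{lo}-{hi}": {p:.0%}')
print(f'P(>= 4"): {prob_at_least(4):.0%}')
```

A casual user could simply read off the most likely bin (3-4"), while a user with a specific threshold, say a road crew deciding whether to pre-treat, could key on the exceedance probability instead.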

As far as impacts, snow amount isn't the only factor. We just got through a blizzard that dumped what some might consider a meager 7" of snow on Norman, but, combined with the wind and low temperatures, it was allowed to drift (and re-drift after clearing) to 2-3' over roads in spots. That same 7" without wind probably would have been a lot more manageable from a road maintenance standpoint and a much lesser impact to local transportation. One must also consider the temperature of the road surfaces and other variables when assessing impacts.

A lot of current research is being presented in BAMS about this. It seems most people (even the general public) would like to see some sort of probabilistic value attached to precipitation forecasts.
 
This type of forecast is unlikely to be adopted by radio and television as it is too difficult for the general public to understand. Imagine listening to this list of numbers on the radio.
Where's your social science research data supporting the assumption that the "general public" (however that is defined) finds it too difficult to understand probabilities?

This kind of detailed information isn't easily adaptable to radio - that's a no-brainer. But that doesn't mean it couldn't be made available somewhere. For example, you aren't providing your probabilistic forecasts to your clients via radio. Also, let me assume that your clients understand your probabilistic snow forecasts because you have explained how to interpret them. Assuming you have done the above social science research and have proven your statement, then why couldn't the "general public" (however that is defined) have access to similar interpretive materials in order to understand this type of forecast information and remove those barriers?
 
One thing that I am constantly reminding my friends, who have no weather knowledge, is that meteorologists aren't just guessing out of thin air what they think the weather will be. They are going off of numerous computer models and model forecasts. Sure, there is obviously a large amount of interpretation of data that is not computer based, but if 5 computer models all paint a quarter inch of QPF, the meteorologists say that's how much snowfall will happen, and then there's only 0.10" of QPF, they get blamed for it when all the computers were showing the same thing. Obviously, when it comes to snowfall that's a huge difference in amounts; not so much if it was just rain!
Another instance: tonight in Omaha we were supposed to be -23, but at 2:30 in the morning it's still at 0 degrees because of clouds that haven't eroded away like all the models said would happen. People will probably be complaining tomorrow about it not getting as cold as it was supposed to, like that's a bad thing!!
 
Where's your social science research data supporting the assumption that the "general public" (however that is defined) finds it too difficult to understand probabilities?

Greg,

The literature is filled with studies that show people don't understand PoPs. Take a look at this:

One of the major objectives of this project was to probe the users in regard to interpretation and understanding of the terminology used in NWS forecasts, including the probability of precipitation (POP) statement. Many other researchers (Sink 1993; Vislocky et al. 1995; Last and Skowronski 1990; Shaefer and Livingston 1988; Murphy and Brown 1983b; Murphy et al. 1980) have underscored both the positive and negative aspects of POP forecasts.

The idea behind the usage of a percentage to describe the probability of precipitation is a sound one. It should be logical to almost everyone that a 40% chance of rain is much less than an 80% chance. This seems to be a very concise way to describe the inherent uncertainty surrounding any precipitation forecast. Yet it is quite apparent that often a gap exists between the forecaster and the user as to the interpretation of the prediction. While some have argued that the general public does not understand probability, Murphy et al. (1980) found that the primary source of misunderstanding is caused by confusion about the specific event corresponding to the probability and not by a lack of comprehension of the definition of probability itself. The perpetual misinterpretation of our forecasts by the public indicates a persistent problem in forecast wording.
Source: http://pajk.arh.noaa.gov/info/articles/survey/poptext.htm
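For context on where that gap can come from: the NWS defines PoP as the forecaster's confidence that measurable precipitation will occur somewhere in the area, multiplied by the expected areal coverage. A quick sketch with hypothetical numbers:

```python
# The NWS formulation of PoP: PoP = C x A, where C is the forecaster's
# confidence that measurable precipitation occurs somewhere in the
# forecast area, and A is the expected fraction of the area covered.
# The values below are hypothetical.

confidence = 0.8     # 80% confident precipitation develops somewhere
area_coverage = 0.5  # expected to cover half the forecast area

pop = confidence * area_coverage
print(f"PoP = {pop:.0%}")  # 40%
```

The same 40% can come from high confidence in spotty coverage or low confidence in widespread coverage, which is exactly the "specific event" ambiguity the study above describes.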

Where is your research that shows the general public wants probabilistic winter storm warnings? Probabilities are being provided to companies and governmental DOTs (the few that want them) via the private sector. That said, relatively few desire probabilities, as you and I have previously discussed.

Rain is a relatively frequent event, yet there is still great confusion over the exact definition of the PoP, as shown above. Blizzards and tornadoes are rare, life-threatening events. The idea that people will understand that a 15% probability is "high" and take precautions is far-fetched.

The NWS cannot be "all things to all people." Its mission is to serve the public at large. I suspect the OUN NWS was plenty busy on Christmas Eve without having to generate probabilistic forecasts. I doubt the thousands of people trying to decide how to handle the blizzard at their homes or on the road wanted to take the time to figure out "probabilities." They want direct, actionable information, especially when a life-threatening situation presents itself.

Mike
 
Looking at snowfall totals from the event that triggered this thread, I would say the warnings and advisories verified extremely well. Parts of NE IL and SE WI saw 10-12 inch amounts, with isolated higher totals.

GRR (advisory for 3-6 inches): http://www.crh.noaa.gov/news/display_cmsstory.php?wfo=grr&storyid=46108&source=0
DTX (advisory for 3-5, upgraded to 4-6 inches): http://www.crh.noaa.gov/dtx/display_event.php?file=snow201001081848
MKX (winter storm warning for eastern counties for 6-12 inches & locally higher): http://www.crh.noaa.gov/news/display_cmsstory.php?wfo=mkx&storyid=46060&source=0
Regional map from ARX: http://www.crh.noaa.gov/images/arx/jan72010/Jan82010_12Z48snowfallRegional.png
 
The literature is filled with studies that show people don't understand PoPs.
As well as this study that shows that some people want probabilistic information:

Communicating Uncertainty in Weather Forecasts: A Survey of the U.S. Public.

The NWS cannot be "all things to all people." Its mission is to serve the public at large.
These two statements are completely contradictory if one defines "the public at large" as being all people. Do you have a different definition?

Can you address the second part of my post?

Greg Stumpf said:
...why couldn't the "general public" (however that is defined) have access to similar interpretive materials in order to understand this type of forecast information and remove those barriers?

Weather hazard information could be produced with very significant detail that contains information about what the forecaster is actually thinking, including uncertainties. Any highly detailed information can be aggregated into simpler and simpler formats to address various levels of user sophistication. So, if there are folks out there that prefer to know more than just a deterministic forecast, why deny that information to them? The key is providing an effective way to communicate uncertainty in forecasts, and right now we don't do a very good job of that. Some of the references you cited have come to similar conclusions. But does this mean we just throw in the towel and never advance?

Blizzards and tornadoes are rare, life-threatening events. The idea that people will understand that a 15% probability is "high" and take precautions is far-fetched.
I can easily agree with this statement, especially regarding tornadoes. Even in the most certain situations, each point within a tornado warning polygon only stands a very small chance of being directly hit by the tornado (much less than 15%). So, the question is, how do we frame the uncertainty information such that we get the appropriate response? This includes members of the "public at large" with different levels of vulnerability, exposure, and response times to the hazard as compared to an "average".
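To put a rough number on "much less than 15%": a back-of-envelope sketch, with a polygon and a damage-path footprint whose dimensions are entirely hypothetical, chosen only to show the order of magnitude:

```python
# Back-of-envelope: chance that a given point inside a tornado warning
# polygon takes a direct hit, assuming a tornado does occur. All
# dimensions are hypothetical and only illustrate the scale of the ratio.

polygon_area_sq_mi = 20 * 15   # a notional 20 x 15 mile warning polygon
path_length_mi = 10            # hypothetical damage path length
path_width_mi = 0.25           # hypothetical damage path width (~440 yd)

damage_area = path_length_mi * path_width_mi
p_hit = damage_area / polygon_area_sq_mi

print(f"Damage path area: {damage_area:.2f} sq mi")
print(f"P(direct hit | tornado in polygon): {p_hit:.1%}")  # about 0.8%
```

Even with generous assumptions the point probability comes out under 1%, which is why how the number is framed matters as much as the number itself.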
 
Greg,
Thanks for your response. Here are my comments...
Mike


These two statements are completely contradictory if one defines "the public at large" as being all people. Do you have a different definition?

Can you address the second part of my post?

Weather hazard information could be produced with very significant detail that contains information about what the forecaster is actually thinking, including uncertainties. Any highly detailed information can be aggregated into simpler and simpler formats to address various levels of user sophistication. So, if there are folks out there that prefer to know more than just a deterministic forecast, why deny that information to them? The key is providing an effective way to communicate uncertainty in forecasts, and right now we don't do a very good job of that. Some of the references you cited have come to similar conclusions. But does this mean we just throw in the towel and never advance?

I can easily agree with this statement, especially regarding tornadoes. Even in the most certain situations, each point within a tornado warning polygon only stands a very small chance of being directly hit by the tornado (much less than 15%). So, the question is, how do we frame the uncertainty information such that we get the appropriate response? This includes members of the "public at large" with different levels of vulnerability, exposure, and response times to the hazard as compared to an "average".

1. Public-at-large means "non-specialized users," sorry my definition was not clear. It is the role of the private sector to tailor products for specialized users. So, NWS products for specialized users, besides being outside the NWS's mission, would be redundant.

2. "that contains information about what the forecaster is actually thinking,"
My comment on that is the public does not care what the forecaster is thinking. In the late 1980's and early 1990's I worked with an industrial psychologist and we learned that the vast majority of meteorologists, like engineers, have intrinsic personalities and professional outlooks. That tends to make us want to give our "customers" what we think they should have rather than what they want. Based on the work I did at that time plus my day-to-day work with customers, I just don't see any demand "to know what the forecaster is thinking." From a practical standpoint, it would increase WFO workload with little to no benefit.

3. "But does this mean we just throw in the towel and never advance?" Of course not. There is plenty of room to improve the accuracy of weather forecasts which is where I believe the NWS should place its limited resources rather than spending this effort to crank our imperfect accuracy out in more and more complicated terms (as viewed from the public's point of view). Given the difficulty the public has with understanding daily PoP's, the idea they will understand probabilities of (rare) extreme events just doesn't add up to me. An example: The 2009 Christmas Eve blizzard in Oklahoma was a rare, and record, event. Given that the last blizzard of that magnitude in Oklahoma was in 1971 (and not even in the same part of Oklahoma), the idea that people will be able to reach back 30+ years and think, "Hmm, the probably on February 20, 1971, was 30% and today [December 23, 2009, in this example] is 40% means I should be ready for something extraordinary," just doesn't compute (at least to me).

You might reply, "We can educate them." If people don't understand rain PoPs after 40 years (with rain being far more frequent than blizzards and tornadoes), the chances of educating, and more importantly, calibrating the public (i.e., 15% is a very high tornado probability) are remote. It isn't worth the risk of confusion when we could put those resources into making the forecast and warning more accurate.

Very interesting exchange from two different points of view. Thanks again, Greg.

Mike
 
1. Public-at-large means "non-specialized users," sorry my definition was not clear. It is the role of the private sector to tailor products for specialized users. So, NWS products for specialized users, besides being outside the NWS's mission, would be redundant.
Interesting take on this. I would submit that every user has specific (or "specialized") and unique needs from weather forecast information, and those needs vary at different locations, times, and situations. Everyone's vulnerability to weather hazards is variable, and thus a specialized forecast might be required in each situation.

2. "that contains information about what the forecaster is actually thinking,"
My comment on that is the public does not care what the forecaster is thinking. In the late 1980's and early 1990's I worked with an industrial psychologist and we learned that the vast majority of meteorologists, like engineers, have intrinsic personalities and professional outlooks. That tends to make us want to give our "customers" what we think they should have rather than what they want.
Twenty years ago, when you did that study, there was no easy online access to weather information like there is today. Back then, it was rare that anyone outside of meteorological circles would demand access to live radar data, forecast discussions (especially SPC outlooks) and other information. That has changed today, and I suspect this will continue to evolve.

As for the "what they want" versus the "what we think they want" argument, I quote Henry Ford: “If I had asked people what they wanted, they would have said faster horses.” I use this quote to foster discussions on how we can effectively balance requirements and innovation. Henry Ford may have taken the opposite extreme, but the middle ground is where we should be thinking. For example, you might find in a customer survey that the majority of respondents prefer to have perfectly accurate forecasts. This is clearly an impossibility (especially in our lifetimes), so further research with your customers might reveal that if they know they can't get perfection, that forecasts couched in uncertainty (and not necessarily probability numbers) might be more useful to them versus deterministic forecasts which can sometimes be wrong.

3. "But does this mean we just throw in the towel and never advance?" Of course not. There is plenty of room to improve the accuracy of weather forecasts which is where I believe the NWS should place its limited resources rather than spending this effort to crank our imperfect accuracy out in more and more complicated terms (as viewed from the public's point of view).
I agree there is room to improve forecast accuracy. But what does that really mean? How does one measure forecast goodness? See Murphy's article for some answers on that. As accuracy improves, uncertainty in forecasts decreases. A similar argument can be made with lead time. We can arbitrarily increase lead time today if we wanted to, but at what cost? More uncertainty. In the future, new technologies and concepts of operations and services (e.g., Warn On Forecast) will reduce uncertainty at longer-term forecast periods, not really "increase lead times".
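As one concrete way to score "forecast goodness" for probabilistic forecasts, the Brier score is a standard accuracy measure (accuracy being one of several attributes Murphy discusses). A minimal sketch with made-up verification data:

```python
# A minimal sketch of the Brier score: the mean squared difference
# between forecast probabilities and 0/1 outcomes. Lower is better.
# The forecasts and outcomes below are made up for illustration.

def brier_score(probs, outcomes):
    """probs: forecast probabilities in [0, 1];
    outcomes: 1 if the event occurred, else 0."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

forecast_probs = [0.9, 0.7, 0.2, 0.1]  # hypothetical PoP-style forecasts
observed       = [1,   1,   0,   0]    # hypothetical verification

print(f"Brier score: {brier_score(forecast_probs, observed):.4f}")  # 0.0375
# A constant 50% forecast scores 0.25 on any outcomes -- a useful baseline.
```

Sharper, well-calibrated probabilities drive the score toward zero, which is one way to make "accuracy" measurable for probabilistic products.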

An example: The 2009 Christmas Eve blizzard in Oklahoma was a rare, and record, event. Given that the last blizzard of that magnitude in Oklahoma was in 1971 (and not even in the same part of Oklahoma), the idea that people will be able to reach back 30+ years and think, "Hmm, the probability on February 20, 1971, was 30% and today [December 23, 2009, in this example] is 40%, so I should be ready for something extraordinary," just doesn't compute (at least to me).
Actually, some social scientists feel that comparisons to historic events might be a worthy method of helping folks understand present threats, though not necessarily using probability numbers. The uncertainty can be couched in different ways, for example: "Remember the 1971 storm? There is a high likelihood that this storm will be just as bad." It provides a context for the threat, and uncertainty is expressed.

You might reply, "We can educate them." If people don't understand rain PoPs after 40 years (with rain being far more frequent than blizzards and tornadoes), the chances of educating, and more importantly, calibrating the public (i.e., 15% is a very high tornado probability) are remote. It isn't worth the risk of confusion when we could put those resources into making the forecast and warning more accurate.
I think this argument could be extended to any current user of uncertainty information, such as your probabilistic winter weather forecast clients. Somehow they understand your numbers. Why?

Very interesting exchange from two different points of view. Thanks again, Greg.
Agreed!
 
Mike,

I am almost 100% positive there is an article in BAMS within the last 6 months about just this issue. I don't have the hard copy of it with me in my office, but I will certainly look through them when I get home to find it and show you.
 
But a probabilistic forecast showing the range of forecast values and associated probabilities could be much more useful (e.g., 0-1" 10%, 1-2" 20%, 2-3" 30%, 3-4" 40%, 4-5" 30%, 5-6" 20%, and so on).

On the topic of probabilistic forecasts for winter weather, DTX is doing something similar...

http://www.crh.noaa.gov/dtx/ProbSnow.php

Admittedly, this would be better to look at when there is actually a chance of snow in the forecast, but this does provide some useful information...assuming the users know how to respond to the percentages.
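For readers unfamiliar with pages like this, percentages of that kind are commonly computed as exceedance probabilities over an ensemble of model runs. The sketch below shows the general technique only; it is not DTX's documented method, and the member totals are invented:

```python
# A sketch of threshold-exceedance percentages derived from an ensemble
# of model runs. NOT DTX's documented method; member snowfall totals
# (inches) below are invented for illustration.

member_totals = [2.1, 3.4, 3.8, 4.6, 5.0, 5.9, 6.3, 7.2, 8.0, 9.5]

def exceedance_prob(totals, threshold):
    """Fraction of ensemble members at or above a snowfall threshold."""
    return sum(t >= threshold for t in totals) / len(totals)

for threshold in (2, 4, 6, 8):
    print(f'P(snow >= {threshold}"): {exceedance_prob(member_totals, threshold):.0%}')
```

Read down the thresholds and you get the same distribution-style picture discussed earlier in the thread, framed as "chance of at least X inches."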
 