Your interpretation of 'probability of precipitation'

Jeff Duda
Site owner, PhD
The Sacramento NWS office recently tweeted an image showing a probabilistic forecast of precipitation in their area. You can see it below. Understand that under "likely range", the 71 inches should be 0.71 inches. I'm sure that was a typo.

[Image: NWS Sacramento probabilistic precipitation forecast graphic]

There is ongoing research into how the general public interprets probabilistic weather forecasts. The term "probability of precipitation" (PoP) has been used for decades, but apparently many people still fail to grasp its meaning. That is understandable in some ways, as different entities within the weather enterprise (e.g., NWS forecast offices, broadcast media, and private companies) all use similar, but slightly different, definitions. NWS STO's product shown above is probably an attempt to improve the communication of forecast certainty by offering more detailed probabilistic forecasts. But is this a good way of doing it? I'd like your response.
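As background (and to be clear, I don't know exactly how STO generated these numbers), one common way to produce threshold probabilities like these is to count the fraction of ensemble members meeting or exceeding each amount. A minimal sketch with made-up member values:

import numpy as np

# Hypothetical 24-h precipitation totals (inches) from a 20-member ensemble.
# These values are invented purely for illustration.
members = np.array([0.00, 0.00, 0.05, 0.18, 0.26, 0.31, 0.38, 0.45, 0.52,
                    0.55, 0.60, 0.63, 0.68, 0.72, 0.78, 0.85, 0.90, 0.98,
                    1.05, 1.20])

for threshold in (0.01, 0.25, 0.50, 1.00):
    # exceedance probability = fraction of members at or above the threshold
    prob = np.mean(members >= threshold)
    print(f'Chance of at least {threshold:.2f}": {prob:.0%}')

Counting members directly is one reason raw guidance can carry odd-looking precision like 76%; whether to round that before presenting it is part of what I'm asking.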

I would like to open a discussion about both this graphic and the way Stormtrack users interpret probabilistic forecasts more generally. Please reply to this thread with your interpretation of the above graphic and what you think it means. Please elaborate by disclosing how you feel about probabilistic forecasts. If you don't have much experience looking at generic probabilistic forecasts, you can start with just PoPs. The SPC convective outlooks, CPC medium-range to seasonal temperature and precipitation outlooks, and WPC probabilistic quantitative precipitation forecasts (PQPF) are also helpful examples of probabilistic forecasts.

ADD: please comment on the following question. Does this way of communicating the forecast probability of rain add any value for you when making a decision that requires some knowledge of whether it might rain? Said differently, which forecast would you prefer: the one given in the image, or one that just says there is a 70% chance of rain and that, if it rains at all, the total will be 0.53"?

I am not involved in any way with NOAA/NWS projects studying human interpretation of probabilistic forecasts, but I am very curious to hear what others think about them.

I thank you in advance for your responses.
 
A) for the LOVE OF GOD, DO NOT PREDICT PRECIPITATION TO THE NEAREST HUNDREDTH OF AN INCH!

Now that that's off my chest :)

Great concept. But B) they need to round the percentages off to something slightly easier to "interpret"(?) like 75% / 50% / 5% in this case.
 
Well one, when you forget your decimal, it shows up as 71" :)

And two - we have zero skill at predicting .71" versus .70" versus .69", but when you use two decimal places, the precision implies an accuracy we don't have. It's like saying tomorrow we're getting 8.4" of snow... Bad call.
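If you wanted to enforce that in a product pipeline, it's a couple of lines. The 5% / tenth-of-an-inch granularity here is just one plausible choice (matching the 75/50/5 suggestion above), not any official standard:

def present(pop_pct, amount_in):
    # Round to a precision the forecast skill can plausibly support:
    # nearest 5% for PoP, nearest tenth of an inch for amounts.
    # These cutoffs are illustrative, not an official standard.
    pop_rounded = 5 * round(pop_pct / 5)
    amt_rounded = round(amount_in, 1)
    return f'{pop_rounded}% chance of rain, around {amt_rounded}" if it does'

print(present(76, 0.71))  # -> 75% chance of rain, around 0.7" if it does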
 
What's your reasoning for this opinion?
I agree with rdale, but for a slightly different reason. Can one really accurately predict precipitation amounts to the nearest hundredth of an inch? What's the point when the overall range of precipitation is 0.26"-1.01"? Maybe you have an answer for this, Jeff. Also, how do you justify 76% over 75%? Or are these numbers computer-generated?

I interpret a 75% PoP as meaning that such a forecast would verify with rain 3 out of 4 times. It's really hard to put a strict definition on the probabilities, but is there really a point? Until we can accurately verify these forecasts without some large range of probabilities and quantities, I don't see the point in saying there will be a 25.25% chance of 1.0234" of rain tomorrow at some area near you at some time within this range, when it likely won't be accurate to more than one significant figure. Adding extra precision gives the impression that the data is more accurate than it really is. Maybe you can prove me wrong on this, but I'm just trying to add some insight to your question.
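For what it's worth, the "3 out of 4 times" reading is testable: group past forecasts by the PoP that was issued and check how often rain actually occurred. A minimal sketch with made-up numbers (not real verification data):

import numpy as np

# Hypothetical record of issued PoPs and whether measurable rain (>= 0.01")
# was observed. Invented data, purely to show the bookkeeping.
pop    = np.array([0.75, 0.75, 0.75, 0.75, 0.30, 0.30, 0.30, 0.90, 0.90, 0.10])
rained = np.array([1,    1,    0,    1,    0,    1,    0,    1,    1,    0])

for p in np.unique(pop):
    sel = pop == p
    print(f'PoP {p:.0%}: issued {sel.sum()} times, '
          f'rain observed {rained[sel].mean():.0%} of the time')

# Brier score: mean squared error of the probabilities (lower is better)
print('Brier score:', np.mean((pop - rained) ** 2))

If the 75% bin verifies near 75% over a long record, those forecasts are well calibrated at that level, regardless of how any individual day turned out.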
 
I know how the NWS wants you to interpret their probabilities, so I won't mess up your sampling by giving my answer. :) Though I'll maybe critique the graphic a bit.

In addition to what Rob said and what Jimmy Correia said on Twitter (essentially, "What part of the forecast does 'Forecast Confidence' refer to?"), I have a couple of other items, one of which is really nitpicky. When they say "Chance of 0.25 inches," what they really mean is "Chance of at least 0.25 inches." My guess is that's how most people will interpret it, but it's good to be explicit about these things. From a strict statistical point of view, the chance of receiving exactly 0.25" (as opposed to 0.25000000001" or 0.25000000002", etc.) is 0. However, because we can only measure precipitation in 0.01" increments, among other reasons, this is different from the chance of measuring 0.25 inches (probably a small but finite number).
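To make that distinction concrete, here's a toy sketch. The gamma distribution and its parameters are arbitrary choices on my part, not anything derived from the graphic:

from scipy import stats

# Toy model: pretend the 24-h rain total (inches) follows a gamma
# distribution. Shape and scale are arbitrary, for illustration only.
dist = stats.gamma(a=2.0, scale=0.3)

# For a continuous variable, P(X == 0.25) is exactly 0. But a gauge reads
# in 0.01" increments, so any true amount in [0.245, 0.255) reports as
# 0.25" -- a small but finite probability:
p_measured = dist.cdf(0.255) - dist.cdf(0.245)
print(f'P(gauge reads 0.25") ~ {p_measured:.4f}')

# The graphic's "Chance of 0.25 inches" is better read as an exceedance:
print(f'P(at least 0.25") = {dist.sf(0.25):.2f}')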

Also, in addition to the "High-end scenario", I'd put a "Low-end scenario." It's maybe not crucial in this case, but I could see it being useful in a higher-impact event.
 
To those who have responded so far:

Does the forecast graphic add any value over a plain, old-fashioned forecast (see the added comment in the OP for reference)?
Do you think the information given is detrimental to your ability to make a decision that depends on precipitation?
Do you think the typos and missed theory in the graphic are just too much to overcome, rendering the graphic worthless even when judged on its intent?
 
1) For the public - no. Is there any impact difference between 1/4" and 1"? Not in my hood. If there is a difference in theirs, then spell it out.
2) Detrimental? No. If 1/4" is a threshold for you (hosting a concert with rain insurance) then you know it's likely you will cash in that policy.
3) Depends on who the user is and what #1/#2 mean to that user.
 
I just prefer seeing a percent chance of precip. If you see a forecast of a 90-100 percent chance of precip, you know that they aren't talking about isolated supercells in the area. When I see rainfall totals of something like .63", I really don't know what to think.
 
Hey Jeff,

To begin with, I interpret, say, a PoP of 70% as meaning that on 7 days out of 10 it will rain at any given location within the forecast region, and on 3 it will not. So it follows from the above graphic that it will rain at least 0.25" on 7.6 out of 10 days given the current environment.

I am a little surprised that the forecast office would use such precise measurements and PoPs. It's a strange mix of being "uncertain of the certain," i.e., "we're not certain it will rain, but if it does, it should rain precisely this much." And while this may be a misinterpretation of the intended message, I'm pretty sure this is how many lay people would read it. It also appears that, with confidence increasing toward lower amounts, the PoP should be almost 100% for 0.01" of rain on this day (which could be important info for the public). In any case, considering the very localized nature of precip (especially convective precip), this seems a little silly.

Maybe this is a theme for this specific region, which is very important agriculturally, and where rain and/or drought will therefore have a huge impact on production? Maybe here people ARE more concerned about measurable amounts?
In any event, elsewhere, and for a public product, I think such precision is irrelevant. Most folks just wanna know if they should wear their gumboots and pack an umbrella or not. Not to mention, of course, that the more precise your forecast, the more likely it won't verify :)

The way that I'm used to reading forecasts (from Environment Canada) is that the POP is given when the chance of precip is between 30% and 70%. Above 80%, it is worded in such a way that precipitation is expected, i.e., "Becoming cloudy this afternoon, with a few showers and thunderstorms late this afternoon." Precipitation amounts are only included when they are "appreciable" and "likely" (AOA 80% chance), beginning at 5 mm: "Today, showers. Amount 5-10mm."
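If it helps, that convention boils down to something like the sketch below. The thresholds are my paraphrase of the practice described above, not an official EC spec:

def ec_style_wording(pop_pct, amount_mm=0):
    # Rough paraphrase of the Environment Canada convention described
    # above -- thresholds are my reading of it, not an official spec.
    if pop_pct < 30:
        return "Dry wording; no POP mentioned."
    if pop_pct <= 70:
        return f"{pop_pct} percent chance of showers."
    # AOA ~80%: precip worded as expected; amounts included if >= 5 mm
    wording = "Today, showers."
    if amount_mm >= 5:
        wording += f" Amount {amount_mm} mm."
    return wording

print(ec_style_wording(40))               # 40 percent chance of showers.
print(ec_style_wording(90, amount_mm=8))  # Today, showers. Amount 8 mm.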
 
I do think it adds value, since the PoP forecasts now commonly used just tell you the probability of measurable precipitation (i.e., 0.01" or greater). Knowing the probability of a quarter inch, half inch, or inch tells you more in terms of likely impact on outdoor events or the need to water your plants. So I think the general approach makes sense and adds value, though I agree with several of the comments above, such as the usefulness of rounding to avoid conveying more precision than is realistic in a forecast.
 
Correctly used, for a location, a 70% chance of precip means that on similar days to this, when 70% POP was given, it will rain on 7 out of 10 days.
Probability is a good way to express uncertainty, but the public must be on board. Now, people *seem* to understand the concept of odds when placing bets, for example on horse racing. But it's a hard slog to convince some people that this is the best method - they think it's sitting on the fence. Part of this mindset exists because, in the past, forecasts were often given as certainties rather than as the most likely outcome.
 
Correctly used, for a location, a 70% chance of precip means that on similar days to this, when 70% POP was given, it will rain on 7 out of 10 days.

Ehh, your version of "correctly used" depends on the person issuing the forecast and the context. If that definition were global, then at least one time out of the 20 that SPC issues a 5% SWOMCD, we should get a watch. I've _never ever ever_ seen a watch issued after a 5% discussion.
 
Correctly used, for a location, a 70% chance of precip means that on similar days to this, when 70% POP was given, it will rain on 7 out of 10 days.

I have to agree with Rob's response to this. I think this definition is insufficient because the word "similar" is totally ambiguous. What defines a day similar to the forecast day? "Similar" implies there are differences, and NWP models today are more than good enough to distinguish between days that may look alike on, say, the synoptic scale but still differ there and at smaller scales. In such cases, your model forecast system is almost certainly going to output different PoPs.

Probability is a good way to express uncertainty

This is the definition I use. I think it's the least ambiguous definition.

they think it's sitting on the fence. Part of this mindset exists because, in the past, forecasts were often given as certainties rather than as the most likely outcome.

I think you're hitting the nail on the head with this statement. For some reason, it seems that people want to hear a forecast of absolute certainty, even if it's wrong, rather than probabilities, which give the end user much more freedom to make their own decisions. I think broadcasters especially, but forecasters in general, need to get away from that type of attitude or language regarding forecasts. For example, "It will rain"/"It will be dry" on a day with a PoP of 60% is not good language.
 
Ehh, your version of "correctly used" depends on the person issuing the forecast and the context. If that definition were global, then at least one time out of the 20 that SPC issues a 5% SWOMCD, we should get a watch. I've _never ever ever_ seen a watch issued after a 5% discussion.

For precipitation at a location, I think this is pretty much what is typically understood. You're citing a risk of severe storms rather than for precipitation. I'm not sure you're comparing eggs with eggs here.
 