"Too Much" Lead Time?

  • Thread starter: Mike Smith
I think you can get the current "threshold" by looking at the FAR. 25% of TOR warnings verify, so the "simple version" would be to say that if a forecaster sees a 25% chance of a severe storm having a tornado, he issues a warning.

Yeah, I get that. But I have to wonder how a sample of individual forecasters' explicit probability forecasts would actually compare to 1 minus the FAR.
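
To make that "1 minus the FAR" arithmetic concrete, here is a minimal sketch; the 75% figure is the FAR implied by the post above, not a measured verification dataset:

```python
# Back-of-envelope: if 25% of tornado warnings verify, the FAR is 0.75,
# and the average probability implied by the decision to warn is ~1 - FAR.
far = 0.75                 # fraction of warnings with no tornado (from above)
implied_prob = 1.0 - far   # average P(tornado | warning) implied by outcomes
print(f"implied warning probability: {implied_prob:.2f}")  # 0.25
```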
 
Just curious, then. Based on your research, what is the typical probability threshold for a warning under the current system? I would guess it is subjective to the individual forecaster, but wouldn't this same subjectivity carry over to a probability-based system? Just labeling a scheme "probabilistic" vs. "deterministic" doesn't automatically make it superior. If a probability distribution is involved, do we have any idea at all what the confidence interval is?
The simple answer: it varies, and that's what is concerning. When you see a Tornado Warning today, you have no idea what that forecaster's threshold was. And when you don't see a Tornado Warning on a storm, you have no idea if there is a forecaster with a much higher mental threshold for uncertainty.

So, we need to approach the problem from multiple angles. Obviously there is the societal angle which we've been discussing at length here. But there is also the scientific angle. In order for probabilistic forecasts to be meaningful, they have to be reliable. And not the dictionary definition of "reliable" but the statistical definition:

http://en.wikipedia.org/wiki/Brier_score#Reliability

and forecasters need to be calibrated - meaning that their forecast reliability should be known to them (self-verification is essential); as they become calibrated, they become more confident.
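
For the curious, here is a minimal sketch of that statistical sense of reliability, via the reliability term of the Brier score decomposition; the sample probabilities and outcomes are invented for illustration, not real verification data:

```python
import numpy as np

def reliability_term(forecasts, outcomes, bins=10):
    """Reliability component of the Brier score (Murphy decomposition):
    (1/N) * sum over bins of n_k * (mean forecast in bin - observed freq in bin)^2.
    Perfectly calibrated forecasts score 0."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)  # 1 = event occurred, 0 = it didn't
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(forecasts, edges) - 1, 0, bins - 1)
    total = 0.0
    for k in range(bins):
        in_bin = idx == k
        n_k = in_bin.sum()
        if n_k:
            total += n_k * (forecasts[in_bin].mean() - outcomes[in_bin].mean()) ** 2
    return total / len(forecasts)

# A forecaster's hypothetical past probability calls vs. what actually happened:
probs    = [0.1, 0.1, 0.3, 0.3, 0.3, 0.7, 0.7, 0.9]
observed = [0,   0,   0,   1,   0,   1,   1,   1]
print(f"reliability term: {reliability_term(probs, observed):.3f}")
```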

But how would we start asking forecasters to think in terms of numbers? There has to be a basis for this. For starters, in the HWT, we simply asked the forecasters what their mental threshold for issuing a warning was, and to consider "normalizing that" to a reference threshold. We then asked them to consider warnings below that threshold as "pre-warnings" both in space and time, the latter relating to longer lead time.

But this is still insufficient, and there needs to be guidance, like there is today for probabilistic precipitation forecasts, temperature forecasts, etc. This guidance can be a mix of statistical climatological information (coupled with other sensor parameters, like radar) and probabilistic numerical guidance from ensemble models (e.g., the WoF models).
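
As a rough illustration of the ensemble side of that guidance, here is a sketch of turning hypothetical ensemble member fields into a gridded probability. The member fields and the exceedance threshold are invented for this sketch, not actual WoF output:

```python
import numpy as np

rng = np.random.default_rng(42)
n_members, ny, nx = 18, 50, 50
# Pretend each member forecasts a low-level rotation intensity on a small grid.
members = rng.gamma(shape=2.0, scale=0.004, size=(n_members, ny, nx))

threshold = 0.01  # hypothetical "tornadic rotation" threshold for this sketch
# Gridded probability = fraction of members exceeding the threshold at each point.
prob_grid = (members > threshold).mean(axis=0)
print(f"max gridded probability: {prob_grid.max():.2f}")
```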

BTW - the usual disclaimer: this concept is not a "replacement" for the current warning system, but an addition of information and detail. Users aren't going to start seeing their warnings replaced with numbers.
 
As of the current time, 12 tornadoes have been confirmed on December 31 in the St. Louis CWA. I haven't gone back and counted the warnings, but I am quite sure this works out to be a lower FAR than the 75 percent FAR cited above. The tornadoes occurred in a wide variety of locations, so many of them were certainly in different warning polygons. Admittedly, some of the polygons were quite large because of the linear nature of the storms in the STL area. So the bottom line is that the FAR was probably lower than usual in the STL area on December 31. However, in addition to the points about sirens and weather radio alarms that I made in my earlier post, I think the point made above, that it seems like a false alarm at any house that did not get hit, is a very good one. I am sure, for example, that many people in St. Louis city and county whose houses were not hit experienced the warnings as a false alarm. Thus, people may say it was a false alarm when, from the standpoint of a tornado occurring within the warning polygon, it was not.
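
To see why that perception is almost inevitable even when a warning verifies, consider a quick back-of-envelope; the path and polygon sizes below are hypothetical, not the actual STL numbers:

```python
# Even a verified polygon warning "misses" most addresses inside it:
path_length_mi, path_width_mi = 10.0, 0.25   # hypothetical damage path
polygon_area_sqmi = 400.0                    # hypothetical warning polygon

hit_fraction = (path_length_mi * path_width_mi) / polygon_area_sqmi
print(f"fraction of warned area actually struck: {hit_fraction:.3%}")  # ~0.6%
```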
 
The simple answer: it varies, and that's what is concerning. When you see a Tornado Warning today, you have no idea what that forecaster's threshold was. And when you don't see a Tornado Warning on a storm, you have no idea if there is a forecaster with a much higher mental threshold for uncertainty.

So, we need to approach the problem from multiple angles.

Is there a "problem" in the real world? We have already cut the tornado death rate by 95%! The death rate will never be zero (e.g., in Greensburg, the tornado was so strong that 8 people were killed in shelter, including in their basements). It is hard for me to believe that probabilistic warnings, and the education they will require, will save lives in significant numbers when compared to other advancements, e.g., directly improving accuracy within the current warning system and more/better data.

Note that I keep referring to "finite resources" in some of these posts.

Greg, it occurs to me you probably aren't aware that the NWS has such severe bandwidth problems that they cannot add any significant data to their external user feeds. We cannot get anywhere near the full suite of TDWR data (ask Tim Crum) and we will not get the full suite of D-P -88D products. Where is the bandwidth for all of these probabilistic forecasts going to come from?

In the case of the STL NYE tornadoes, the -88D hiccuped twice and failed to provide data over the strongest tornado of the day. Fortunately, we had the 5 minute interval TDWR data which clearly showed the strengthening tornado. With that data we were able to instantly serve our clients. A TV meteorologist could have instantly informed his or her viewers of the danger.

In a tornado situation, I'll take more data over forecasts any time -- and we cannot get more TDWR data, even though it is available.
 
Where is the bandwidth for all of these probabilistic forecasts going to come from?

In general, I would think a WoF grid would be much, much smaller than a DP or TDWR product. Invalid comparison.

A TV meteorologist could have instantly informed his or her viewers of the danger.

I would venture to say that a very small percentage of TV meteorologists use TDWR data. Much smaller than the share of the "TV weathercasting" field that would benefit from the availability of "confidence plots."
 
Rob,

The WoF grid, supposedly produced every 5 minutes, by every WFO, at half hour intervals out to six hours (one proposal I reviewed earlier this week) for prob. TOR, SIG TOR, hail, SIG hail, SVR wind and SIG SVR wind for -- literally -- hundreds of geogridded locations in each CWA is most certainly a non-trivial amount of bandwidth.
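
For what it's worth, here is a rough sizing of that schedule; the grid size and bytes-per-value are assumptions, and this ignores compression and any metadata or graphics overhead:

```python
# Raw (uncompressed) volume of one PHI/WoF product cycle per WFO.
hazards       = 6     # TOR, SIG TOR, hail, SIG hail, SVR wind, SIG SVR wind
time_steps    = 12    # half-hour intervals out to six hours
grid_points   = 500   # "hundreds of geogridded locations" per CWA (assumed)
bytes_per_val = 4     # e.g., one 32-bit float per probability (assumed)

per_cycle = hazards * time_steps * grid_points * bytes_per_val
per_hour  = per_cycle * 12      # a new cycle every 5 minutes
print(f"{per_cycle/1024:.0f} KiB per cycle, {per_hour/1024:.0f} KiB/hour per WFO")
```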

If most TV mets are not using TDWR they are making a big mistake. Its velocity fields are of far better quality than the -88D's. As previously mentioned, here is the Lambert TDWR when the WSR-88D hiccuped on New Year's Eve:
[Image: St. Louis Lambert TDWR display, New Year's Eve]

The -88D provided no data on the southern (Fenton-Sunset Hills, EF-3) tornado and didn't show the northern (Ballwin, EF-1) tornado at all! There is more on my blog about this situation at: http://meteorologicalmusings.blogspot.com/2010/12/how-do-we-track-tornadoes.html

These tornadoes had a forward speed of 50 mph. So, the lack of data on the Fenton tornado meant the tornado would have traveled about 7 miles (2 missed volume scans x 4.2 min at 50 mph) across a densely populated area in Doppler radar "darkness" absent the TDWR. If I had to choose (and, given the severe bandwidth problems, we may be faced with this choice), I'd take more TDWR (and D-P -88D) over WoF any time.
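
For reference, the arithmetic behind that radar-darkness distance, using the numbers above:

```python
# Distance traveled during the data gap: two missed ~4.2-minute volume scans
# at 50 mph forward speed (figures from this post).
scan_min, missed, speed_mph = 4.2, 2, 50.0
gap_mi = speed_mph * (missed * scan_min / 60.0)
print(f"travel during data gap: {gap_mi:.1f} miles")  # ~7 miles
```
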

That said, my objections to WoF lessen significantly if the bandwidth problems are solved and all of the data is available on a timely basis.

Mike
 
If most TV mets are not using TDWR they are making a big mistake.

I'll even back up a step... Most TV weatherpeople probably have never heard of TDWR, let alone use it. And I'd say a good 95% will never study up enough on DP to be able to utilize that either, and the way the career field is moving, that number will only get higher with time, not lower. That's why I think the "confidence level" will be 1) great for TV users and 2) really, really great for those who never watch TV news (a group that will only grow as TV viewership declines).
 
I'll even back up a step... Most TV weatherpeople probably have never heard of TDWR, let alone use it. And I'd say a good 95% will never study up enough on DP to be able to utilize that either, and the way the career field is moving, that number will only get higher with time, not lower. That's why I think the "confidence level" will be 1) great for TV users and 2) really, really great for those who never watch TV news (a group that will only grow as TV viewership declines).

What way is that career field moving? I would've actually thought that percentage should decrease with time since more TV mets are degreed meteorologists than in years past.
 
The field as a whole... Whether or not the ratio of "degreed" to "non-degreed / certified" changes, the number of openings will certainly decrease as pay levels decrease and the media platform changes. In addition, I think anyone in the industry today would say that they are being given more responsibilities and less time to handle them, with their webpage and FB and Twitter and school visits and you name it. That fourth met who could help during sevwx no longer exists, or is in the field getting video instead of looking at a mesoscale analysis and investigating L2 data, so it's a solo job of getting warnings on the air and panning the sevwx map. Asking the viewers to stand by for a few minutes while he checks KDP is just not an option.

I'd be surprised if vendors even carry that stuff. I remember a few years back when WSI developed Titan (yuck) and only provided CR data (not even BREF) because most TV mets didn't need BREF or even know the difference. They just wanted the attributes table.
 
Is there a "problem" in the real world? We have already cut the tornado death rate by 95%!
Admirable! But tornadoes (and other weather hazards) don't only cause deaths. They also cause injuries, lost productivity, property loss, mental-health impacts, etc. These statistics are rarely, if ever, measured.

Greg, it occurs to me you probably aren't aware that the NWS has such severe bandwidth problems
I am very aware of today's bandwidth concerns. Recall my earlier comment:

me said:
We make a point in our various workshops, when discussing the future of government warning and product services, of urging participants not to constrain the discussion with current technology and policy limitations. Those are evolving, or could be made to evolve in response to any proposed new capabilities!

The WoF grid, supposedly produced every 5 minutes, by every WFO, at half hour intervals out to six hours (one proposal I reviewed earlier this week) for prob. TOR, SIG TOR, hail, SIG hail, SVR wind and SIG SVR wind for -- literally -- hundreds of geogridded locations in each CWA is most certainly a non-trivial amount of bandwidth.
Since hazardous convective weather usually covers smaller areas, the grids will be sparse and easier to compress. For longer time scales, the grid resolution will probably be coarser as well. But the same caveat as above applies - don't expect these grids any time real soon! Perhaps 10 years?
 
The WoF grid, supposedly produced every 5 minutes, by every WFO, at half hour intervals out to six hours (one proposal I reviewed earlier this week) for prob. TOR, SIG TOR, hail, SIG hail, SVR wind and SIG SVR wind for -- literally -- hundreds of geogridded locations in each CWA is most certainly a non-trivial amount of bandwidth.

Even in an active weather situation, these grids will be made up of 0s at most grid points. There is a whole branch of computational mathematics built around such "sparse matrices," and they compress incredibly well.
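
Here is a quick demonstration of that compression claim on a mostly-zero grid; the grid size and the 2% "active" fraction are invented for illustration:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros((400, 400), dtype=np.float32)     # probability grid, mostly 0s
active = rng.random((400, 400)) < 0.02            # ~2% of points carry a threat
grid[active] = rng.random(active.sum()).astype(np.float32)

raw = grid.tobytes()
packed = zlib.compress(raw, level=9)
print(f"raw: {len(raw)/1024:.0f} KiB, compressed: {len(packed)/1024:.0f} KiB "
      f"({len(raw)/len(packed):.0f}x smaller)")
```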

I'll even back up a step... Most TV weatherpeople probably have never heard of TDWR, let alone use it.

There are 45 TDWRs nationwide (as opposed to roughly 160 NEXRADs), and given that the graphics vendors have made it widely available only within the last couple of years, this isn't a big surprise. That said, I know quite a few who know it exists and either use it or want to be able to but can't because of vendor, hardware, or software limitations. We've used TDWR on a couple of occasions here to much viewer benefit.

In any event, just because "most" won't use it doesn't mean that "all" won't use it or that others won't come around. However, by not distributing "it" (whether "it" be TDWR data, dual-pol moments or derived products, or probabilistic hazard grids), you're guaranteeing no one will use "it". If you distribute "it", you're at least opening the door for those who want to use "it" to do so.

Asking the viewers to stand by for a few minutes while he checks KDP is just not an option.

We said the same thing about velocities 15+ years ago, but now, they're commonplace on TV stations from the smallest local shop all the way to TWC. The bottom line — subject to availability — is this: The products that provide value to the viewer or to the meteorologist will get shown on TV, whether they're reflectivity, velocity, KDP, confidence plots, or something we haven't even come up with yet.
 
I'm saying that DP is probably never going to be something that you switch to "live" and interpret in real-time on the air... It's much easier for that extra met to be reading and analyzing it in the background, then to switch over to him. Those extra mets are becoming few and far between.

I just can't envision a day when you'll just switch to a DP product and start showing the tornado debris signature without spending any time looking at it first.
 
Even in an active weather situation, these grids will be made up of 0s at most grid points.

Nate, it is the extreme event I worry about the most in this discussion. The examples we get from Greg's group (see Weatherwise article cited above) are always of an isolated supercell.

Here is a real world extreme event: the KS-MO-KY derecho of May 8, 2009, which, by itself, caused more than 400 SVR reports, more than a dozen TORs, a number of deaths, multiple toppled broadcast towers, and winds measured above 100 mph at several locations. It occurred before dawn in Kansas. As you view the image below, note that thunderstorms are firing behind the derecho along the cold front:
[Image: ICT WSR-88D reflectivity at 1032Z, May 8, 2009]

The meteorologist at KOAM TV (NBC) in Pittsburg, KS (who is probably there by himself/herself at that time of day) is supposed to interpret thousands of prob of TOR, SIG TOR, SVR WIND, SIG SVR WIND, HAIL, SIG SVR HAIL at half-hour intervals for Pittsburg, Joplin, Chanute, Parsons, Coffeyville, Independence, Webb City, etc., etc., etc.?

I suspect Greg will jump in and say, they can be plotted on a map. They can! But, given the fact that there are supercells (with tornadoes) ahead of the derecho, the derecho itself, and more thunderstorms firing behind the derecho, that map will just be hash.

Given that Chanute, KS (CNU) is in the path of two hook echoes and later received 80+ mph winds, isn't the smarter message simply "take cover!"?
 
The meteorologist at KOAM TV (NBC) in Pittsburg, KS (who is probably there by himself/herself at that time of day) is supposed to interpret thousands of prob of TOR, SIG TOR, SVR WIND, SIG SVR WIND, HAIL, SIG SVR HAIL at half-hour intervals for Pittsburg, Joplin, Chanute, Parsons, Coffeyville, Independence, Webb City, etc., etc., etc.?

If all the met is looking at is BR1, he's not able to determine where the highest threats are anyway. Having WoF point that out should be welcomed.

I suspect Greg will jump in and say, they can be plotted on a map. They can! But, given the fact that there are supercells (with tornadoes) ahead of the derecho, the derecho itself, and more thunderstorms firing behind the derecho, that map will just be hash.

That's why we have a private sector component to weather, and it's also the reason I tell everyone going to school for weather these days to take as many programming courses as they can. While you are stumped on options here, I know that several on this list are coming up with ways to display the data in a format that TV weathercasters can easily pull up, that viewers can easily understand, and that smartphone devices could tap into. At this point I realize you can't make that vision leap, so you'll have to trust us (I imagine if you go to another private weather vendor and ask them to show you their ideas, they might be reluctant ;) )
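
One illustrative shape such a display could take: collapse the full grid to a short, ranked "what matters near you" feed that a graphics system or phone app could render. The cities and probabilities below are made up for the sketch:

```python
import json

# Hypothetical per-city peak PHI probabilities pulled from the gridded product.
phi_at_city = {
    "Chanute":     {"TOR": 0.45, "SVR WIND": 0.70},
    "Pittsburg":   {"TOR": 0.10, "SVR WIND": 0.55},
    "Coffeyville": {"TOR": 0.05, "SVR WIND": 0.30},
}

# For each city, keep only its single worst hazard, then rank cities by it.
feed = sorted(
    ((city, max(probs, key=probs.get), max(probs.values()))
     for city, probs in phi_at_city.items()),
    key=lambda item: item[2], reverse=True,
)
print(json.dumps([{"city": c, "hazard": h, "prob": p} for c, h, p in feed]))
```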
 
I just can't envision a day when you'll just switch to a DP product and start showing the tornado debris signature without spending any time looking at it first.

In 1990, nobody could envision just switching to a velocity product and showing velocity signatures without spending time looking at it first, either, but it happens with regularity these days — especially in smaller markets with limited staffs.

Nate, it is the extreme event I worry about the most in this discussion.

...snip...

The meteorologist at KOAM TV (NBC) in Pittsburg, KS (who is probably there by himself/herself at that time of day) is supposed to interpret thousands of prob of TOR, SIG TOR, SVR WIND, SIG SVR WIND, HAIL, SIG SVR HAIL at half-hour intervals for Pittsburg, Joplin, Chanute, Parsons, Coffeyville, Independence, Webb City, etc., etc., etc.?

In all fairness, the same could be said about all of the TDWR products, additional NEXRAD sites, DP products, and the like you (rightfully) advocate — all of which come (or will come) at much higher update frequencies than the PHI will.

Meteorologists have been drinking from the firehose for ages, including during severe weather. First, it was composite reflectivity, then it was NIDS, then multiple radars, then Level 2 moments, and so on. Yet, in spite of all this extra data, the death rates continue to go down. The only conclusion we can draw here is that the additional data isn't hindering the meteorologist — if anything, it is arguably helping in lowering the death rates.

So, simply providing additional data isn't going to, in and of itself, be a problem. Now, it's fair to ask whether the data will be valuable, and if it is, how the data can, should, and will be used in an operational sense. That includes whether and in what form this information is communicated to the public, an issue about which you and I share some concerns. That should not, in and of itself, prevent useful information from being shared with meteorologists, however. Taking your extreme event example, those PHI products (either mapped or somehow associated with radar-identified storm cells, e.g. the idea behind the "Baron Tornado Index") could help a frazzled forecaster determine where to focus their limited time and attention, answering the question, "which storm is most severe or most likely to cause impact to life and property?"
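
A sketch of that triage idea: attach each radar-identified cell's peak hazard probability and sort, so a lone forecaster sees at a glance which storm most needs attention. The cell IDs and numbers are hypothetical, and this is not the Baron Tornado Index itself, just the general ranking notion:

```python
# Hypothetical radar-identified cells with PHI probabilities attached.
cells = [
    {"id": "B7", "city": "Chanute", "p_tor": 0.45, "p_sig_wind": 0.30},
    {"id": "C2", "city": "Joplin",  "p_tor": 0.15, "p_sig_wind": 0.60},
    {"id": "A9", "city": "Parsons", "p_tor": 0.05, "p_sig_wind": 0.20},
]

def max_threat(cell):
    """A cell's headline number: its highest hazard probability."""
    return max(cell["p_tor"], cell["p_sig_wind"])

# Highest-threat storm first: the answer to "which storm do I look at now?"
for cell in sorted(cells, key=max_threat, reverse=True):
    print(f"{cell['id']} near {cell['city']}: peak probability {max_threat(cell):.2f}")
```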
 