"Too Much" Lead Time?

  • Thread starter: Mike Smith
In 1990, nobody could envision just switching to a velocity product and showing velocity signatures without spending time looking at it first, either, but it happens with regularity these days — especially in smaller markets with limited staffs.

Maybe your examples are better than mine, but I have spent plenty of time watching Indianapolis stations during sevwx, and in almost every case I've seen (except for one day of isolated supercells with clear couplets) they show it wrong. They seem to be drawn to any shade of red near a shade of green and call it a rotation, even 10-20 miles behind the storm in the trailing stratiform.

I'd hate to see them trying to interpret DP on the fly (or with advance glances) too. I don't think a majority of TV weathercasters are able to properly interpret velocity, and I think the number who can do DP will be significantly lower.
 
Rob,

I completely share your concern about some TV meteorologists today. But, if you were to come to Plains TV markets (e.g., DFW, OKC, ICT), you'd find it is a whole different ball game.

The night of the Greensburg tornado the ICT TV mets were not only showing velocities, they correctly identified a tornado debris ball on the fly. In Warnings, I tell the story that within 4-5 minutes of the tornado hitting Greensburg one of the local medical helicopter services was on her way to scramble the troops based solely on the TV coverage.

From where I sit, the data mission of the NWS is to make everything available in real time and then we can sort out how to use it.

Mike
 
I totally believe it's different there, and a WoF grid might not have the same level of value for the TV mets.

The mets for most of the rest of the country would probably LOVE the hand-holding of a WoF grid.
 
Maybe your examples are better than mine, but I have spent plenty of time watching Indianapolis stations during sevwx, and in almost every case I've seen (except for one day of isolated supercells with clear couplets) they show it wrong. They seem to be drawn to any shade of red near a shade of green and call it a rotation, even 10-20 miles behind the storm in the trailing stratiform.

Someone else's inability to use the tool should not prevent the tool from being available for those who are able. I will concede there may be a regional difference in that ability (e.g. folks in the Plains being better/more comfortable/more accurate with it than folks in a less-severe-weather-prone part of the country) but to prevent those with the initiative from ever accessing or using what could be life-saving data so that we don't risk misuse by the lowest common denominator seems a bit... not good.

I agree with Mike: More data, not less.
 
Someone else's inability to use the tool should not prevent the tool from being available for those who are able

You lost me now ;) DP data is already being sent out from the OUN site, I've not heard any news that the stream will be cut off at some point. It's there and more will be available in time. TDWR exists and more will be available in time.

My contention is that WoF can be fairly important to a good segment of the TV WX industry too.
 
The examples we get from Greg's group (see Weatherwise article cited above) are always of an isolated supercell.
The isolated supercell example was used to introduce the concept and make it understandable to the audience. You certainly don't expect us to show a complex event when describing the concept, do you? Nevertheless, I've already stated earlier in this thread that the concept works for any shape of hazard area, be it an isolated cell, a line, or any odd shape.

I just read that Weatherwise article. Interesting to see one of our old figures of the PHI concept (since revised) in the article, particularly since I don't recall being asked for it. But there are several portions of the article that I would like to address:

"Caption: This image is an example of the graphic warnings that NOAA's National Severe Storms Laboratory is developing for the National Weather Service."


Not entirely correct. NSSL is not "developing these warnings for the NWS." NSSL is testing (or has tested) the concept of probabilistic hazard information grids, which deliver high-resolution, highly detailed, more robust data, as a way to improve communication of hazards. It is still up to the NWS to 1) adopt the concept (as is, or more likely a refined version), and 2) determine how they are going to create new products from those grids. The R&D is still many years away from any of this.

"For example, all weather forecasts are, at least to some extent, probabilistic. This can lead to one kind of disconnect, Thomas says: “Hospital administrators and physicians must understand how the forecast relates to them and their work. If we say there is a 20 percent chance [of a particular event occurring], that's confusing. They look at it as ‘only 20 percent’ and not a one-in-five chance.” For example, if officials consider that there is a one in five chance that storm surge from a fierce hurricane will flood a coastal hospital's ground floor, while 100 mph-plus winds threaten windows on the upper floors, they might decide to undertake the dangerous and expensive evacuation of a hospital before the water and wind begin to arrive."

In all honesty, I find it somewhat concerning that a hospital administrator, who I would assume should be good with numbers, can't easily convert 20% to a one-in-five chance. But it does provide an example of a user who requires hazard forecast uncertainty information. See, those users do exist!

This also highlights an important facet of communicating uncertainty: it is all in how the message is framed. Starting from a digital database of grid values between 0 and 1, one could use technology to convert those numbers into value for the user, be it a recommended decision based on the user's cost-loss ratio, a recommended decision based on what a third party determines is best for them, a "threat level" (again, defined by someone), a number between 0 and 1, a number between 0 and 100 with a % sign attached, or a number filling in the words "a one in _____ chance." The private sector can play a large role by taking the digital probabilistic data from the NWS and tailoring it specifically for that hospital's needs.
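As a minimal sketch of that framing idea (the function names, costs, and thresholds below are all invented for illustration, not taken from any NWS or vendor system), one grid probability can be re-framed for a user and run through the classic cost-loss decision rule:

```python
# Hypothetical sketch: re-framing one raw probability for a specific user,
# then applying the textbook cost-loss rule (protect when p >= cost/loss).

def frame_probability(p):
    """Return several framings of the same probability p (0..1)."""
    return {
        "fraction": p,
        "percent": f"{round(p * 100)}%",
        "one_in_n": f"a one in {round(1 / p)} chance" if p > 0 else "no chance",
    }

def recommend_action(p, cost, loss):
    """Cost-loss rule: protecting pays off, on average, when p >= cost/loss."""
    return "protect" if p >= cost / loss else "do not protect"

# 20% reads as "a one in 5 chance" -- same number, different framing:
print(frame_probability(0.20)["one_in_n"])

# A hospital whose protective action costs 1 unit against a potential
# 10-unit loss (cost/loss = 0.1) would act at 20%:
print(recommend_action(0.20, cost=1, loss=10))
```

A third party could tune the cost and loss figures per client, so the same NWS grid value yields different recommendations for the hospital, the school, and the broadcaster.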

"Mike Smith, the Founder and CEO of WeatherData Services, Inc., in Wichita, Kansas, which is now an AccuWeather company, explains how his company provides specific tornado information that hospitals need: 'A National Weather Service tornado warning means there is a high probability of a tornado at some location within their warning area. However, hospitals have found that using NWS warnings creates too many false alarms. One of the most important benefits of our warning service for hospitals is to give them a null warning, to let them know that even though a NWS tornado warning is in effect in their area, the hospital itself will not be hit.'"

So in effect, you are telling your clients to disregard the NWS warnings because they do not contain enough specific information for that hospital. So here's your dilemma: the PHI concept is designed to improve and evolve the way hazard information is delivered from the NWS, so it can provide more site-specific information about hazard times of arrival/departure, uncertainty, threat type, etc. This is all about the NWS progressing their hazard services, not stagnating. Given this kind of information from the NWS, how do you see your customized services to your clients evolving?

Here is a real world extreme event: KS-MO-KY derecho of May 8, 2009...The meteorologist at KOAM TV (NBC) in Pittsburg, KS (who is probably there by himself/herself at that time of day) is supposed to interpret thousands of prob of TOR, SIG TOR, SVR WIND, SIG SVR WIND, HAIL, SIG SVR HAIL at half-hour intervals for Pittsburg, Joplin, Chanute, Parsons, Coffeyville, Independence, Webb City, etc., etc., etc.?
Yep - data grids are comprised of thousands, no, millions of data points, yet meteorologists interpret grids like that today: millions of sample volumes from each radar, each volume scan, each elevation scan, each parameter. How are those data interpreted now? Via decision support systems that compile the data into useful decision-assistance information. Fast-forward a few years and add to the data fire hose: dual-pol data, phased-array data, CASA data, etc. One way to help manage those data is to assimilate them into a system that can provide hazard guidance grids. But like all guidance, use is optional.
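As a hedged illustration of what one such decision-support step might look like (the grid values, city locations, and search radius below are all made up), a dense probability grid can be collapsed into a single number per city, so the on-duty met never has to stare at thousands of raw points:

```python
# Hypothetical sketch: collapsing a gridded hazard-probability field into
# per-city guidance. Not any real system's algorithm.

def city_guidance(grid, cities, radius=1):
    """For each city at (row, col), report the max probability within
    `radius` grid cells, clipped to the grid edges."""
    rows, cols = len(grid), len(grid[0])
    out = {}
    for name, (r, c) in cities.items():
        vals = [
            grid[i][j]
            for i in range(max(0, r - radius), min(rows, r + radius + 1))
            for j in range(max(0, c - radius), min(cols, c + radius + 1))
        ]
        out[name] = max(vals)
    return out

# Tiny invented 5x5 "tornado probability" grid (fractions):
grid = [
    [0.0, 0.0, 0.1, 0.0, 0.0],
    [0.0, 0.2, 0.4, 0.1, 0.0],
    [0.0, 0.1, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.2, 0.1, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0],
]
cities = {"Pittsburg": (2, 2), "Chanute": (4, 0)}
print(city_guidance(grid, cities))
```

The point is only that the reduction from "millions of data points" to "one number per place I care about" is a software problem, not something the broadcast met does by eye.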

I suspect Greg will jump in and say, they can be plotted on a map. They can! But, given the fact that there are supercells (with tornadoes) ahead of the derecho, the derecho itself, and more thunderstorms firing behind the derecho, that map will just be hash.
Like this map? (the same event) That's a hash of warning polygons, with very little real information attached to them.

[Attached image: map of warning polygons for the same event]
 

Given this kind of information from the NWS, how do you see your customized services to your clients evolving?

Yep - data grids are comprised of thousands, no, millions of data points, yet meteorologists interpret grids like that today:

Like this map? (the same event) That's a hash of warning polygons, with very little real information attached to them.


Hi Greg, here are the answers...

#1. We have a very good idea how our warning service is going to evolve but we don't discuss that publicly.

#2. and #3. The message in the polygons was "tornado warning, take cover!" Binary, easy to understand. Now, imagine supering probs of TOR, SIG TOR, SVR WIND, SIG SVR WIND, SVR HAIL, and SIG SVR HAIL and, as I said previously, you have hash.

That said, for someone who doesn't wish to go through all the (multiple) threads on this topic:

I do not object to the program as a research program. I do object to what I believe is the premature "selling" of this program years before it is ready for "prime time."

I do not want to see the NWS remain stagnant. I have supported and will continue to support improved science and services. I do object when NOAA/OU officials point to specific industries (hospitals keep coming up) who they indicate the program is designed for (even if it really isn't). Such statements confuse the marketplace.

The NWS, from where I sit, has far higher priority issues than WoF, such as the bandwidth problems and the CSI/FAR (as opposed to lead time) that we have previously discussed. I would like to see those issues addressed with greater urgency, which is why I don't show a lot of enthusiasm for WoF. Fix those issues and my enthusiasm for WoF goes up.

Those are my thoughts for those new to this discussion.

Mike
 
#2. and #3. The message in the polygons was "tornado warning, take cover!" Binary, easy to understand. Now, imagine supering probs of TOR, SIG TOR, SVR WIND, SIG SVR WIND, SVR HAIL, and SIG SVR HAIL and, as I said previously, you have hash.
And (again), users don't have to see the numbers, and you can extract binary "take cover" warnings from the grids. Or binary "hike back to the car" warnings, or binary "drive out of the path" warnings, etc.
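A minimal sketch of that binary extraction (the thresholds and the mapping of actions to user classes are invented for illustration) shows that one probability can fan out into several yes/no messages:

```python
# Hypothetical sketch: deriving binary action messages from a single
# probability value. Thresholds are made up; a real system would tune
# them per hazard and per user.

THRESHOLDS = {
    "take cover": 0.30,             # public in a sturdy building
    "drive out of the path": 0.15,  # mobile users with time to move
    "hike back to the car": 0.05,   # exposed outdoor users
}

def binary_messages(prob):
    """Return every action whose threshold the probability meets,
    ordered from lowest threshold (most risk-averse user) up."""
    return [msg for msg, t in sorted(THRESHOLDS.items(), key=lambda kv: kv[1])
            if prob >= t]

print(binary_messages(0.20))
```

The end user only ever sees the binary message for their own threshold; the number itself stays under the hood.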

The NWS, from where I sit, has far higher priority issues than WoF, such as the bandwidth problems and the CSI/FAR (as opposed to lead time) that we have previously discussed. I would like to see those issues addressed with greater urgency, which is why I don't show a lot of enthusiasm for WoF. Fix those issues and my enthusiasm for WoF goes up.
And (again), FAR, CSI, POD, LT all go hand-in-hand. Increase lead time now by extending warning durations, but what happens? FAR goes up because of the uncertainty at those forecast periods. Improve very short-term forecasts with the aid of convection-resolving numerical models, decrease uncertainty at the same forecast periods, deliver the same warning duration, and FAR is lower. It all comes down to the "threshold" at which a forecaster makes a deterministic decision, and that threshold is based in uncertainty (or, properly said, certainty). If NWP allows us to decrease uncertainty in the forecast, then the same uncertainty (probability) threshold will be crossed farther out in time in a future forecast, hence increasing lead time.
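The threshold argument can be sketched numerically. The probability curves below are invented and only the shape matters: better NWP raises confidence earlier, so the same warn/no-warn threshold is crossed sooner, and lead time grows without loosening the threshold (which is what would drive FAR up).

```python
# Hypothetical sketch of "same threshold, earlier crossing = more lead time".

THRESHOLD = 0.5  # illustrative warn/no-warn probability threshold

# Forecast probability of the hazard at a point, at each number of
# minutes before arrival (farthest out first). Both curves are invented.
minutes_out = [60, 50, 40, 30, 20, 10]
extrap = [0.1, 0.2, 0.3, 0.5, 0.7, 0.9]  # detection + extrapolation only
nwp    = [0.3, 0.5, 0.6, 0.7, 0.8, 0.9]  # confidence arrives earlier

def lead_time(probs):
    """Warn at the earliest time the threshold is met; that time IS the lead time."""
    for t, p in zip(minutes_out, probs):
        if p >= THRESHOLD:
            return t
    return 0

print(lead_time(extrap))  # threshold first met 30 minutes out
print(lead_time(nwp))     # same threshold met 50 minutes out
```

Lowering THRESHOLD would also stretch lead time, but at the cost of warning on low-confidence forecasts, which is exactly the FAR trade-off described above.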

And, I'll state one more time, Warn on Forecast is, in my opinion (see disclaimer), a misnomer. Warnings are forecasts, just for short time periods. The warning paradigm will move from one based on detection and extrapolation to one based on evolution aided by NWP. The correct term should be Warn-Using-NWP.

I think one of the real challenges with Warn-Using-NWP is going to be the "confirmation factor" with warnings. Social science shows that a lot of folks seek confirmation when a warning is issued, either by looking outside, calling someone, turning on the TV, etc., or, if savvy, looking at radar trends online. What will the future be like when an NWP-aided warning is issued for a hazard when that hazard (or even the storm) does not yet exist?
 
And (again), FAR, CSI, POD, LT all go hand-in-hand. Increase lead time now by extending warning durations, but what happens? FAR goes up because of the uncertainty at those forecast periods.

Suppose someone discovers a previously unnoticed radar signature that indicates a tornado is going to lift within the next 5 minutes with near 100% certainty. That means, if the warning expires in six minutes, a downstream warning does not need to be issued. No downstream warning in this case = no false alarm. POD and FAR can be improved within the current system. Improvements in either may or may not affect lead time.
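The standard contingency-table metrics make this concrete. The counts below are invented, but they show how skipping one unneeded downstream warning lowers FAR and raises CSI while leaving POD (and lead time) untouched:

```python
# Standard warning verification metrics from a 2x2 contingency table.
# The event counts are made up for illustration.

def verify(hits, misses, false_alarms):
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    return pod, far, csi

# Before: the downstream warning is issued anyway and busts.
print(verify(hits=70, misses=30, false_alarms=60))
# After: the "tornado is about to lift" signature lets us skip it.
print(verify(hits=70, misses=30, false_alarms=59))
```

FAR drops and CSI rises with POD unchanged, which is the sense in which these scores can improve within the current warning system, independent of lead time.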
 
Suppose someone discovers a previously unnoticed radar signature that indicates a tornado is going to lift within the next 5 minutes with near 100% certainty. That means, if the warning expires in six minutes, a downstream warning does not need to be issued. No downstream warning in this case = no false alarm. POD and FAR can be improved within the current system. Improvements in either may or may not affect lead time.
While I understand your point, this scenario is highly unlikely given the limitations of radar data, especially at far ranges. The best we can hope for is some jump in certainty of tornado demise based on newly discovered signatures. Or hope for S-band PARs every 20 km!

Forecast certainty does not always increase as a forecast period decreases, but for the most part it will.
 
...Here is a real world extreme event: KS-MO-KY derecho of May 8, 2009 which, by itself, caused more than 400 SVR reports, more than a dozen TORs, a number of deaths, multiple broadcast towers toppled, and winds at several places actually measured above 100 mph. It occurred before dawn in Kansas. As you view the photo below note that thunderstorms are firing behind the derecho along the cold front:
[Attached image: ICT radar reflectivity at 1032Z on May 8, 2009, showing the derecho]


The meteorologist at KOAM TV (NBC) in Pittsburg, KS (who is probably there by himself/herself at that time of day) is supposed to interpret thousands of prob of TOR, SIG TOR, SVR WIND, SIG SVR WIND, HAIL, SIG SVR HAIL at half-hour intervals for Pittsburg, Joplin, Chanute, Parsons, Coffeyville, Independence, Webb City, etc., etc., etc.?

I suspect Greg will jump in and say, they can be plotted on a map. They can! But, given the fact that there are supercells (with tornadoes) ahead of the derecho, the derecho itself, and more thunderstorms firing behind the derecho, that map will just be hash....

I do not understand this thought process at all. I realize the above market is something like market 149, and unless the met has been at the station for a number of years, most who pass through that market do exactly that: pass through. However, unless the skills needed for that position have been "dumbed down," I can't see questioning the met's education and experience, and that's exactly what the above statement does. Even if the met on duty is the weekend or fill-in person, they should absolutely be able to understand and read those probabilities. That is required by the job title.


...Given that Chanute, KS (CNU) is in the path of two hook echoes and later received 80+ mph winds, isn't the smarter message, take cover!?

Maybe I am not understanding the above quote, but are you asking for a blanket warning? Something you're publicly disappointed in? That really does not make much sense to me. Can you explain?
 
I almost hate to revive this thread but I found a peer-reviewed paper on this topic and it does appear that more than 15 minutes is "too much" lead time. Authors are Simmons and Sutter, Weather and Forecasting, April 2008, pp. 246-258.
 
While the paper does say that a 1-minute increase in lead time for lead times past 15 minutes leads to a 0.6% increase in fatalities, that number came with a whole boatload of caveats. Some caveats include:
-The 95% confidence interval on that estimate included 0.
-Increased lead time decreased the number of injuries (just not fatalities).
-When 5 of the most deadly and longest-lead-time tornadoes were removed, the sign on the above tendency flipped to negative (implying that longer lead times might decrease fatalities after all).
-Other factors, such as how people responded to the warning, were not accounted for in the study and could impact the conclusion. As an example,
Simmons and Sutter (2008) said:
These regressions cannot be used to evaluate the effectiveness of warnings, since omitting observations to obtain results is hardly sound social science. But these results demonstrate that a conclusion that lead times over 15 or 17 min increase fatalities relative to no warning is unwarranted. Of the five tornadoes, Aguirre (1988) claims that only two residents of Saragosa, Texas, received the warning for the Reeves County tornado despite the 22-min lead time, and two others were part of the February 1998 Florida outbreak, which occurred at night in areas without tornado sirens. Thus, warning dissemination might well have failed for these storms.
-The definition of lead time in the study is ambiguous for long-track tornadoes, because someone living near the point of touchdown has less lead time than someone living much farther downstream but still inside the warning box (they actually state this themselves).
-There is a much higher FAR for tornadoes with lead times of > 15 minutes, so people may discount warnings more when no tornado occurs within 15 minutes of a warning being issued (even if one ends up occurring more than 15 minutes later) than they do for shorter-lead-time tornadoes.

The results of the study did seem to indicate a point of diminishing returns for both fatalities and injuries around the 15 min lead-time value such that, even if you get decreased fatalities for longer lead times, it might not be as much in the 15+ minute range as in the 6-10 or 11-15 minute range. Thus perhaps it isn't worth the investment.
 