"Too Much" Lead Time?

It comes back to the general public.

Even in the advanced tech stage we are in, not everyone is going to receive advance warning, and not everyone who does is going to heed it.

St. Louis, being a large metro area, had ample warning via NWS & local media outlets. It was also a holiday (NYE) morning to early afternoon, when highways were likely less congested and more people were likely home within earshot of a siren system. In addition, the storms had been producing damage since the early morning hours in Arkansas and SW Missouri, adding to the lead time.

Go back over a decade to when Oklahoma City was hit. Lead time on that storm was at least 1 hour, with constant live coverage from TV choppers & ground crews as well as NWS warnings. While today far more people have smartphones, the time of day was critical, and even so over 40 people were still killed in the storms that hit late that Monday afternoon.

It really depends on how an individual will react to any warning. I've heard many times, "Unless I see it on TV I'm not worried," or "I never heard a siren, but I knew to get to my 'fraidy hole' when the sky turned green."

A lot of people never get an NWS warning relayed through local media because of the influx of satellite and cable. If your provider has no channel-interrupt system to relay a warning, you'll likely never get it unless you are tuned to a local channel.

Few have wx radios and even those with smartphones may not heed the warning in time or pay attention to it.

A disturbing trend is radio stations not interrupting programming for NWS warnings if the warning does not include their metro ADI. Counties adjacent to the station's county are warned but no warnings are relayed.

So take your pick of notification systems... cell/smartphone, radio, TV, computer, carrier pigeon. If (1) the warning message is never passed on, how many lives are at risk? And (2) if the warning is active 15, 30, or 45 minutes in advance, yet 40% of the population takes action, another 50% ignores it, and 10% never receive it, what of technology then? Did the advance warning work?

It comes down to the individual taking responsibility to be weather aware and to be ready. You can never get 100% commitment from the general public during any event, no matter how much lead time... A blizzard warning is issued days in advance; half the public gets ready, the other half hops in the car to go to Grandma's for Christmas dinner with only a 1/4 tank of gas and flip-flops on. You tell me how far the message got.

Same goes with tornado warnings, no matter the time of year, as we have seen recently, and no matter the time of day, as in the OKC event. The heads-up can be minutes, hours, or days in advance, and yet the tree still fell in the forest; how many heard it?

I commend all those who have gotten the warning lead time to where it is today. A perfect system? No. Can there ever be one? Not likely, because the humans on the other end are just that, and prone to make mistakes and risk their lives even though they thought, "that looks like a tornady over there and it sounds like a freight train a-comin'."
 
A perfect system? No. Can there ever be one? Not likely because the humans on the other end are just that and prone to make mistakes and risk their lives even though they thought, "that looks like a tornady over there and it sounds like a freight train a-comin'."

Nobody is shooting for "perfect" (well, nobody expects to get there). Does that mean we shouldn't bother improving? Looking at new ways of disseminating warnings? Non-conventional broadcast means? It seems like you are saying yes. I disagree.
 
Greg, how do you envision such a concept operating in practice? Are the various warning areas and probabilities delineated by a computer algorithm or by human analysis? What about refreshing them - at fixed intervals, or at variable intervals based on the actual development of the storm? How about the overlap of two or more storm tracks - simply an additive probability? I can think of 1,001 questions on this topic; it just seems the concept would need to be thoroughly vetted to demonstrate superiority over the current warning system.
This is easier to describe with graphics, but I'll try words for now.

What we experimented with in the Hazardous Weather Testbed (HWT) in 2007 and 2008 as a first cut were probabilistic threat areas that moved along with the threats over time in one-minute steps. The initial threat areas were defined by the meteorologist, and storm-motion (speed and direction) uncertainties were also provided. The accumulated threat areas over time automatically defined the digital gridded swaths at 1 km x 1 km x 1 min resolution. The motion uncertainties make the threat areas grow and the probabilities drop with time, resulting in fanned swaths that look similar to hurricane strike-probability swaths. The swath length is based on a chosen warning duration, but because the threat areas move along with the hazards, each location downstream gets equitable lead time. Probabilities were also assigned by the meteorologist at the current time and at a future time (to account for storm strengthening or weakening), and these probabilities are multiplied by the motion-uncertainty probabilities (values 0 to 1). The meteorologist monitored the moving threat areas to make sure they remained aligned with and overlapping the storms as time marched on. About every 15 minutes, the meteorologist would adjust the threat area (an interactive "shape object" on the computer display) to account for any changes in storm motion or in the shape/size of the hazard area, and the computer automatically recalculated the moving probabilistic swaths.
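(If it helps to see the idea in code, here is a minimal Monte Carlo sketch of a "threat-in-motion" swath. This is not the HWT implementation: the threat area is idealized as a circle, motion uncertainty is drawn from normal distributions, angles follow the math convention rather than a meteorological "from" direction, and every name here - prob_swath, hazard_prob, and so on - is invented for illustration.)

[CODE]
import numpy as np

def prob_swath(center, radius_km, speed_kts, direction_deg,
               speed_sd_kts, dir_sd_deg, hazard_prob,
               duration_min, grid_km=1.0, domain_km=100.0,
               n_members=500, seed=0):
    """Monte Carlo sketch of a 'threat-in-motion' probability swath.

    A circular threat area is advected across a 1-km grid in 1-minute
    steps for the chosen warning duration. Spread in speed/direction
    makes the swath fan out downstream, so probabilities fall with
    distance, much like a hurricane strike-probability plot.
    """
    rng = np.random.default_rng(seed)
    n = int(domain_km / grid_km)
    xs = (np.arange(n) + 0.5) * grid_km          # grid-cell centers (km)
    gx, gy = np.meshgrid(xs, xs)
    hit_count = np.zeros((n, n))

    for _ in range(n_members):
        # one possible storm motion, drawn from the stated uncertainties
        spd_km_min = rng.normal(speed_kts, speed_sd_kts) * 1.852 / 60.0
        ang = np.deg2rad(rng.normal(direction_deg, dir_sd_deg))  # 0 deg = toward +x
        cx, cy = center
        covered = np.zeros((n, n), dtype=bool)
        for _minute in range(duration_min):       # 1-minute steps
            covered |= (gx - cx) ** 2 + (gy - cy) ** 2 <= radius_km ** 2
            cx += spd_km_min * np.cos(ang)
            cy += spd_km_min * np.sin(ang)
        hit_count += covered

    # motion-uncertainty probability times the forecaster's hazard probability
    return hazard_prob * hit_count / n_members

# A storm 20 km into the domain, moving ~30 kt with modest motion uncertainty
swath = prob_swath(center=(20.0, 50.0), radius_km=5.0,
                   speed_kts=30.0, direction_deg=0.0,
                   speed_sd_kts=5.0, dir_sd_deg=15.0,
                   hazard_prob=0.6, duration_min=45)
print(swath.shape, swath.max())
[/CODE]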

If there are multiple threat areas in close proximity such that the swaths would overlap downstream, the resulting value at each grid point is the maximum across the overlapping swaths. The advantage of our system is that a user at a point that might be impacted by two or more threats can access the digital grid, get a "future trend" plot of threat probability, and determine the times of arrival and departure of each threat. Today, multiple storms are often covered by a single polygon, and a user may have no idea that once the first threat is over, another threat might be looming a few minutes away.
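(Continuing the sketch with invented names: assuming each storm's swath is available as a (minutes, y, x) probability cube rather than the time-accumulated 2-D field above, the grid-point-maximum rule and the "future trend" lookup might look like this.)

[CODE]
import numpy as np

def combine_swaths(*cubes):
    """Where swaths overlap, keep the maximum probability at each grid
    point and time step (each cube is shaped (minutes, y, x))."""
    return np.maximum.reduce(cubes)

def future_trend(cube, iy, ix, threshold=0.3):
    """Probability time series at one grid point, plus the (arrival,
    departure) minutes of each distinct threat above the threshold."""
    series = cube[:, iy, ix]
    above = np.concatenate(([False], series >= threshold, [False]))
    edges = np.flatnonzero(np.diff(above.astype(int)))
    intervals = list(zip(edges[::2], edges[1::2] - 1))
    return series, intervals

# usage sketch: two synthetic cubes standing in for two nearby storms
t = np.arange(60)[:, None, None]
storm_a = np.clip(0.6 - 0.04 * np.abs(t - 15), 0, None) * np.ones((60, 50, 50))
storm_b = np.clip(0.5 - 0.04 * np.abs(t - 40), 0, None) * np.ones((60, 50, 50))
combined = combine_swaths(storm_a, storm_b)
series, when = future_trend(combined, iy=25, ix=25)
print(when)   # [(8, 22), (35, 45)] -> two separate arrival/departure windows
[/CODE]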

This concept works for any shape of hazard area, be it an isolated cell, a line, or any odd shape. In the future of NWP-assisted warnings ("Warn On Forecast," if you will), the future threat state (position, shape, and intensity) might be provided directly rather than calculated by advecting the current threat area with a motion vector. The resulting swath would be based on an accumulated morphing (or "tween," for Flash developers) of the current and future states.
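(The "tween" can be as simple as a linear interpolation of position, size, and probability between the two states. A toy sketch; the field names here are assumptions, not the actual data model.)

[CODE]
def tween_state(current, future, frac):
    """Linearly interpolate between the current and forecast threat states.
    frac = 0 gives the current state, frac = 1 the forecast state."""
    lerp = lambda a, b: a + frac * (b - a)
    return {
        "center_km": tuple(lerp(a, b) for a, b in zip(current["center_km"], future["center_km"])),
        "radius_km": lerp(current["radius_km"], future["radius_km"]),
        "prob":      lerp(current["prob"], future["prob"]),
    }

now   = {"center_km": (20.0, 50.0), "radius_km": 5.0, "prob": 0.6}
in_45 = {"center_km": (60.0, 55.0), "radius_km": 8.0, "prob": 0.4}   # forecast state
print(tween_state(now, in_45, frac=15 / 45))   # interpolated state 15 minutes in
[/CODE]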

Yes, we all agree that the idea needs to be thoroughly vetted. That is one reason why we are strongly aligned with WAS*IS and SSWIM. Bear in mind that the concept we tested in 2007 and 2008 is not set in stone and will require some adjustment. We got tons of feedback from the visiting NWS meteorologists who participated in the HWT testing.
 
We got tons of feedback from the visiting NWS meteorologists who participated in the HWT testing.

Greg,

I invited two meteorologists from the TUL NWS office to my luncheon speech today. One brought up, and was quite enthusiastic about, probabilistic warnings. I asked him the same question I have asked you (and others): What is the problem you are trying to solve?

Like in our early exchanges, "hospitals" were brought up. As you probably know, the current issue of Weatherwise has an article by Jack Williams ( http://www.weatherwise.org/Archives/Back Issues/2010/November-December 2010/plan-seasons-full.html ) that discusses the special requirements of that industry. The article makes clear that hospitals are being successfully served by the private-sector weather industry.

Once we eliminated special industries as justification for WoF, it boiled down to "emergency managers might use it." In a world of finite resources, this hardly seems like justification. If you email me, I'll give you the name of the met so you can speak with him about our conversation.

Yes, I concede the WoF program is quite popular with NWS meteorologists. But I want to stress that I believe there is a major disconnect between the needs/desires of the broad user community and NWS meteorologists' intrinsic desire to see this program go forward. Again today, without any prompting, I heard from the audience at the speech that false alarms are a problem.

While I realize that this is probably an exercise in futility, I urge the Norman research community to consider whether it is accuracy or lead time where the bulk of the development work should be done. I vote for the former.

Thank you for considering my point of view.

Mike
 
What is the problem you are trying to solve?
1) Conveying uncertainty in the forecasts
2) Extending lead times
3) Providing a seamless flow of hazard information across all forecast time scales
4) Providing equitable lead times to all users
5) Providing point-specific warning information such as time of arrival and departure of individual hazards
6) Providing a more robust way to verify warnings
 
My thoughts:

1) Conveying uncertainty in the forecasts
a) Creating a test bed in Norman, OK, will not create a system to calibrate "certainty" for severe storms in California, Arizona, during hurricane landfalls, etc.
b) In 30 years of running WeatherData, and in an additional ten years as a television meteorologist, I have never heard any clamor for "certainty" information. As I have stated before, we have tried, numerous times, to interest our clients in probabilistic information, and they just want "yes or no."


2) Extending lead times
Set aside the concerns about "too much" lead time in STL. Since we don't understand tornadogenesis or tornado decay, this is an impossible goal at the present state of the art, especially with less typical tornado situations such as hurricane landfalls. We need to do the research first, then work on lead times.

3) Providing a seamless flow of hazard information across all forecast time scales
Where is the demand for this? I would like to see smaller watches with, say, a maximum 4-hour duration. That would be the "yellow" light, with the warning as the red light. There is a real danger of information overload with these new products.

4) Providing equitable lead times to all users
See #2.

5) Providing point-specific warning information such as time of arrival and departure of individual hazards
The education and technology (i.e., replacing siren networks) needed to make this work will take, at minimum, 25 years. This was part of the problem in STL because of the "all or nothing" nature of the siren network. By working to improve the accuracy of warnings within the current system, we will save more lives and get more "bang for the buck."

6) Providing a more robust way to verify warnings
I'd like to learn more about this one.

I note that decreasing the number of false alarms is not listed here. This is what I mean by the disconnect. The public, I believe correctly, senses too many warnings are issued. Yet, improved FAR is not even listed among the goals.

Mike
 
My thoughts:

1) Conveying uncertainty in the forecasts
b) In 30 years of running WeatherData, and in an additional ten years as a television meteorologist, I have never heard any clamor for "certainty" information. As I have stated before, we have tried, numerous times, to interest our clients in probabilistic information, and they just want "yes or no."

That's because you never provided that information to them, or provided it but called it something different. I can think of many times I've said "This storm is bad right now, but there's a chance [greater than other storms that day] of a tornado developing soon." How can you say that statement is NOT of value or that people don't want that info?
 
My thoughts:
I note that decreasing the number of false alarms is not listed here. This is what I mean by the disconnect. The public, I believe correctly, senses too many warnings are issued. Yet, improved FAR is not even listed among the goals.

I recall having been told in one of my higher-level analysis/forecasting courses that a study was done some time ago, the result of which basically stated that people thought it was unacceptable to be killed by a tornado for which there was no warning. I think of this anecdote when I think about NWS tornado warning policies. It seems to me (and obviously, NWS personnel on this forum, please correct me if I am wrong) that the NWS is more concerned with POD than FAR, which could very well be why improving FAR is not mentioned. I do see how that is part of the problem, but one needs to understand that, in the real world, improving one generally worsens the other. Thus, somewhere, someone has to find a "magic number" to balance POD and FAR (at least until better technology and understanding of tornadic processes allow both skill measures to improve together).
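(For concreteness, here is the standard contingency-table arithmetic behind that trade-off. The counts are made up purely to show how a more aggressive warning threshold raises POD and FAR together, while a more conservative one lowers both.)

[CODE]
def pod_far(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio from warning counts."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Hypothetical office, same storms, two warning philosophies:
print(pod_far(hits=45, misses=5, false_alarms=135))   # aggressive:   POD 0.90, FAR 0.75
print(pod_far(hits=30, misses=20, false_alarms=45))   # conservative: POD 0.60, FAR 0.60
[/CODE]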
 
That being said, had this tornado outbreak happened in the 1950s or 1960s, there would have been very little warning and a lot of casualties. It probably would have killed 50-100 people back then. Even though there were no violent tornadoes (EF4 or EF5) associated with this outbreak, it still produced 11 strong tornadoes, 7 of them EF3s. An outbreak of this magnitude is unheard of for the last part of December, and lead time from NWS offices was excellent.
 
I wanted to clarify what I mentioned much, much earlier in this thread. My thought on the severe weather day timeline, or the mentioned "Severe Weather Outlook," is that it needs to be oriented toward the public in a different way, for example through TV meteorologists. I can count on my hands the number of non-meteorology people who know about and read the Severe Weather Outlook.

Secondly, I must mention that the public's perception of too many warnings may come from the size difference between one person's house and the size of the warning. It's something I refer to as the Personal False Alarm Ratio (PFAR). Take this example: I live in Norman, and I've been in 5 tornado warnings this year. Let's say three of those warnings were for actual tornadoes, but none of the tornadoes happened near my house. Since I never saw or experienced a tornado, from my perspective all five warnings were false alarms, even though my local weather service office counts its official False Alarm Ratio for those storms as only 2 of 5. The person who doesn't experience or personalize the threat may become conditioned to the warnings because of their PFAR. This happens all the time in the Mid-Atlantic with snow (though not so much since last winter). The big question is: how do you verify something like weather when the observation of it is always from a particular perspective?
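(The arithmetic behind that example fits in a few lines; the function name and the idea of verifying only at one's own location are my paraphrase of the post, not an official metric.)

[CODE]
def false_alarm_ratio(warnings_issued, warnings_verified):
    """Fraction of warnings that did not verify (from whatever vantage point)."""
    return (warnings_issued - warnings_verified) / warnings_issued

# Same five Norman warnings, two vantage points:
office_far   = false_alarm_ratio(5, 3)   # 3 warnings verified somewhere in the polygon -> 0.4
personal_far = false_alarm_ratio(5, 0)   # nothing ever reached my house -> 1.0
print(office_far, personal_far)
[/CODE]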

And finally, I don't know a whole lot about Warn-on-Forecast. I understand the concept but am concerned about it from the user-end point of view. I guess I need to speak about it more with the WAS*IS and SSWIM folks. Until I have a better understanding of it, I think the biggest contribution to reducing FAR will most likely come from the Phased Array Radar (PAR). Having looked at some of the data from it and seen the detail it shows of the evolution of a supercell, and especially of storm-scale motions, I think the pattern-recognition aspect and the increases in temporal and spatial resolution will significantly help decrease the average FAR. Granted, I may be reading more into the PAR's abilities than I should, and I am working from one set of data. Here's an analogy for the benefit of the PAR's increased temporal resolution: we collect unified, global upper-air data twice a day. When we plot these data, the artifact of the low temporal resolution implies that features such as jet streaks "move" when in fact they propagate. If we took more unified upper-air observations throughout the day, we would most likely see small but significant differences in how the atmosphere evolves. The same benefits could be seen in a change from the WSR-88D to the PAR.

I will add that I also believe tools like Dual-Pol will contribute to a lower FAR, mainly because I've seen the strength of radar analysis techniques (the Lemon technique, identifying three-body scatter spikes) in identifying storm severity. I believe there is potential for similar techniques to be developed for ZDR, RhoHV, etc. products.
 
I understand the concept, but am concerned about it from the user-end point of view.

Remember first and foremost that if the "end user" wants the same exact watch/warning format he gets today, that won't change. WoF adds to it for those who want more.

I will add that I also believe that tools like Dual-Pol will also contribute to lower FAR, mainly because I've seen the strength of radar analysis techniques

Honestly, I can't see that impacting the FAR one bit. It might (and should) help POD, but I don't envision a case where a met is going to NOT warn on a particular storm because of something extra he sees in DP.
 
Showing my age: I have used radars that rotated continuously (3 rpm = a scan every 20 seconds) and didn't do volume scans. Phased array is "back to the future" in that regard. Yes, it can do 3-D, but we did more or less the same thing with the RHI scope; we just did it manually. So, while we are all in favor of phased array for its greater temporal resolution, I can assure you it is not a magic bullet. I'll explain in more detail if anyone wants.

I can think of many times I've said "This storm is bad right now, but there's a chance [greater than other storms that day] of a tornado developing soon." How can you say that statement is NOT of value or that people don't want that info?

Rob, your comment is right on the money! This is done by the "end" meteorologist (private sector, broadcast, etc.) who is in direct communication with the end user.* But I do not believe we have the scientific skill to put accurate and reliable numbers on that contingency in all situations and all geographic areas. Even if we did, given NWS communication bandwidth limitations (which are a real issue at the moment!), it is difficult to see how adding these products over, say, additional TDWR data (which cannot be added at present due to bandwidth problems) would be of greater benefit.

This is what I mean by "finite resources." It isn't just money and people; there are lots of limitations at the moment with the upcoming Dual-Pol, TDWR, finer-scale models (which create more data points), etc. Where does the NWS get the most "bang for the buck"? I just don't see probabilistic warnings as the highest "bang for the buck" at present or in the near future.

* And before someone accuses me of wanting to limit the NWS: it is fine with me if someone says this on NWR.

Mike
 
My thoughts:

1) Conveying uncertainty in the forecasts
a)Creating a test bed in Norman, OK will not create a system to calibrate "certainty" for severe storms in California, Arizona, during hurricane landfalls, etc.
You aren't the first to assume that, because the HWT is physically located in Norman, OK, we only look at Oklahoma data sets. We can localize our national systems to any WFO in the CONUS, and we have done experimental warning exercises using live data from many locations around the country.

Similarly, your company doesn't provide weather information only for Wichita clients!

b) I have yet, in 30 years of running WeatherData, and in an additional ten years as a television meteorologist, ever heard any clamor for "certainty" information. As I have stated before, we have, numerous times, tried to interest our clients in probabilistic information and they just want "yes or no?"
Hence my "preemptive rebuttal" from a few days ago. Your experience simply does not match the experience of others. There are many weather-sensitive industries that use probabilities in their decision models, but perhaps you've not encountered any in your customer base. And some of the EMs we've interacted with, once exposed to the concepts, understand the benefits. Nevertheless, I won't debate you any more about this, since you will never agree.

2) Extending lead times
Set aside the concerns about "too much" lead time in STL. Since we don't understand tornadogenesis or tornado decay, this is an impossible goal at the present state of the art, especially with less typical tornado situations such as hurricane landfalls. We need to do the research first, then work on lead times.
What do you mean by "do the research first"? The research is ongoing and continuing. Researchers are improving forecast certainty at longer forecast periods. But the result isn't as simple as "increasing lead times," regardless of who has said that to you!

The big question to ask: at what point does a forecaster issue a warning, and what warning duration do they choose? This is a deterministic decision based on the forecaster's level of uncertainty. They mentally choose some level and turn the key at the forecaster's threshold, not the user's.

I'm a user. How do I know what that uncertainty level is for the warnings I receive? Does it vary from forecaster to forecaster? [Yes!] Will I ever have any information about storms that are just below that uncertainty threshold, either occurring now or downstream from a present warning? Only if I'm savvy enough to search for it online, and it isn't always available for every storm all the time. Do I have a technological device that can provide me that information? Yes. Is there a digital data stream or service available to deliver that information to my technology? No!

3) Providing a seamless flow of hazard information across all forecast time scales
Where is the demand for this? I would like to see smaller watches with, say, a maximum 4-hour duration. That would be the "yellow" light, with the warning as the red light. There is a real danger of information overload with these new products.
I have spoken to users at various workshops over the past year who would prefer seamless information about weather hazards in space and time to just two deterministic watch and warning products. For example, many folks would like information between the watch and warning scales. I'm a user... A watch gets issued for the next 8 hours. Should I expect the severe weather during the entire 8-hour period? The first part of the period? The middle part? The latter part? What are the chances early on versus later in the watch for my specific location? What digital products offer this to me today? None.

4) Providing equitable lead times to all users
See #2.
Your response does not address this. Think "threats-in-motion", the moving polygons.

5) Providing point-specific warning information such as time of arrival and departure of individual hazards
The education and technology (i.e., replacing siren networks) needed to make this work will take, at minimum, 25 years. This was part of the problem in STL because of the "all or nothing" nature of the siren network. By working to improve the accuracy of warnings within the current system, we will save more lives and get more "bang for the buck."
The technology to do this is here today. But any legacy delivery system can only deliver data in the legacy format, and we've already discussed ad nauseam how the higher-resolution data can be simplified for those legacy systems. And think of the opportunities for private industry to capitalize on and develop technologies delivering this new digital hazard content.

6) Providing a more robust way to verify warnings
I'd like to learn more about this one.
Soon. The research is still under development.

I note that decreasing the number of false alarms is not listed here. This is what I mean by the disconnect. The public, I believe correctly, senses too many warnings are issued. Yet, improved FAR is not even listed among the goals.
It is related to all of this. False alarms arise because a meteorologist must turn probabilistic uncertainty into a deterministic forecast/warning. A portion of those warnings will not verify and thus become false alarms. If the decision point is drawn at lower certainty, the FAR is higher but so is the POD; if it is drawn at higher certainty, the FAR is lower but so is the POD.

If we were able to effectively communicate uncertainty in the warnings, and still get people to take cover even if our certainty is not 100%, then FAR might be less of an issue. The users would understand that a less certain warning will have a higher FAR and a more certain warning will have a lower FAR (assuming decent calibration and good reliability).
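(Calibration and reliability in that sense can be checked with a simple binning exercise. The numbers below are invented and the function is only a sketch, not an NWS verification tool.)

[CODE]
import numpy as np

def reliability_table(forecast_probs, observed, bins=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """For each probability bin, compare the mean forecast probability with the
    observed frequency: well-calibrated probabilistic warnings have the two
    roughly equal in every bin."""
    forecast_probs = np.asarray(forecast_probs, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (forecast_probs >= lo) & (forecast_probs < hi)
        if mask.any():
            rows.append((lo, hi, forecast_probs[mask].mean(), observed[mask].mean(), int(mask.sum())))
    return rows   # (bin_lo, bin_hi, mean_forecast, observed_freq, count)

# made-up example: 8 probabilistic warnings and whether each verified
probs    = [0.9, 0.7, 0.7, 0.5, 0.3, 0.3, 0.1, 0.1]
verified = [1,   1,   0,   1,   0,   1,   0,   0  ]
for row in reliability_table(probs, verified):
    print(row)
[/CODE]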

BTW, communication of uncertainty in forecasts is a high priority research topic for the weather-allied social scientists today, so don't assume this can "never be done".

Secondly, I must mention that it's possible that the public's perception of too many warnings issued may be because of the size difference between one person's house and the size of the warning.

I'm addressing that very issue in my warning verification research! One of the reasons the "public FAR" doesn't match the official NWS stats is that, in the present verification system, it takes only one storm report to verify a warning polygon of any size, shape, or duration.
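(A toy contrast between the one-report-verifies-the-polygon rule and a gridded alternative; the array sizes, names, and numbers are invented for illustration.)

[CODE]
import numpy as np

def polygon_verified(reports_in_polygon):
    """Current practice: a single report anywhere inside the polygon verifies it."""
    return reports_in_polygon > 0

def gridded_hit_fraction(warned_cells, hit_cells):
    """One possible alternative: fraction of warned 1-km cells actually affected."""
    warned = warned_cells.astype(bool)
    return (warned & hit_cells.astype(bool)).sum() / warned.sum()

warned = np.zeros((40, 40), dtype=bool); warned[10:30, 5:35] = True   # large polygon
hit    = np.zeros((40, 40), dtype=bool); hit[18:20, 12:20] = True     # narrow tornado path
print(polygon_verified(reports_in_polygon=1))       # True: the whole polygon "verifies"
print(round(gridded_hit_fraction(warned, hit), 3))  # ~0.027: only ~3% of the warned area was hit
[/CODE]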
 
Greg,

Just curious, then. Based on your research, what is the typical probability threshold for a warning under the current system? I would guess it is subjective to the individual forecaster, but wouldn't this same subjectivity carry over to a probability-based system? Just labeling a scheme "probabilistic" vs. "deterministic" doesn't automatically make it superior. If a probability distribution is involved, do we have any idea at all what the confidence interval is?
 
I think you can get the current "threshold" by looking at the FAR. About 25% of TOR warnings verify, so the "simple version" would be to say that if a forecaster sees a 25% chance of a severe storm producing a tornado, he issues a warning.
 