Tornado Forecasting

I think someone brought this up on the 3/1/07 NOW thread, but I would like to start a discussion here. At any given time over an almost 36-hour span there were tornado warnings in effect at several locations across the country, yet very few of them produced. I have found in the Midwest that a tornado warning means everyone in the neighborhood runs outside to look at the sky instead of taking cover or otherwise preparing.

I spoke with my mom last night, who went through a historic tornado on 4/24/75 in Neosho, MO. She said they had a tornado watch that evening that expired 10 minutes before their sirens started going off for a strong F4 that a county deputy had spotted approaching the area. Obviously the work of Fujita, Bluestein, and many others, along with advances in radar, has helped shorten that recognition time. A much larger spotter community helps as well. :)

My questions are these: 1) Are there any organized research projects currently ongoing that will yield a more effective warning process? 2) What do you think will be the next big advancement that will change the way we forecast for and warn the public? (e.g., is there an experimental radar being used, etc.)

This is an exciting time for severe weather enthusiasts, with probability-based forecasting improving and the new warning system that will be in place next fall.
 
I don't want to say we've done all we can given our current technology, but we're pushing the limits. I think the next big pushes in warning will be phased-array radar and something like the CASA project in Oklahoma where many low-power radars are scattered over the region.
 
My current research project here at UNL deals with local perception of severe weather awareness and mitigation. The project makes up the bulk of my thesis research, entitled "An Assessment of the Potential Impact of a Catastrophic Tornado in Lincoln, NE". Due to my internship with the Lancaster County EMA, I don't feel it would be appropriate to reveal my findings here on a public forum until the project is fully completed.
 

Thank you. That is exactly what I was looking for. What has really got me interested is the GRLevelX radar software. I have not used it, but every time I see it I am amazed. I wasn't sure how close we are to getting accurate readings near the base of a storm, and projects like CASA and phased-array radar will help with this. I think the public's awareness and perception of a warning will improve a lot once the new warning system comes out and probability-based forecasting becomes more mainstream.
 
The 3D aspect of GRLevel2AE has been an eye-opener, and its ability to add on additional algorithms is also a big plus. The university community has had similar features with WDSS II, but restrictions on outsiders using it live kept those capabilities away from the masses until GR2AE.
 
Phased array radar will be a great advancement, but the big change to the warning system will be what we're calling "Warn on Forecast". As computing power matures in the next decade, very high-resolution cloud/ice models will be run frequently (multiple times per hour) and will use radar data in their assimilation processes. Right now forecasters wait to see certain visual clues on radar before issuing a warning, generally supercell structure with low-level velocity convergence below a strong mesocyclone. Added to the mix are visual clues from spotters and near-storm environment data from models and local analysis tools.

The idea is that the model will be good enough to forecast the development of these structures before the visual (radar/spotter) clues appear, maybe an hour or so before they exist. Such models will also improve the near-storm environment analysis tools. It's a long way from here to there, but all of this should allow fewer false alarms, longer lead times at the current skill scores, and better skill scores at shorter lead times.
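To make the contrast concrete, here is a purely illustrative Python sketch of "warn on detection" versus "warn on forecast". The field names, thresholds, and forecast source are all invented for the example and do not come from any actual NWS or research system.

```python
# Toy contrast between "warn on detection" and "warn on forecast".
# Everything here (field names, threshold, units) is hypothetical.

from dataclasses import dataclass

@dataclass
class StormState:
    meso_rotation: float             # observed low-level rotation right now (arbitrary units)
    forecast_rotation_60min: float   # model-forecast rotation one hour from now

ROTATION_THRESHOLD = 1.0  # made-up value for a "warnable" mesocyclone

def warn_on_detection(storm: StormState) -> bool:
    # Current practice (simplified): warn once the radar clue is already there.
    return storm.meso_rotation >= ROTATION_THRESHOLD

def warn_on_forecast(storm: StormState) -> bool:
    # Warn-on-Forecast idea (simplified): warn when a frequently cycled,
    # radar-assimilating model predicts the clue will develop within the hour.
    return storm.forecast_rotation_60min >= ROTATION_THRESHOLD

storm = StormState(meso_rotation=0.4, forecast_rotation_60min=1.3)
print(warn_on_detection(storm))   # False -> no warning yet under current practice
print(warn_on_forecast(storm))    # True  -> warning could go out roughly an hour earlier
```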
 

Keep in mind many tornadoes go unnoticed even with all the chasers and spotters out there. It is quite possible some of the warned storms did produce brief tornadoes, possibly obscured by rain, which simply didn't hit anything. It is also not out of the realm of possibility that some storms without tornado warnings produced unwitnessed tornadoes.
 

Even given the above, a success rate of 25% isn't that good...

I agree that many go unreported/unnoticed, but like rdale said, 25% is pretty low. I am not in any way trying to insult the NWS here, because I would much rather have a few too many warnings than take a chance on missing one. The main question I had was what things are in the works to improve on this even more and fine-tune it.
 

The "Warn on Forecast" idea is very interesting. I wonder how it would be implemented if it were perfected. Would there be an advisory category? Would it actually be good enough to issue a warning an hour out? I doubt that would be the case. If we could get an hour's notice on when a tornado was going to occur: 1) many more lives would undoubtedly be saved, and 2) we would get some GREAT tornado footage!
 
I think there is WAY too little mesoscale data to enable something like that... Too many things that are critical to tornado formation happen on a scale that is not measured, so I have a hard time thinking a model can be THAT helpful.
 
It helps to be a bit more forward-thinking: having an operational system in 10-20 years requires designing systems based not on what is currently possible but on what you expect to be possible by that time. Observing networks and computing power will be considerably advanced from their current state in 20 years (think back to what has become available in just the last 10 years alone). These are cutting-edge research techniques that may never reach operational status, but you have to begin somewhere.

Anyhow, one of the more promising systems in development is an ensemble-based data assimilation system, one of the benefits being the production of probabilistic forecasts. Warnings out to about an hour or more may sometimes be possible, particularly for 'easy' events (such as an isolated cyclic tornadic supercell). Heck, on occasion you see warnings nowadays with lead times out to 45 minutes or so (see the tornado warnings from May 3, 1999, for instance).
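As a rough illustration of how an ensemble can yield a probability, here is a small sketch; the member values and the "tornado-like vortex" threshold below are invented for the example and are not from any real system.

```python
# Toy ensemble-based probability: the guidance value is just the fraction of
# ensemble members that develop a strong low-level vortex. Values are made up.

ensemble_vorticity = [0.012, 0.031, 0.027, 0.008, 0.035, 0.029, 0.015,
                      0.033, 0.026, 0.030]   # max low-level vorticity (s^-1) per member
VORTEX_THRESHOLD = 0.025                     # arbitrary "tornado-like vortex" cutoff

members_exceeding = sum(v >= VORTEX_THRESHOLD for v in ensemble_vorticity)
probability = members_exceeding / len(ensemble_vorticity)

print(f"{members_exceeding}/{len(ensemble_vorticity)} members -> {probability:.0%} probability")
# 7/10 members -> 70% probability
```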

There are some fundamental science questions about how storms produce tornadoes whose answers should improve our ability to distinguish tornadic from non-tornadic storms (hopefully VORTEX2 gets funded and we can learn some of these things). As for the tendency to overwarn, the easy 'fix' is not to warn at all, so no false alarms. If that doesn't sound so great, then you have to accept that increasing the success rate of warning prior to tornado touchdown also increases the number of false alarms. Current performance pre-warns on ~70% of all reported tornadoes, and the false alarm rate is nearly the same (so only about 1/3 of tornado warnings 'verify'). Steady improvements in technology and science will improve warnings, but the current hope is to reach 80% detection and a 50% false alarm rate in about 20 years, so it's a slow improvement.
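For anyone curious about the arithmetic, here is a quick sketch of the standard contingency-table scores. The raw counts are invented; only the ratios (~70% detection and ~70% false alarms now, versus an 80%/50% goal) come from the numbers above.

```python
# Standard 2x2 contingency-table verification scores (counts below are made up).

def scores(hits, misses, false_alarms):
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio (unverified warnings)
    csi = hits / (hits + misses + false_alarms)   # critical success index
    return pod, far, csi

# Roughly today: ~70 of every 100 tornadoes get warned in time,
# and roughly 70% of tornado warnings never verify.
print(scores(hits=70, misses=30, false_alarms=163))   # POD 0.70, FAR ~0.70, CSI ~0.27

# The stated hope for ~20 years out: 80% detection with a 50% false alarm rate.
print(scores(hits=80, misses=20, false_alarms=80))    # POD 0.80, FAR 0.50, CSI ~0.44
```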
 