Tornadogenesis question

In an attempt to restore some good meteorological discussion, I have posed this question: what will it take to truly understand tornadogenesis? I think meteorology students, scientists, and avid followers all have an idea of how a tornado forms (i.e., the processes involved in tornado formation). We vaguely understand the conditions that produce tornadoes, but there is never a sure thing. There is always the question of why Storm A produced and Storm B did not. What will it take to get that answer, or will we NEVER know? Even if this question can't be directly answered, it would be great to hear ideas and/or see new research. VORTEX2 is out again this year, and last June it documented what amounts to one of the (if not THE) most photographed, documented, and studied tornadoes in history. If scientists draw a blank from this tornado, what's next? Will we ever solve Mother Nature's dirty little secret, or will it always be a mystery?

Thoughts?

Not having half the background in meteorology that a number of those on this board have, I probably can't give an expert opinion on the matter. We understand the conditions that produce supercells, we understand some of the physics involved, and we can even target where to go to find these storms. Just understanding that much is a huge achievement to me. Now, what gets the scientists over the hump? There have been many encounters with tornadic circulations in the past couple of years, and I can only imagine most of that data is unprecedented; however, does anyone really believe there is a secret in all of the numbers, video, and findings? Personally, I am not sure we will EVER be able to say accurately which storm will produce a tornado. That would be a monumental achievement, but it would also raise more questions. (When will this storm form? What path will it take? How do we improve warnings?) I know there is no containing or fully deciphering Mother Nature, but with the technology we have today and the new data being gathered... will we ever?
 
The best prediction is still just an educated guess. Forecast models, ensembles, and everything else only give you insight into what COULD happen. Even the RUC is just a suggestion. You can make the perfect forecast the day before, or even six hours before an event, but a sudden unforeseen change in the last hour can upend the entire thing.

The secret is buried deep, I'm sure. If we had the ability to launch weather balloons (or somehow sample the same kind of data) every hour and at many more locations, I think we would learn a lot more... but we just don't have the resources to do that. Even then, are hourly increments enough, or would we have to sample every 15 minutes?
 
I guess we need to start with the basic questions of why tornadoes form and then move on to how, before we can start talking about why one storm produced while another sure thing busted, and why one warm-front day (4/20/04) went nuts. Things like this fascinate me, and it would be interesting to hear everyone's scientific or non-scientific thoughts and theories. Think of what we knew in 1950 compared to what we know now, and we still aren't even close to the tip of the iceberg.
 
I found Erik Rasmussen's talk at Chasercon about tornadogenesis unknowns to be extremely interesting. You can see the slides here from the same (or similar) talk that was given at COD's severe weather symposium:
http://sws2009.cod.edu/

What I took away from the talk is that computer models love to produce tornadoes. Given the right parameters, they always produce tornadoes. So the big question is: why don't real-life storms always produce tornadoes? We seem to have a decent grasp on why supercells produce tornadoes, through the updraft's stretching of vorticity arches created by the RFD, the aggregation of eddies on the RFD gust front that build up under the updraft base, or the tilting and stretching of streamwise vorticity. One or all of these processes look like good candidates for why tornadogenesis occurs. Obviously something is missing, then, in all the storms where these processes could exist but no tornado formed. Was the RFD not strong enough to create the vorticity arches or shear eddies? Maybe it was too cold and dry and didn't have the buoyancy needed? Was the occlusion point between the RFD gust front and the inflow not located in the right place, so it couldn't be stretched by the updraft?

Advancements in computer hardware and parallel processing are still making tremendous bounds. In a few decades we will probably be able to run a microscale model of an existing supercell in real time and see what it's going to do next, provided we have the current state of the atmosphere. The weak link is going to be getting the existing conditions into the simulation. You'd need a super-dense grid of 3D data points, maybe down to inches per point, over a huge area, like several hundred square miles, including all the necessary variables like temperature, dewpoint, pressure, winds, and others. Creating that dataset from ground-based instruments doesn't seem very plausible. You'd have to be able to sample the atmosphere in 3D from a satellite down to an amazing resolution, and the technology to do that isn't there at all. So while I think we'll have the computing power to create extremely realistic models of supercells, we might not have the means to model existing supercells as they are happening for decades to come. In the meantime we'll probably only be able to make better guesses as to when a tornado is going to form and when one is not, through what we can see on mesoscale models and Doppler radars.
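To put some rough numbers on that weak link, here's a back-of-envelope sketch of the data volume such a dense 3D observation grid would imply. Every figure below is my own illustrative assumption (and I've used a far coarser 1 m spacing than the "inches per point" above, which only makes it worse):

```python
# Rough estimate of one snapshot of a "super dense grid of 3D data points
# over several hundred square miles". All numbers are illustrative guesses.

MILE_M = 1609.34                 # meters per mile

area_mi2  = 300                  # assumed horizontal coverage (sq miles)
depth_m   = 15_000               # assumed depth of sampled atmosphere (m)
spacing_m = 1.0                  # assumed grid spacing (post says "inches")
n_vars    = 6                    # e.g. T, Td, p, u, v, w
bytes_var = 4                    # one 32-bit float per variable

area_m2  = area_mi2 * MILE_M**2
n_points = (area_m2 / spacing_m**2) * (depth_m / spacing_m)
snapshot_bytes = n_points * n_vars * bytes_var

print(f"grid points:    {n_points:.2e}")
print(f"bytes/snapshot: {snapshot_bytes:.2e} (~{snapshot_bytes / 1e12:.0f} TB)")
```

Tens of trillions of points and hundreds of terabytes per snapshot, and that's before you try to refresh it every few seconds, which is why sampling rather than computing looks like the bottleneck.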
 
To be honest, we need several more tornadoes with the depth of documentation and sampling that the Goshen tornado had last year during VORTEX2. Unfortunately, carrying out these studies every chase season is not logistically or financially feasible. There may be "something" found when poring over the data from last year, but it will also have to be confirmed from other storms with the same kind of data sampling. One or two additional tornado studies will not produce a robust statistical sample to confirm a theory.

Given how long it takes to organize and gather the resources for these field studies, I think it will probably take decades for us to find the answers. It is likely that a very good theory with a high success rate will come out of all of this, but I doubt we will be able to prove it with 100% certainty.

In the end, a new tornadogenesis theory is not going to help me determine whether Storm A on radar will produce a tornado rather than Storm B if there is no instrumentation around those storms feeding me direct measurements of the near-storm and in-storm environments in real time. It would be a tremendous resource to have, and I hate being a naysayer, but I just don't see this situation ever happening.
 
So the technology is there, or getting there. I saw Rasmussen's talk; it was definitely one to make you think. What comes after VORTEX2? Will there be a VORTEX3? What are the overall goals of those in the research field? Are they going for the big fish or slowly working their way up to the prize? Funding will probably be an issue as long as tornado deaths continue to decline; it's more of a want-versus-need type of deal. Back in the 1950s people needed to know how tornadoes formed, because our knowledge of the beast was fairly new and hundreds were dying. Once semi-accurate forecasting and instrumentation were developed, the warning system improved and the focus turned from "we need to know" how they develop to "we want to know" how they develop. As Skip says above, there is a fairly concrete theory of tornadogenesis... but the technology and sampling may never be achievable. How do the rest of you feel about this? Do you think we can ever develop a real-time mesoscale/microscale model and solve the age-old question?
 
I forget who said this to me, but the person had taken part (or was taking part) in both VORTEX projects. We were talking about VORTEX I and II, and he said:

"With VORTEX I we got answers, but were asking the wrong questions. It's likely in VORTEX II we will be asking the wrong questions again."

I think we have to learn how to ask the right questions if we are to make any further achievements in understanding tornadogenesis. What they are...I have no clue.
 
Advancements in computer hardware and parallel processing are still making tremendous bounds. [...] So while I think we'll have the computing power to create extremely realistic models of supercells, we might not have the means to model existing supercells as they are happening for decades to come.

Bingo, Skip. That's where it all is. You can stop the discussion there, in my opinion.
 
I think we have to learn how to ask the right questions if we are to make any further achievements in understanding tornadogenesis. What they are...I have no clue.

This is a very good point, Greg, and part of Erik Rasmussen's talk in Denver touched on this subject. He seemed to think it would be better for future research to focus on what conditions disrupt tornadogenesis. There are so many variables that can prevent tornado formation, but many factors must come together at precisely the right time in order for a tornado to form.

If meteorologists were able to identify just one of the factors that prevent tornadogenesis, a specific storm could be identified as non-tornadic relatively early in its life cycle which could potentially decrease the false alarm rate. It should be easier to discover and identify one of the many factors that prevent tornadogenesis rather than trying to identify and someday forecast all of the factors that lead up to a tornado. Perhaps future research should focus on the "physics of the interfering mechanisms" as suggested in Rasmussen's presentation.
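For what it's worth, the false alarm statistic in question is just a ratio of warning outcomes. A minimal sketch, with entirely made-up counts:

```python
# False alarm ratio: of all tornado-warned storms, the fraction that
# never produced a tornado. The counts below are invented for illustration.

hits         = 30   # warned storms that produced a tornado
false_alarms = 70   # warned storms that did not

far = false_alarms / (hits + false_alarms)
print(f"FAR = {far:.2f}")
```

Ruling out even a fraction of the non-tornadic storms early would shrink the numerator directly.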

One way or the other, a lot of work still needs to be done, but I do feel that the "secrets" of tornadogenesis will be known someday.
 
Bingo, Skip. That's where it all is. You can stop the discussion there, in my opinion.

You could have the highest resolution you want but the algorithms and equation sets have to be correct to interpret the data.

We need to be able to predict when and where a storm will develop before we can predict a tornado. We still have a very long way to go.
 
This is why I am very skeptical of "warn on forecast." We simply do not understand enough about tornadoes to forecast tornadogenesis even if we had two orders of magnitude more data surrounding a supercell (which we are not likely to have any time soon).

I think we are kidding ourselves if we believe otherwise.

That said, I do hope Vortex II will help us achieve the understanding we need to advance our theoretical understanding so we can achieve warn on forecast with a high degree of reliability.
 
I found Erik Rasmussen's talk at Chasercon about tornadogenesis unknowns to be extremely interesting. [...] What I took away from the talk is that the computer models love to produce tornadoes. Given the right parameters, they always produce tornadoes. So the big question is why don't real life storms always produce tornadoes? [...] The weak link is going to be getting the existing conditions into the simulation. [...] So while I think we'll have the computing power to create extremely realistic models of supercells, we might not have the means to model existing supercells as they are happening for decades to come.

I haven't yet gone through Erik Rasmussen's talk (I will later), but one of my main areas of research is numerical simulation of supercell tornadogenesis (and tornadogenesis failure), and I have produced many high-resolution simulations of supercells that in fact do not produce tornadoes (or only very weak ones). I'm working on publishing the results soon. I think the literature on this subject is biased toward tornado-producing supercell environments, which may explain Erik Rasmussen's observation.

Most of my simulated supercells that do not produce tornadoes, incidentally, have relatively cold and dry (low theta-e) RFDs. In general my results are pretty consistent with Markowski's observational work on the difference between tornadic and non-tornadic supercell RFDs: warm RFDs tend to be associated with stronger, longer-lived tornadoes for a given environment, while cold RFDs are associated with weaker or nonexistent tornadoes.

EDIT: By the way, I agree that one of the limiting factors is a lack of data on these scales, but at present, model errors due to parameterizations of cloud and precipitation microphysics, turbulence, and other physical processes about which we still have much to learn are at least as important as the relative lack of data. This will likely change in the coming decades as our models become more refined and these parameterizations become more accurate. Fortunately, data assimilation techniques such as the Ensemble Kalman Filter, which squeeze the most benefit out of limited data, are making great strides as well. The next several decades should be exciting indeed in this area.
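For anyone curious what an Ensemble Kalman Filter actually does with a sparse observation, here is a toy sketch of one analysis step for a single observed scalar. All numbers are invented; real assimilation systems work on full 3D model states with localization and thousands of observations:

```python
import numpy as np

# Stochastic EnKF analysis step for one observed scalar
# (think: a single surface temperature observation).
rng = np.random.default_rng(0)

ens = rng.normal(loc=20.0, scale=2.0, size=50)  # prior ensemble (deg C)
obs, obs_var = 23.0, 1.0                        # observation and its error variance

# Kalman gain estimated from the ensemble itself: K = P / (P + R)
prior_var = ens.var(ddof=1)
gain = prior_var / (prior_var + obs_var)

# Nudge each member toward its own perturbed copy of the observation,
# so the analysis ensemble keeps a realistic (reduced) spread.
perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=ens.size)
analysis = ens + gain * (perturbed_obs - ens)

print(f"prior mean {ens.mean():.2f} -> analysis mean {analysis.mean():.2f}")
print(f"prior var  {prior_var:.2f} -> analysis var  {analysis.var(ddof=1):.2f}")
```

The ensemble mean moves toward the observation and the spread tightens, which is the "squeezing the most benefit out of limited data" part in miniature.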
 
This is why I am very skeptical of "warn on forecast." We simply do not understand enough about tornadoes to forecast tornadogenesis even if we had two orders of magnitude more data surrounding a supercell. [...]

I'm working on a WOF-inspired project right now (ensemble prediction of the Greensburg storm and associated circulations), and while I agree that we have a lot to learn, you would be surprised at how much we can get even now out of our imperfect model and data assimilation schemes, especially when coming at it from an ensemble probabilistic approach (i.e. probability of a tornado within 25 miles of a point 1-2 hr in the future, as an example).
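A minimal sketch of what such an ensemble neighborhood probability looks like in practice. The member positions below are fabricated toy data, not anything from the Greensburg runs:

```python
import numpy as np

# Neighborhood probability from an ensemble: the fraction of members
# whose forecast circulation lies within 25 miles of a point of interest.
rng = np.random.default_rng(1)

point = np.array([0.0, 0.0])    # point of interest (x/y, miles)

# One forecast circulation position per member, scattered around (10, 5)
members = rng.normal(loc=[10.0, 5.0], scale=15.0, size=(40, 2))

dist = np.linalg.norm(members - point, axis=1)
prob = (dist <= 25.0).mean()

print(f"P(circulation within 25 mi of point) = {prob:.0%}")
```

Even when no single member is trustworthy, the fraction of members agreeing gives a calibrated-ish probability, which is the appeal of the ensemble approach.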
 
Hopefully some of the VORTEX2 data gathered from last year's June 7th monster supercell in northwest Missouri will be of some assistance. That storm seemed to have everything going for it, and yet it only produced a few brief, weak tornadoes. The only thing lacking (IIRC) was a more robust H8 (850 mb) component. Storms like these may prove just as useful in understanding how tornadogenesis occurs as the ones that go on to be prolific tornado producers.
 
This is why I am very skeptical of "warn on forecast." We simply do not understand enough about tornadoes to forecast tornadogenesis even if we had two orders of magnitude more data surrounding a supercell. [...]
WoF is more than an end result; it is a process. Even if we assume for a moment that we never reach the point of predicting tornado development ahead of time via a model, the advancements in understanding and data-mining techniques from going down this path will be worth the effort.

I'm currently working on some high-resolution WRF ensemble visualization and data-mining methods. Will this result in an accurate prediction of tornadogenesis hours ahead of time? Most likely not. However, the ensemble data-mining and visualization techniques I'm working on will be applicable to the WoF problem and to forecasting as a whole.

WoF is a process; the process, in my opinion, will be more beneficial to the science and public than the end result.
 