Tornado forecasting mesoscale models?

This thread is aimed at the more technically inclined.

I wanted to see if anyone else sees this coming in the future. I haven't heard anything about it, ever, but I definitely see some potential down the road for a high-resolution mesoscale model dedicated solely to forecasting tornadoes and perhaps other supercell-scale threats.

Of course this would be many years down the road, once we understand tornadoes in more detail and processing power improves. I can see it using a limited domain, i.e. maybe state by state, or even smaller. I am not sure exactly what it would take.

But what are your thoughts? I wanted to throw this out there as food for thought. Does anyone else think this is a likely possibility?
 
I'll chime in since I have a little experience with modeling (http://stormtrack.org/forum/showthread.php?t=19509&highlight=mesoscale+model). While I may not know as much about operational weather forecasting models, my gut instinct is that this may still be a long way off. I believe the finest grid spacing currently used in operational forecasting models is about 4 km (correct me if I'm wrong), and even at that scale you need to parameterize convection. Remember that processing cost typically goes up as roughly r^3 as the resolution improves, so even if you shrink the grid to the size of a state you're going to have computational issues. If it's not meant to be a forecasting tool, there have been models that produce tornadoes from historical tornadic conditions, but I suspect the run time would be substantially longer than the time it takes the event to unfold. Again, people can correct me if I'm wrong.
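To put a rough number on that r^3 scaling, here is a quick back-of-the-envelope sketch (purely illustrative, not tied to any particular model): halving the horizontal grid spacing quadruples the number of grid columns and, through the CFL condition, roughly halves the usable time step, so the cost grows with the cube of the refinement factor.

```python
# Back-of-envelope cost scaling for horizontal grid refinement (illustrative only).
# Assumes cost ~ (grid columns) x (time steps), with the time step tied to the
# grid spacing by a CFL-type stability condition.

def relative_cost(dx_old_km: float, dx_new_km: float) -> float:
    """Approximate cost multiplier for going from dx_old to dx_new."""
    ratio = dx_old_km / dx_new_km
    return ratio ** 3          # ratio**2 more columns, ratio more time steps

print(relative_cost(4.0, 1.0))    # 4 km -> 1 km:  ~64x the cost
print(relative_cost(4.0, 0.05))   # 4 km -> 50 m:  ~512,000x the cost
```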

On another note, I believe the problems I was having with my model were the result of the equation set I was using. My suspicion is that tornadic models would need a different equation set than what is presently used; I believe the current equation set does not accurately model the transfer of energy. A paper I read suggested an equation set that didn't use potential temperature, which would be a change from present models.
 
While I think it would be beneficial for models to be able to directly predict tornadoes in the future, I think a model whose only purpose is to forecast tornadoes and other supercell-related threats would probably be a waste of computational expense. There are many other weather issues on the mesoscale and smaller that need to be forecast, and the model would have to account for all of that. Given the resolution requirements for modeling something on that small a scale (we're talking on the order of tens to hundreds of meters), you'd have to run models at 1-10 m grid spacing to actually resolve tornadoes and/or supercells. Operationally speaking, that is simply too much right now for model runs to complete in a reasonable time. You could get around it by running on a very small domain, but how small, and how would you know which area to center it over? If you're trying to use it to forecast tornadoes, you'd already have to be making forecasts just to decide where to center the model domain.
 
There are cloud-resolving models that do not use convection parameterization; I have personally seen models run at 0.5 km. However, such high-resolution models are only as good as the accuracy of the larger domain. I would think they would be best used once convection has actually developed, to determine what mode the convection might follow. I have also seen simulations at 100 m resolution, but I am not sure how well we understand the dynamics of tornadogenesis, so those seem like works in progress. Here is a link to some pretty high-resolution simulations of supercells and tornadoes. It's pretty cool how detailed and realistic these are:

http://serc.carleton.edu/NAGTWorkshops/health/visualizations/tornados.html

 
I know of models with smaller grid sizes (look at what I made, and there are some models that can be pushed to sub-meter grid spacing). However, does the time to compute a time step take longer than the size of the time step for these models? If so, I don't think you could use them for forecasting. If not, that's pretty sweet.

 
What I've been wondering is why there's no atmospheric supercomputer that can run models based on current planetary conditions. In principle it should be possible to run a fairly accurate virtual Earth. Heck, there probably already is one and I've just never heard of it. I would think this could predict supercells and even tornadoes, if all the possible calculations were done with all the possible data. OK, maybe I'm just dreaming, but I don't think it's that far-fetched.
 

The GFS is a global model, as are the ECMWF and the GEM (though you'll usually see the latter in a regional configuration). However, if you want to get into the supercell-forecasting range, assuming you're talking about explicitly modeling supercells, then you need to run the model at a grid spacing smaller than 4 km, and you'd want something well under 50 m to get into the tornado-modeling range (perhaps you can model large tornadoes with slightly larger grid spacing, but you'd only barely be able to capture them). Running a model at, for example, 1 km (or 10 m, for tornadoes) grid spacing over the entire globe would require a tremendous amount of computing resources, particularly since you'd want the model output in a relatively short time if you want to use it in operational forecasting. Right now the supercomputers used to run the NAM and GFS (and the other models at NCEP) must divide up the time to run each model, since there are only so many computing resources. The fact of the matter is that you would need a lot of computing resources dedicated solely to that one model for a long time, which is something that requires a lot of money (and time).
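Just to illustrate the scale of the problem, here is a quick count of horizontal grid columns for a hypothetical global model (my own rough numbers, assuming Earth's surface area is about 5.1 x 10^8 km^2). The totals blow up fast, before you even add vertical levels or the time dimension:

```python
# Rough count of horizontal grid columns for a hypothetical global model
# at several grid spacings; assumes Earth's surface area ~5.1e8 km^2.
EARTH_SURFACE_KM2 = 5.1e8

for dx_km in (25.0, 4.0, 1.0, 0.01):          # 25 km, 4 km, 1 km, 10 m
    columns = EARTH_SURFACE_KM2 / dx_km ** 2
    print(f"{dx_km:6.2f} km spacing -> ~{columns:.1e} grid columns")
```

Multiply each of those by 50-100 vertical levels and the number of time steps required, and the 10 m case is clearly out of reach for operational use.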

CAPS did have a 1 km, convection-allowing WRF run this spring in support of VORTEX 2. Otherwise, 3 or 4 km WRF runs have been available online from various entities over the past couple of years. Anecdotally, I've seen "supercell structures" in the 4 km runs, but they become a bit more realistic-looking at higher resolutions. I'm not terribly well informed on the latest studies regarding the effect of model resolution on convective-scale structures, however.
 
I know there are ongoing experiments with this kind of modeling for us in warning operations; it's being called "Warn On Forecast." If I remember right, the model basically goes out 1-2 hours. However, it takes 3-4 hours to complete a run, and that's for a very small domain, something like CWA-size or less.
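One simple way to frame whether a run like that is usable for warnings is the ratio of forecast length to wall-clock run time; anything under 1 means the weather beats the model. A minimal sketch using the rough figures above (the 15-minute case is purely hypothetical):

```python
# Speedup relative to real time: forecast length divided by wall-clock run time.
# A ratio below 1 means the event is over before the run finishes.

def speedup(forecast_hours: float, wallclock_hours: float) -> float:
    return forecast_hours / wallclock_hours

print(speedup(2.0, 4.0))   # ~0.5 with the figures above: slower than real time
print(speedup(2.0, 0.25))  # hypothetical 15-minute run: 8x real time, usable
```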
 
A few challenges with this are:

- In order to increase your resolution (going from 4 km down to 1 km or less), you drastically increase your processing time, so you need either an insane supercomputer or a smaller domain.

- When you make your domain smaller (like the size of a state), you really limit how far into the future you can accurately forecast, because you're not moving air through a big enough domain in space and time. I'm not sure I'm explaining it well, but in order to be really accurate you need a large domain, no matter how high the model's resolution is.

A good example of this is the GFS. The GFS is a fairly coarse-resolution model, but it does quite well because it covers a large domain (the whole globe), so atmospheric features can traverse the model's space and time well.

So the trick is to have a large domain with high resolution. Currently, computing power pretty much limits us to what the NSSL 4 km WRF-ARW is doing, although you could argue its domain should be larger still. I feel like you might get better results if they bumped it back to 5 km and went with a larger domain.

That would also probably give us a longer view into the future with more accuracy. Again, I point to the GFS and how it can handle jet structure pretty decently out a couple of weeks in time.

- Another big factor I see is that tornadoes are DEPENDENT on "all the other stuff." So I wouldn't have a model forecasting only tornadoes; it doesn't make sense, because you generally need a "severe" storm (significant updraft, hail) to help with tornadogenesis. It all goes hand in hand and is all interdependent.

I think what you may be looking for, more realistically, is some sort of TVS-type parameter that can identify storm structure in the high-resolution precipitation output, combined with a good index that identifies a tornadic environment, with a tornado symbol then plotted on the reflectivity the model spits out.
 
A recent article (Kain et al. 2008) describes a method by which the 4 km WRF-NMM has been used to forecast supercells. See the product called updraft helicity at this link: http://www.emc.ncep.noaa.gov/mmb/mpyle/cent4km/conus/00/.

Values of UH > 50 m2/s2 were used as a "mesocyclone indicator" in the model runs performed for that article, with some success.
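For anyone curious what that product actually is: updraft helicity is the integral of vertical velocity times vertical vorticity over the 2-5 km AGL layer. Here is a minimal single-column sketch; the function and the toy profile are my own illustration, not code from the article.

```python
import numpy as np

def updraft_helicity(z_m, w, zeta, z_bot=2000.0, z_top=5000.0):
    """Layer updraft helicity (m2/s2): integral of w * vertical vorticity
    over the z_bot..z_top AGL layer (2-5 km, as in Kain et al. 2008)."""
    z_m, w, zeta = map(np.asarray, (z_m, w, zeta))
    layer = (z_m >= z_bot) & (z_m <= z_top)
    f, zl = w[layer] * zeta[layer], z_m[layer]
    # trapezoidal integration of f over the layer heights
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zl)))

# Toy profile: a 20 m/s updraft co-located with 0.01 /s vorticity through the
# 2-5 km layer gives UH = 20 * 0.01 * 3000 = 600 m2/s2.
z = np.arange(0.0, 10000.0, 250.0)
w = np.where((z >= 2000.0) & (z <= 5000.0), 20.0, 0.0)
zeta = np.where((z >= 2000.0) & (z <= 5000.0), 0.01, 0.0)
uh = updraft_helicity(z, w, zeta)
print(uh, uh > 50.0)   # well above the 50 m2/s2 mesocyclone threshold
```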

As a grad student who will be spending tons of time making WRF runs (I'm in the learning stage now), I have made runs with supercells in them, and in simulated reflectivity the overall structure of the supercells was apparent at 4 km, although the cells were somewhat larger than you would observe in nature. That is likely due to the grid spacing.
 
The current problem with real-time mesoscale modeling is not (necessarily) processing speed, or the time involved depending on your domain setup, though that does have an impact.

The problem is the ever-present GIGO problem: garbage in, garbage out. Unless we are able to resolve the initial conditions with high-resolution accuracy, mesoscale modeling will never be able to provide much accuracy.

It's just precision versus accuracy.

The models will become increasingly precise. However, their accuracy will never be any good if we don't also have high-resolution observations. This includes spatial resolution (horizontal and vertical) and temporal resolution.

Even several years ago we were able to run MM5 and WRF at 4 km grid spacing on out-of-the-box basic PCs running Linux. It's fairly easy to do. The models are only getting better, and the technology appears to be accelerating. This is great news for forecasting!

The problem is our remote sensing of the atmosphere. Mesonets are great for ingesting into models, but mesonets are few and far between, expensive, and (unless well maintained) subject to failure and error. We also all know how few and far between the upper-air sites are. There has been some compensation from aircraft data (sensors on commercial airliners) and other remote-sensing data, but we're still missing a lot of sampling of the atmosphere.

Forecast accuracy will never get any better unless we have the observations to characterize the current conditions that go into those models.

The problem? Money. Those sites, sensors, data transmission, etc. are expensive. I don't know if there is a solution currently. It will eventually become cheaper to put in new sites and sensors, but it will never keep up with the demands of the models that use that data.

Just something to think about, and a reason to appreciate the agencies and technicians that install and maintain the mesonets, AWOS/ASOS, upper-air sites, etc.

Robb
 
Rob,

Excellent summary. You can't consistently forecast well on a scale finer than the one at which we observe the atmosphere.

Second, since we don't know the process of tornado spin-up (hopefully Vortex II will help), we can't model the process directly.

Mike
 