
Numerical modeling of convective storms (tangent to the 03/01/07 forecast thread)

This thread concerns a numerical model run mentioned by Justin E. Reed for the upcoming event this Thursday in the eastern U.S. I wanted to make some general comments about numerical modeling of storms in reply to his posts.

I guess I'm just skeptical, being a numerical storm modeler myself, about the ability of any current-generation numerical model to accurately predict storm severity. We do appear to have some skill in predicting storm mode with some of the newer high-resolution operational models, but we have a *long* way to go before we can accurately predict things like mesocyclone strength, hail size, or the likelihood of tornadoes in individual storms.

Similarly, it will be a while before we can *consistently* predict, even a few hours in advance, the location, motion, and behavior of individual storm cells, and some argue this may even be impossible, based on predictability arguments from chaos theory. I myself think that some limited short-range predictability of individual storms is possible, as we can already do this in certain cases, but I'd say we have at least a couple of decades to go before this sort of prediction becomes viable on a day-to-day basis.
 
Yes, I definitely agree that ensemble forecasting shows great promise. For example, I think it will become possible in the next couple of decades to have something like a convection-resolving, ensemble-based operational NWP system that predicts likely areas of storm initiation, propagation, and mode with fidelity on the scale of a couple of counties or so. We may not be able to predict consistently that a supercell will, say, initiate 20 miles NW of Lawton in 2 hours and move at 30 mph to the NE, producing a tornado near Chickasha in 4 hours. But we may be able to predict, based on ensembles, that a supercell is *likely* to initiate in a given area within an hour or so, how likely the storm would be to produce strong surface winds and large hail over a certain region, and maybe even the likelihood of it being tornadic. The prediction would almost certainly have to be probability-based, similar to the current SPC outlook probabilities, but on a much smaller scale in both time and space.
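To make the probability-based idea concrete, here is a minimal sketch of how a convection-resolving ensemble gets turned into a probabilistic product: count the fraction of members that exceed some severity threshold. The variable (2-5 km updraft helicity), the threshold, and the member values below are all made-up illustrations, not output from any real system.

```python
# Hypothetical sketch: converting an ensemble of convection-resolving
# runs into an exceedance probability at one grid point.

def exceedance_probability(member_values, threshold):
    """Fraction of ensemble members at or above a threshold."""
    hits = sum(1 for v in member_values if v >= threshold)
    return hits / len(member_values)

# Simulated 2-5 km updraft helicity (m^2/s^2) at one grid point
# from a 10-member ensemble (illustrative numbers only).
uh_members = [12.0, 85.0, 60.0, 5.0, 110.0, 40.0, 95.0, 8.0, 70.0, 30.0]

# Probability that a rotating storm (here, UH >= 50) affects this point.
p_rotating = exceedance_probability(uh_members, 50.0)
print(f"P(UH >= 50) = {p_rotating:.0%}")  # 5 of 10 members -> 50%
```

In practice one would smooth such fields in space and time (so-called neighborhood probabilities) rather than read them off raw grid points, but the basic member-counting idea is the same.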

I myself haven't worked much at all with the ensemble approach. My interests lie mostly in getting our models to produce realistic storm structures to begin with, and I have something of a mild deterministic bias ;). That said, some of my and many others' (mostly others :) ) work suggests that with good initialization of the model from various data sources (including radar data), and with sophisticated-enough parameterizations of microphysics in particular, even deterministic (i.e., single-model-run) prediction of individual storms (particularly well-organized supercells) is not totally out of the question.

There are many interesting questions that stem from this. To throw out just a few: are supercells inherently more predictable from an NWP standpoint than, say, an ordinary or multicell thunderstorm (they seem to be)? If so, why? Are MCSs more predictable than supercells? (I would say the answer is probably yes.) How do we design future observing systems, particularly radar systems, to maximize our ability to analyze relevant atmospheric features on the storm scale, for use in initializing storm-resolving NWP models? Radars do a good job observing a limited set of variables, mainly reflectivity and radial velocity, but leave it to us to figure out how to extract information about other crucial state variables (temperature, pressure, and the other wind components, to name a few). This is an ongoing, very active, and exciting area of research right now!
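The "limited set of variables" point is easy to see from the geometry: a Doppler radar measures only the projection of the full wind vector onto the beam, so the forward relation from a model's (u, v, w) wind to the observed radial velocity is a simple trigonometric projection. The sketch below shows that standard projection (azimuth clockwise from north, elevation from the horizontal); it neglects hydrometeor fall speed, which real assimilation systems have to account for.

```python
import math

def radial_velocity(u, v, w, azimuth_deg, elevation_deg):
    """Project a (u, v, w) wind vector (m/s) onto the radar beam.

    Azimuth is measured clockwise from north, elevation from the
    horizontal. Hydrometeor fall speed is neglected for simplicity.
    Positive values mean motion away from the radar.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (u * math.sin(az) * math.cos(el)
            + v * math.cos(az) * math.cos(el)
            + w * math.sin(el))

# A pure westerly wind (u = 20 m/s) viewed by a beam pointing due east
# at 0 degrees elevation is entirely radial:
print(radial_velocity(20.0, 0.0, 0.0, 90.0, 0.0))  # ~20.0

# The same wind viewed due north is entirely tangential, so the radar
# sees essentially nothing:
print(radial_velocity(20.0, 0.0, 0.0, 0.0, 0.0))   # ~0.0
```

This one-component view of the wind is exactly why retrieving the unobserved variables (the cross-beam winds, temperature, pressure) from radar data is such a hard and active research problem.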
 