Machine shows promise in precise thunderstorm forecasting

By Byron Spice, Pittsburgh Post-Gazette, July 06, 2005

Meteorologists running an experimental forecasting model, powered by the computing muscle of the Pittsburgh Supercomputing Center, this spring were able to predict thunderstorms in unprecedented detail 24 hours in advance.

In some cases, the forecast model predicted storms to within 20 miles of their actual location and within 30 minutes of their actual time.

Just as important, the computerized forecasts produced images "that looked very similar to what we see on radar," said Steven Weiss, science operations officer at the National Weather Service's Storm Prediction Center in Norman, Okla.

Seeing the structure of the predicted storm, he explained, is important in determining whether a storm is likely to produce tornadoes, hail or dangerous winds.

"It was an eye opener in many respects," he said of the experiment.

The forecasts were produced by the Center for Analysis and Prediction of Storms at the University of Oklahoma, which has been working with the Pittsburgh Supercomputing Center for almost two decades.

more:
http://www.post-gazette.com/pg/05187/533375.stm
 
Wow, now that would be interesting. If a computer model could be that detailed even within 24 hours, it would be a huge step for forecasters and chasers. But I'm sure there is a lot more that has to go into this model before it is finally accepted by any forecaster...
 
I'm assuming they are talking about the WRF (Weather Research and Forecasting model): www.wrf-model.org What the article doesn't tell you is that it totally blew a few events too ;). That said, I did notice it did quite well on several events - one (a high risk with too little cap, squall-o-rama) rings a bell.

Aaron
 
A version of the WRF with 4 km resolution was run in support of BAMEX in 2003, and it often produced very realistic reflectivity structures, often in the correct state, but as Aaron pointed out, it also totally screwed the pooch on some things as well.
 
We've used the WRF model pretty heavily this Spring. The modelers have done an exceptional job at improving the physics over the last two years (and it's far superior to the version we had during BAMEX).

The good news is that the model tends to do an excellent job of forecasting convective mode (bow echoes, supercells, MCSs, etc.) and usually nails the "obvious" events. This is to be expected, since the model explicitly resolves convection rather than having it parameterized.

However, it continues to struggle to forecast, or even initialize, several widespread events for unknown reasons, or it can be off by several hundred miles in location. The problem is that we don't know when or why this occurs, simply because the model is too new to have an established set of biases and flaw cases.

For now, I consider it just another tool, but it's showing increasing promise. The one catch-22 is that, due to the extensive physics involved, computing power (as great as it is) still falls well short of running this model with any inkling of speed. As a result, it runs too slowly to be useful where it could really have a significant impact (the 1-6 hour time frame).
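Just to put some rough numbers on that timing problem, here's a quick back-of-the-envelope sketch - the data cutoff, wall-clock and dissemination figures are made up for illustration, not actual timings from these runs:

```python
# Rough sketch of how run latency eats into the 1-6 hour window.
# All numbers are hypothetical placeholders, not actual WRF timings.

DATA_CUTOFF_H = 1.0     # assumed hours after init time to gather observations
WALLCLOCK_H = 5.0       # assumed wall-clock hours to integrate the run
DISSEMINATION_H = 0.5   # assumed hours to post-process and distribute output

latency_h = DATA_CUTOFF_H + WALLCLOCK_H + DISSEMINATION_H

for fcst_hour in range(1, 7):  # the 1-6 hour forecast window
    status = "still ahead" if fcst_hour > latency_h else "already past"
    print(f"forecast hour {fcst_hour}: {status} "
          f"(output arrives ~{latency_h:.1f} h after init)")
```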

Evan
 
The 4 km horizontal resolution WRF was run this spring and through this summer, once again experimentally. Running an NWP model at this resolution allows convection to be handled explicitly by the model's equations... You can't do this at any coarser resolution, because you would be removed from the storm-scale dynamics that are so important to convective mode and evolution. So, at coarser grid spacings, modelers are forced to "parameterize" convection with various schemes (BMJ, Kain-Fritsch, Tiedtke, etc.) in order to represent the crucially important scale interactions that convective processes produce (due to massive latent heat release, cloud cover, etc.).
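For a rough sense of the scale involved, here's a back-of-the-envelope sketch; the domain extent, number of vertical levels, and the time-step rule of thumb are ballpark assumptions for illustration, not the actual configuration of these runs:

```python
# Back-of-the-envelope size of a convection-permitting (explicit) run.
# Domain extent, level count, and time-step rule are illustrative assumptions.

domain_x_km, domain_y_km = 5000, 3000   # rough CONUS-scale domain (assumed)
levels = 50                             # assumed number of vertical levels
dx_km = 4                               # convection-permitting grid spacing

nx = domain_x_km // dx_km
ny = domain_y_km // dx_km
points = nx * ny * levels
print(f"{nx} x {ny} columns x {levels} levels = {points:,} grid points")

# Common rule of thumb: dt (seconds) ~ 6 * dx (km), so a smaller dx also
# forces a smaller time step and more steps to cover a 36-hour forecast.
dt_s = 6 * dx_km
steps = 36 * 3600 // dt_s
print(f"dt ~ {dt_s} s  ->  ~{steps:,} time steps for a 36-hour forecast")
```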

I may be wrong, but I believe the 4 km WRF is the first model to be run over such a large domain invoking "explicit" physics for convection. You can really see how convective systems evolve in time and space using the explicit equations that govern their structure and evolution. Of course, since it is a *prediction*, you run into the same errors as with any model, and since the WRF is a limited-area model, it must be supplied with boundary conditions (currently from the Eta).

An archive of 4 km WRF predictions is available where you can see the model's performance for yourself out to 36 hours.

From what I have seen so far, there is certainly room for improvement; however, there are several cases it certainly nailed. It did quite well with the South Dakota derecho event on 6/7. It showed supercell signals at H+24 in west Texas northeast of Lubbock on 6/12 (Kent Co. day). It seemed to miss the boat on the 6/9 western Kansas tornado outbreak, placing the MCS centered in Nebraska with only a slight hint of convection extending into far northern KS.

In a nutshell, it is of great added value when dealing with the horrendously difficult prediction of convective initiation/mode.

Mike U
 
Just for clarity, the run described in the article was at 2 km horizontal resolution for the entire US. To my knowledge, this was the highest resolution ever for an operational model on the scale of the entire CONUS. I have not personally looked in detail at the performance comparisons between these runs and the 4 km runs being performed at NCAR during the spring and summer described above, but have heard through the grapevine that the doubling of resolution didn't buy much in terms of better performance with the model configuration used for the experiment. This is encouraging given the marked increase in computation needed to perform 2 km vs. 4 km runs (~8 times more computations).
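The ~8 times figure is just scaling arithmetic, sketched below (ignoring I/O and any differences in physics cost):

```python
# Why halving the grid spacing costs roughly 8x more computation:
# twice as many points in x, twice as many in y, and (via the CFL
# constraint) roughly twice as many time steps.

dx_coarse, dx_fine = 4.0, 2.0        # km
ratio = dx_coarse / dx_fine          # = 2

cost_factor = ratio * ratio * ratio  # x-points * y-points * time steps
print(f"~{cost_factor:.0f}x more computations for "
      f"{dx_fine:.0f} km vs {dx_coarse:.0f} km runs")
```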

Glen
 
Dr. Morris Weisman and his WRF group at NCAR have done a terrific job of creating a model that actually captures the character of the convection quite well, as Evan mentioned. Careful attention to the ambient shear and instability is to thank for this breakthrough. Jim Wilson, a nowcasting pioneer at NCAR, has been analyzing the performance of the 4-km WRF, RUC and MM5 in detail to decide which will be best to incorporate into the NCWF-6 nowcasting software package. He has found that the 4-km WRF handles convective mode exceptionally well and that its timing is the best as well. However, it still has problems with placement of convective features, and its false alarm and hit rates are not as good as one would like to see. Once the WRF is run more often than once a day, it shows definite potential to be the most accurate NWP model available to forecasters and researchers.

Links to Summaries of the 2003 BAMEX WRF Project
http://box.mmm.ucar.edu/individual/weisman/

Brief Introduction to the NCWF-6
http://www.rap.ucar.edu/research/thunderstorm.html

Note: The NCWF-6 is a 6-hour forecast of the initiation, decay and growth of convection. The introduction of NWP is necessary since it fares better than extrapolation beyond the 4-hour timeframe.
 