Deep Thunder

It'll be a long time -- we've probably reached the point of diminishing returns on modeling until we can get more data in.
 
Before sinking a lot of money into development, I think a lot of potential investors would like to see some verification studies, not just press releases and saccharine media articles.
 
Researchers are already running models with sub-kilometer resolution... but only over very small regions. There simply isn't enough computing power to do this over any sizeable region (i.e., anything bigger than an average state), and there won't be for a long time. I see what they're doing... they're basically running several nested grids to get the grid spacing down, but again, that limits the area over which they can run the model. The skill of the model is also limited by the model physics and the boundary condition data. It seems, however, that what these guys are really after is a detailed forecast rather than an accurate one, and what they're doing is fine for that.
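To put rough numbers on that size limitation, here's a back-of-the-envelope sketch (the domain sizes, level count, and scaling comments are my own illustrative assumptions, not figures from Deep Thunder or any specific model):

    # Rough comparison of grid sizes: a 1 km nest over a state-sized domain
    # versus 1 km over a continental-scale domain. All numbers are
    # illustrative assumptions.

    def grid_points(width_km, height_km, dx_km, levels=50):
        """Horizontal grid points times vertical levels for a rectangular domain."""
        return int(width_km / dx_km) * int(height_km / dx_km) * levels

    nest = grid_points(500, 500, 1.0)      # state-sized inner nest at 1 km
    conus = grid_points(4500, 2800, 1.0)   # continental-scale domain at 1 km

    # Halving dx also roughly halves the allowable time step (CFL constraint),
    # so total cost grows even faster than the point count alone suggests.
    print(f"nest:  {nest:,} points")
    print(f"CONUS: {conus:,} points (~{conus // nest}x the nest)")

Even before time-stepping is considered, covering a continental domain at nest resolution takes on the order of 50 times the grid points, which is roughly why the nests stay small.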
 
A huge problem (dovetailing off what rdale said) is the lack of data. The surface network in the US is pretty good, and some mesonet stations are being QC'd and tied into the data assimilation, but there are massive areas over the ocean where we don't have any measurements at all. Drifting buoys and ship data help a bit. The upper air is also poorly sampled. We have no measurements of temperature, moisture, or wind anywhere in the troposphere over Iowa, for example, except along its far western and far eastern borders. That's a staggeringly huge chunk of air mass. Think about how many cubic kilometers we're talking about.
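For a rough sense of scale (Iowa's area and the troposphere depth here are approximate values I'm supplying, not numbers from the post):

    # Approximate volume of troposphere over Iowa with no in-situ soundings.
    iowa_area_km2 = 145_000       # approximate area of Iowa
    troposphere_depth_km = 12     # nominal mid-latitude troposphere depth

    volume_km3 = iowa_area_km2 * troposphere_depth_km
    print(f"~{volume_km3:,} cubic kilometers of essentially unmeasured air")
    # ~1,740,000 km^3, all represented only by the model's first guess.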

We do have a reliable guess of what's there, built by blending in previous model runs, but that sort of guesswork has its limitations. No matter how much power or physics we dump into these models, that's the shortfall right there -- the model is basically running on its own idealized picture of the state of the atmosphere, formed from a lot of guesswork. We laugh when Picard says "computer, enhance"; in the same vein, missing measurements can't just be conjured up or estimated from a coarse sample with sheer computer horsepower. The newer crop of models is great and the results are often impressive, but the limitations are real.
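For anyone unfamiliar with that "blending", here is a minimal one-point sketch of the idea: the previous run supplies a background (first-guess) value, and any observation nudges it in proportion to how much we trust each. The error variances below are made-up illustrative numbers, not any operational scheme's values.

    # Single-point analysis: blend a background (first-guess) value from the
    # previous model run with one observation, weighting by assumed error
    # variances. Numbers are illustrative only.

    background = 285.0   # K, temperature carried over from the previous run
    obs        = 287.5   # K, a nearby observation
    var_b      = 1.5     # assumed background error variance (K^2)
    var_o      = 0.5     # assumed observation error variance (K^2)

    # Weight given to the observation-minus-background difference.
    gain = var_b / (var_b + var_o)
    analysis = background + gain * (obs - background)
    print(f"analysis = {analysis:.2f} K (gain = {gain:.2f})")

    # Where there are no observations at all, the analysis simply keeps the
    # background value -- which is exactly the guesswork described above.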

Tim
 
What is the weakest link?

I'm a computer scientist / mathematician, but I have practically zero experience in any kind of WX modeling.

I thank you for your insights, but of course, I now have more questions.

I have often wondered how accurate a retro-cast might be if we tried it on small-scale, intense setups.

I remember (though I can't recall the exact date/time) a high risk day several years ago where the high risk was limited to southwest OK, and the entire risk area was limited to the eastern TX panhandle, western OK, and some of N TX.

The entire high risk area was covered by the OK Mesonet, so I thought this might have been a good day to use as a model test-bed. It is still a really big chunk of real estate.

The first thing I thought of when I saw the Deep Thunder article was how it might perform on a data set like that.

I am interested in finding the areas where there is the greatest need and/or possibility for improvement.

What do you think is the single weakest link in the current (or future) modeling scenarios?

What parts of the data sets are the worst? (Tim Marshall once said "upper air" -- still true?) How bad is it?

As far as hardware/software goes, are we really confident that the software is as good as it can be?

For short-term (Day 1) forecasting, how much of a problem is machine epsilon / the butterfly effect? I imagine it is probably dwarfed by the data problem.

When we "idealize" a data set, do we fill in the gaps with smooth transitions? Or do we do regression and fluctuate around the "means" with a sort of random, "grainy" character?

Any help would be appreciated. Any good articles / books for someone who wants to get started in meso/micro-scale WX modeling?

Thanks again,
Truman
 
In my opinion, the spatial resolution of the available data will be the biggest factor hindering significant improvements in small-scale NWP. The highest-resolution data we have to put into an NWP model comes from a network like the Oklahoma Mesonet, where stations are probably 20-30 km apart on average, yet we're trying to run models at kilometer-scale resolution. There's at least an order of magnitude difference there. Non-surface-based data are even more sparse: all we have is the NWS raob network, some number of aircraft obs (of questionable quality), and a small number of independent measurements or supplemental sounding sites.

To answer your question about idealizing data sets: both types of assimilation are being researched. Current operational models generally use fixed-covariance data assimilation schemes, which spread the influence of observations smoothly and evenly between data points. Assimilation schemes using flow-dependent covariances exist and can be used, but they are more computationally expensive, so you aren't likely to see them in widespread use anytime soon.
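As an illustration of what a fixed-covariance scheme does with a single observation (the Gaussian length scale and error variances below are made-up numbers for the sketch, not any operational configuration):

    import numpy as np

    # Toy 1-D analysis: spread one observation onto a grid using a fixed,
    # distance-only (Gaussian) background-error correlation. Length scale and
    # variances are illustrative assumptions.

    grid_km    = np.arange(0.0, 200.0, 10.0)     # 1-D grid, 10 km spacing
    background = np.full_like(grid_km, 285.0)    # uniform first guess (K)

    obs_loc, obs_val = 100.0, 288.0              # single obs at x = 100 km
    L, var_b, var_o  = 50.0, 1.0, 0.5            # length scale (km), variances

    # Fixed covariance: the correlation depends only on distance to the obs,
    # not on the current flow.
    corr = np.exp(-0.5 * ((grid_km - obs_loc) / L) ** 2)
    gain = var_b * corr / (var_b + var_o)
    analysis = background + gain * (obs_val - 285.0)   # 285.0 = background at obs

    # The resulting increment is a smooth, symmetric bump centered on the obs;
    # a flow-dependent scheme would instead stretch it along fronts, jets, etc.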

This is not to say that there is no hope for numerical weather prediction at small scales. Running models at the kilometer scale offers an immense amount of detail and realism that can't be obtained from lower-resolution models, and that realism is very helpful for forecasting general characteristics of the weather rather than specifics.

For example, a 2 km model may initiate storms along a dryline at 2115Z, but the actual storms may develop at 2145Z and with position errors of several tens of kilometers. Compared to the size and length scales of the storms, those errors are terrible! But compared to the applicable scales for a NWS forecaster, those are acceptable since the forecaster was given information on the 1) storm mode, 2) approximate initiation time, 3) approximate number of cells, 4) approximate orientation of lines of cells, and 5) evolution of the cells (either persistent discrete storms or building upscale into an MCS), which is all very helpful despite the lack of true accuracy.

We're really nudging up against the asymptotic limit of predictability once we're down to the kilometer scale, so it would be unrealistic to expect smaller errors than those described in the example above. We need to improve the data used to feed the model as well as the model physics (in particular, at kilometer scales: microphysics, PBL, land surface, and radiation schemes).
 
compared to the applicable scales for a NWS forecaster, those are acceptable since the forecaster was given information on the 1) storm mode, 2) approximate initiation time, 3) approximate number of cells, 4) approximate orientation of lines of cells, and 5) evolution of the cells

That's one awesome thing about operational meteorology... this is one field where humans can't be replaced, and can ideally work side by side with technology. That is, unless mankind achieves the technological singularity, develops omniscient knowledge of the atmosphere from some future remote sensing technology, or clueless executives/directors forge ahead with their own reasons for replacing forecasters with numerical models.

Tim
 