Forecasting and Technology

The other thread regarding the SPC forecasts got me thinking about how technology has evolved just in this aspect of meteorology. Instead of hijacking the SPC thread, I decided to start a separate thread.

Outside of taking basic meteorology courses back in the 1980s, I'm a novice when it comes to weather forecasting.

Is it possible that there is too much data available and that many meteorologists depend too much on model output?

I know that it takes a lot of hard work and dedication to become a meteorologist/forecaster. As technology has evolved, there is now so much more data available at one's fingertips. When I go into the various data sites and look at forecast model output, I'm amazed at how far things have come. Undoubtedly, this technology will only continue to improve forecasting accuracy.

I went back through some old videos I have of Harold Taft and Scott Chesner, who were meteorologists in the Dallas/Ft. Worth area. I would start recording their forecasts several days before a forecasted severe weather event, the day of, and the day after. I know the data that they had access to in the 1980s is nowhere near what is available now. Both of these meteorologists were quite accurate with their forecasts during the years they were in this area. There were many occasions when they would indicate that the computer models were predicting one thing, yet they went with a different forecast according to their knowledge/experience.

I understand that many years of experience will usually make a forecaster better. Harold Taft was an exceptional meteorologist, but how was he able to be as accurate, if not more so, than many forecasters are today? Tom Skilling is another meteorologist that comes to mind. Both of these guys were very confident most of the time when presenting their forecasts.

I only watch TV meteorologists on occasion now, especially with the availability of data from the NWS/SPC.

I really like reading through the area forecast discussion from the NWS and will usually read it every time a new discussion is issued.

What I have noticed is that the forecast discussion will reveal a lot of doubt/uncertainty from the forecaster, usually because the various models are not in agreement. I have seen this many times over the last 10 or so years. The forecaster will then only use the model that has been the most accurate in a given situation. How many times have we seen all of the models in agreement and the forecaster very confident, yet the forecast ends up being wrong?

Trying to forecast the weather is so challenging due to the multitude of variables present at each level of the atmosphere. In no way am I trying to be critical of any person who forecasts the weather. I know that many of you already possess knowledge that I will only scratch the surface of in my lifetime. My hat goes off to each and every one of you in the field of forecasting. I have learned more than I thought possible over the last 10 years or so just by having access to Stormtrack.

I know that there is no guarantee given for any forecast. I would like to get some insight or opinion from anyone, particularly the resident meteorologists regarding how they come up with a forecast. I'm sure that some of the forecasters from back in the day had doubts in their forecasts. We just did not have access to their reasoning for a specific forecast.

What method do you use to make a forecast, and how confident are you that it will be accurate? Do you strictly use the models, or do you go by your experience/gut instinct? I suspect that most of the replies will point to a combination of both. Do any of you feel that you have data overload when forecasting? Has there been a situation where all of the models agreed on a forecast, yet you went against the grain?

I apologize for the long post and being somewhat vague with my questions and thoughts. I look forward to reading any of the responses that this will generate.

Regards, David
 
David Conaway said:
Is it possible that there is too much data available and that many meteorologists depend too much on model output?

There will never be too much data for the science of meteorology, but human forecasters can certainly be overwhelmed by the huge amount of data that is available nowadays.

The days of the human forecaster are on the way out, if not already finished. The technology available to NWP models is now so good that NWP models can far exceed humans in forecasting accuracy from a numerical standpoint (i.e., improved equitable threat scores and biases). Humans are mainly only good for interpreting model forecasts and making very complex decisions that require emotional consideration. Humans also currently remain superior at compositing/combining different ingredients to come up with a large-scale, general forecast. For example, a forecaster sees a synoptic-scale trough moving into the central US with ample moisture coming off the Gulf and can predict that a severe weather episode will occur in a general area of the central US. An NWP model makes no such conceptual judgment about severe weather, but it will be very specific about the locations to be impacted if its output implies severe weather.
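For what it's worth, the scores I mentioned come from a simple yes/no contingency table. Here is a minimal sketch (my own illustration, not any agency's verification code) of the equitable threat score and frequency bias:

```python
# Illustrative computation of equitable threat score (ETS) and frequency
# bias from a 2x2 contingency table of yes/no forecasts. The function and
# the example numbers are hypothetical, not from an operational package.

def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Return (equitable threat score, frequency bias)."""
    total = hits + misses + false_alarms + correct_negatives
    # Hits expected by chance, given the observed and forecast event frequencies
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)  # >1 means over-forecasting
    return ets, bias

# Hypothetical season of daily severe/no-severe forecasts at one location
print(verification_scores(hits=30, misses=10, false_alarms=20, correct_negatives=305))
```

A higher ETS means more hits relative to misses and false alarms after removing the hits expected by pure chance, which is the "numerical standpoint" I was referring to.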

David Conaway said:
Both of these meteorologists were quite accurate with their forecasts during the years they were in this area. There were many occasions when they would indicate that the computer models were predicting one thing, yet they went with a different forecast according to their knowledge/experience.

I understand that many years of experience will usually make a forecaster better. Harold Taft was an exceptional meteorologist, but how was he able to be as accurate, if not more so, than many forecasters are today? Tom Skilling is another meteorologist that comes to mind. Both of these guys were very confident most of the time when presenting their forecasts.

Back in the day, the available model data was based on fixed configurations that did not change nearly as rapidly as they do now, so forecasters had more time to discover and understand model biases and use that information in their forecasts. Models are updated more frequently today, and there are so many models available that a given forecaster probably doesn't use a given version of a model long enough to develop a thorough understanding of how it behaves before it changes. Many forecasters also move from office to office and don't have time to understand the oddities of the mesoscale/microscale weather in their forecast area.
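As a crude illustration of what "knowing a model's bias" can amount to in practice, here is a minimal sketch (my own, not any office's actual procedure) of applying a running mean-error correction to a model forecast:

```python
# Rough sketch of using a model's recent mean error as a correction to
# today's forecast. Variable names and numbers are hypothetical, purely
# to illustrate the idea of "knowing the model's bias."

def bias_corrected(model_value, recent_forecasts, recent_observations):
    """Subtract the model's recent average error from today's forecast value."""
    errors = [f - o for f, o in zip(recent_forecasts, recent_observations)]
    mean_error = sum(errors) / len(errors)
    return model_value - mean_error

# Example: a model that has been running about 2 F too warm on recent highs
past_forecasts    = [88, 91, 85, 90, 87]
past_observations = [86, 89, 83, 88, 86]
print(bias_corrected(92, past_forecasts, past_observations))  # about 90.2 F
```

Experienced forecasters do something like this in their heads; the catch is that it only works when the model configuration stays the same long enough for the error statistics to mean anything.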

David Conaway said:
What I have noticed is that the forecast discussion will reveal a lot of doubt/uncertainty from the forecaster, usually because the various models are not in agreement. I have seen this many times over the last 10 or so years. The forecaster will then only use the model that has been the most accurate in a given situation. How many times have we seen all of the models in agreement and the forecaster very confident, yet the forecast ends up being wrong?

Some of the models in use today are so complicated that I bet many forecasters don't fully understand what each model is doing. This is especially the case now that storm-scale models are coming out that don't use convective parameterization (convective parameterization schemes are generally well understood, but most cloud microphysics schemes are not, simply because of their complexity). Also, forecasters can now see features in those models that didn't used to be resolved, such as surface boundaries (not just fronts, drylines, and outflow boundaries, but also horizontal convective rolls and other seemingly insignificant convergence lines). So some of the doubt and uncertainty may be due to a lack of understanding of why the model is saying what it's saying. Your comment also serves as a segue to probabilistic forecasting, which is becoming more useful than ever before. Since so many models are available, it's frequently better to consider the consensus among the models than to pick a specific member and put all your "money" on it. So "uncertainty" is really a technical term referring to the degree of spread among different model solutions; it is the result of the atmosphere being non-linear and essentially chaotic.
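To make the "consensus" idea concrete, here is a minimal sketch (my own illustration, not NWS practice or code) of turning a set of ensemble members into a probabilistic statement by counting how many members exceed a threshold:

```python
# Minimal illustration of consensus/probabilistic forecasting from an
# ensemble: the forecast probability of an event is the fraction of
# members that predict it. The member values below are made up.

def ensemble_probability(member_values, threshold):
    """Fraction of ensemble members at or above the given threshold."""
    exceed = sum(1 for v in member_values if v >= threshold)
    return exceed / len(member_values)

# Hypothetical 10-member precipitation forecast (inches) for one grid point
members = [0.10, 0.45, 0.80, 1.20, 0.05, 0.60, 1.50, 0.30, 0.95, 0.70]
print(ensemble_probability(members, 1.00))   # P(>= 1.00") = 0.2
print(ensemble_probability(members, 0.50))   # P(>= 0.50") = 0.6
```

The probability is only as good as the ensemble itself, of course, but it expresses the model-to-model disagreement directly instead of forcing a single deterministic answer.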
 
One issue for the operational forecaster is the increasing amount of detailed information that becomes available as the time of a potential 'event' approaches.

I'll give you an example from the UK: a few winters ago, medium-range guidance, such as the ECMWF model, hinted at an active shortwave trough crossing central and southern parts of the UK in six days' time, bringing an area of reasonable (for southern England!) snowfall. Fast-forward to 48 hours before the event, when short-term mesoscale guidance started to become available: more (rather than less) uncertainty crept into the forecast - some models showed 1-2 cm of snow; others showed 15-20 cm in some areas. Thus, for the operational forecaster trying to give guidance to customers, the range of possibilities had seemingly increased, despite being closer to the time and having more data to peruse.

In the event, taking a step back was more appropriate: yes, a shortwave trough was still expected, and it was still expected to cross a similar area to that originally depicted, but the very high amounts of snow shown by the mesoscale models were unlikely to be realised (such models are prone to over-estimating precipitation over a wide area, even if that amount might fall locally). Thus forecasters, whilst looking through all the available data, should try to focus in on the important parts - and obviously, discarding any guidance needs to be thoroughly justified.
 
Thank you, Jeff and Paul, for responding. This gives me a better understanding of what the operational meteorologist has to deal with.

Regards, David
 
*caveat: this is my lone opinion and in no way represents the position of my employer*

David, I think you asked an important question, even if it is one that has frequently been asked over the last few decades. I have 11 years with the NWS and another 5-6 in other roles, so I'll give you my three pennies.

I think Jeff gave an excellent response, and I agree with most of it. I think he is correct that human forecasts will eventually be replaced by the models, but I think he overstates the current technology. The advances in physics/microphysics, and even the advances in modeling, have been slow IMO. My perception (which doesn't necessarily agree with the objective increases in accuracy) is that I haven't seen a HUGE increase in accuracy over the past 15+ years, although the models are definitely improving.

I do think we as operational forecasters tend to over-rely on model output. Part of the reason, as Jeff hinted at, is the disconnect operational forecasters have with the models: how they handle parameterization, or even more basic design elements like how they do their computations (I'd venture a guess that half of NWS forecasters couldn't even tell you which models are spectral) or how they handle terrain. Storm-scale models might avoid issues with parameterization, but they still depend on an accurate initialization - garbage in, garbage out. I think many forecasters have fallen prey to the (false) idea that greater resolution equals greater accuracy. It does not. In fact, I've seen studies of specific models that show just the opposite - in some cases greater resolution leads to more inaccuracy. Yet forecasters, both pros and amateurs, latch on to their favorite pet model because it shows super-believable cells and a virtual-reality depiction of storms. They defend it because one time the model was spot-on (ignoring the other ten times it was completely out to lunch). I think it is unfortunate that the number of high-res models keeps growing, while you will be hard pressed to find any public studies that are really putting their results to the test (verification).
As Jeff mentioned, ensembles are clearly a good way to go, but they're not a magic bullet. Ensemble solutions can be as fickle as the individual members. You can then go to ensembles of ensembles...
Anecdotally, I can tell you there have been several occasions over the past couple of years when I recall doing an in-depth analysis of the various model solutions, pondering their biases, using my knowledge of meteorology to focus on the most probable solution, etc. - then, just as you mention in your post, I end up totally wrong. In retrospect, I realize that if I had had just the obs, some real crude plots of the overall pattern, and my knowledge of 1) pattern recognition and 2) conceptual models, I would have come up with a much better forecast. And those two things have not been replaced by the models, not yet anyway. So I don't think we are as gone as Jeff might suggest.
 
FYI, I am not an operational meteorologist, as I do not get paid to forecast. I was giving my opinion as a researcher who still pays attention to the forecast process.

Stan said it better than I did: humans are better than computers at pattern recognition and at fitting the real atmosphere to conceptual models. That's the one big advantage that will never go away.
 
I think, to some degree, there are regional differences in what works best. For example, in the interior of the USA, models can struggle with convection and with how it then affects the situation in succeeding timeframes. Here in the UK, when we have a powerful westerly flow, models do a very good job of handling weather systems, and at times you'd be hard pushed to better them.

However, switch to a more unusual pattern (e.g. this coming week will see a large blocking high developing to our east) and models start to struggle with individual rain 'events', especially in the warm season, when convection (or not!) develops over the near continent.

As a forecaster, it's recognising when the models have a good handle on a situation and when they may not, which comes with experience.
 