• A friendly and periodic reminder of the rules we use for fostering high SNR and quality conversation and interaction at Stormtrack: Forum rules

    P.S. - Nothing specific happened to prompt this message! No one is in trouble, there are no flame wars in effect, nor any inappropriate conversation ongoing. This is being posted sitewide as a casual refresher.

State of the Chase Season 2024

My 2¢...

As we move deeper into tornado season 2024, I'm sure everyone wants to make the best forecast possible. So, I'm not certain why so many refer to the GFS. Dr Ryan Maue posted these today and they show just how awful the GFS is -- at times at 1980's levels (really).

In order, the verification stats are better for the ECMWF, UKMET and Canadian than the GFS. I've gotten to the point where I don't even bother to look at the GFS.

I don't have a feel for the AI-ECMWF but I would probably approach it with caution until we have a couple of months under our belts to get a feel for it. In theory, AI in meteorology has a lot of promise but I think we are probably still in the "theory" time period.

Finally, I wish to quote Dr Tom Stewart, then of SUNY Albany, who did the first-ever studies of human factors in meteorology. (Similar studies in aviation have revolutionized safety and cockpit discipline.) He found, "meteorologists love more models because they make them feel more confident, but more models do not make them more accurate."
 

Attachment: 0.png
FWIW I check the GFS while the other publicly-available models are out of range. Once a pattern stops flickering in and out of existence, I start paying more attention. For example, on 3/7 an alert popped up on my phone, "Possible Panhandle Chase". I don't recall when I logged that entry, since by 3/7 the event had shifted well east of the Panhandle, but there must have been some run-to-run consistency when I did. Other than that...no.
 
Moisture issues are obviously not unheard of this time of year, and I've already been surprised multiple times in just the last two years by how low a dewpoint significant tornadoes can occur with (low 50s for Winterset 2022, mid-to-upper 40s at best for Evansville, WI last month). Just one of those flies in the ointment that will go a long way toward determining the ceiling of the setup.
I've always wanted to see a climatology of HP vs. LP zones across the Plains. I can operationalize my targets to that effect, but I don't know if there has been any long-term analysis on it.
 
In order, the verification stats are better for the ECMWF, UKMET and Canadian than the GFS. I've gotten to the point where I don't even bother to look at the GFS.
Help me understand this better: at what score does "skill" fall below a threshold of accuracy that creates "distrust"? I've read in some forums that below 0.80 was considered the beginning of distrust due to accuracy issues, but I'm not entirely sure about that. What I'm wondering, in general terms, is whether there is some kind of nonlinear shift in error when a model falls below 0.80 or 0.75. So, when/if accuracy is being measured mostly by 500mb anomalies, and a model's skill on heights is, say, 0.75, does that lead to a surface pressure being off by X hPa or mb? I think I understand the concept; I just don't see where I can visualize it on a chart.

To add to this, on the topic of distrust: when we are harping on inaccuracy, what lens are we seeing it through?
- Scientific distrust (very high standard)
- Operational distrust (high standard)
- General-use distrust (medium to low standard)

Probably some real lengthy answers could emerge here, and I'm happy to take that to another thread if needed.
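For context on the numbers being discussed: the "skill" scores usually quoted for the global models are anomaly correlation coefficients (ACC) of 500 hPa heights against the verifying analysis, and an ACC of roughly 0.6 is commonly cited as the lower limit of useful skill. Below is a minimal numpy sketch of how that score is computed; the operational calculation also applies cos-latitude area weighting (omitted here), and the arrays are made-up stand-ins rather than real model output.

    import numpy as np

    def anomaly_correlation(forecast, analysis, climatology):
        # Anomaly correlation coefficient (ACC): pattern correlation of the
        # forecast and verifying-analysis anomalies relative to climatology.
        f = forecast - climatology
        a = analysis - climatology
        return np.sum(f * a) / np.sqrt(np.sum(f * f) * np.sum(a * a))

    rng = np.random.default_rng(0)
    clim = 5500.0 + rng.normal(0.0, 50.0, (73, 144))   # made-up 500 hPa height climatology (m)
    truth = clim + rng.normal(0.0, 80.0, clim.shape)   # verifying analysis
    fcst = truth + rng.normal(0.0, 60.0, clim.shape)   # imperfect forecast of that analysis
    print(round(anomaly_correlation(fcst, truth, clim), 2))   # ~0.8 for these error levels

Because the score is a correlation, it says nothing directly about how many hPa a surface low will be off by; it only measures how well the forecast anomaly pattern matches the analyzed one.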
 
Jason,

Because I am in the middle of some other projects, a couple of quick thoughts:
  • 500mb is an "easier" forecast than surface features, so a poor 500mb forecast will likely lead to a very poor surface forecast -- hence my mentioning it in the context of storm chasing. When the 500mb skill is in the 0.70s, you can pretty well count on a lousy surface forecast.
  • In "ancient times" (intern dinosaurs brought the model output to us) of numerical weather forecasting, we had two models: the barotropic (always too far to the left) and the baroclinic (usually too far to the right). They went out just 36 hours. If you averaged the two, you usually had a pretty good 500mb forecast. But because they were new in the early 1970s, we had to figure this out. I'm making this point because, as with CAMs about ten years ago, in 2024 we are entering a new period of complexity in trying to figure out AI-powered tools, especially since -- as last week demonstrates -- we haven't completely figured out the CAMs.
  • Speaking of CAMs, when Pivotal Weather started, there were three convection-allowing models. Now, there are ten. I am highly skeptical that more CAMs will lead to better forecasts.
  • Added this thought: I, of course, look at the SPC convective outlooks and NADOCAST. I find the latter to be surprisingly good.
Most of the time, I use the ECMWF (only) as a synoptic-scale model. If I need a second opinion for some reason, I use the Canadian or UKMET. At that point, I make my forecast. Then, I consult three CAMs (HRRR, FV-3 and NAM 3-km). If they agree, good to go. If not, I review. Really, that's all there is to it.
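A toy sketch of that "check the three CAMs for agreement" step, just to make the workflow concrete; the quantity being compared (convective-initiation hour), the values and the tolerance are all made up for illustration.

    # Hypothetical convective-initiation times (UTC hour) pulled from three CAMs.
    cam_ci_hour = {"HRRR": 21.0, "FV-3": 22.0, "NAM 3-km": 26.0}
    tolerance = 2.0   # spread (hours) acceptable before taking another look

    spread = max(cam_ci_hour.values()) - min(cam_ci_hour.values())
    if spread <= tolerance:
        print(f"CAMs agree within {tolerance} h -- good to go")
    else:
        print(f"CAMs disagree (spread {spread} h) -- review before finalizing")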

I was corresponding yesterday with a university PhD in meteorology who teaches synoptics and we both agree that too many meteorologists these days are making the art and science of forecasting and storm warnings too complex.
 

Attachment: Screenshot 2024-03-19 at 10.49.14 AM.png
Completely agree with your observations. I was looking recently at the WoFS, and it's certainly interesting that you brought up such a quick increase in CAMs over even the past 5 years. From my vantage point of 25 years' experience in operational forecasting, and having verified plenty of models, this question of confidence and skill made me think about the ways in which we narrate confidence and prediction across time scales, moving from CAMs, to deterministic guidance, to short- and long-range ensembles, to SubX (sub-seasonal), to straight climo.

I also agree that, while there is so much we can emulate in the atmosphere through models and simulation, generic pattern recognition is so valuable, and it's kind of lost on those who are looking for hyper-exact details.

Models: garbage in, garbage out.
 
Generic pattern recognition is so valuable

Amen. It is vital not only to forecasting but also to storm warnings. Yet no one teaches it anymore.

One other thought: when NMC went to the PE (primitive equation) model, it was a big step forward (it was quite good) and it went out to 72 hours. And we made skillful forecasts from it (check the stats). In the entire USA, there were three models -- period.

There was/is no need for literally dozens of flavors of models to make three-day forecasts. In some ways, we've become model-addicts. We can't stop ourselves.

Absolutely true story: At WeatherData, I had a brand new meteorologist from Michigan. The LFM model, which was quite good, showed a stationary front near the KS-OK border and a strong LLJ overnight. There was a 70% RH surface-500mb blob right over Wichita. I forecast "thunderstorms, some severe" overnight. Took maybe 15 minutes.

Because I was training him, I let him do his own thing (it was his first week). He was buried in the models for ~2 hours. His forecast? "Clear." So, I sat him down and explained why I didn't think it was the correct forecast. He was upset and told me that my forecast was probably wrong because I hadn't looked at all of the models, which was true. He was absolutely adamant.

What happened? Tornado warning for Wichita ~4:30am. Then softball-sized hail with what at the time was the 11th most costly hailstorm in U.S. history.

We really can make this way too complicated. If something works, stay with it. If there is a shiny new thing, give it a test drive to see if it is truly a step up. But, never be afraid to stay with what works.
 
Absolutely true story: At WeatherData, I had a brand new meteorologist from Michigan... We really can make this way too complicated. If something works, stay with it.
Oh, this is not just you or your story. I believe this is happening everywhere. "It", whatever "it" is, seems to have been trying to replace the person with AI for the past 8-10 years now. There seems to be some belief that we can "code out" all of the main parameters on the back end and remove the human, so we can ultimately reduce the labor force. That seems to be a strategic view, which drives policy and budgets all the way down to curriculum.

I haven't seen what curriculum looks like these days, so I can't exactly speak to it or to how it manifests in people's problem solving, big-picture understanding and methodology for creating a forecast. Instructors teach, and sometimes they are the arbiters of that strategic vision. But I think you are experiencing what I have as well: the old-school methods of teaching patterns are being replaced by "trust the model" -- except we have 8 models to use, with 8 different iterations of outcomes and 8 different biases, so you get stuck. It's literally analysis paralysis, whereas old-school pattern people can define the situation in seconds and work out some of the details later, specifically for convective or winter weather -- which is where, in my view, humans will always be in the loop.

I see the current generation of folks acting in much the same way as you described.
 
Last edited:
Will be armchair chasing the possible event(s) this weekend into next week, as it's too early for me to head out and I have other obligations. I'm also not completely sold on adequate RH return yet. To that point, it WILL be very interesting to observe how the strong W/SW winds interact with the bone-dry dust in the western parts of NM and Texas, including the burn scars. In past drought years, we've seen these early-season events plagued by very low visibility behind (of course) and quite a distance east of the dryline.
dry-city.png
 
There continue to be moisture issues ahead of this weekend's system. Despite a nice trough ejection, the Gulf low and a second reinforcing cold front look to really limit the window for sustained return flow. Multiple ensemble solutions (EPS pictured) have median dewpoints only in the upper 40s to low 50s F. Even then, the quality of those 50s is probably poor. Things get a bit more interesting into the new week across the Southeast, but chaseability looks low.
1711034112241.png

Looking ahead, the overall pattern remains amplified. We may be able to squeeze an event or two out, but this early in the return-flow year the shorter-wavelength systems tend to limit moisture. I should have a better idea of the chasing situation heading into early April in the coming days.

1711034871555.png
 
Looking at the Feb vs. Mar forecasts, it looks like the probability trend for AMJ (Neutral) and MJJ (Neutral and La Nina) bumped up slightly, while the JJA La Nina probability increased 5 or 6%, so maybe a slightly quicker transition period, but with increased overall confidence in La Nina by July/Aug.

1711457968236.png
1711458175715.png
 
Last edited:
Looking at FEB vs. MAR Forecasts, looks like the Prob trend of AMJ(Neutral) MJJ(Neutral and La Nina) bumped up slightly, while JJA La Nina Prob dropped 5 or 6%, so maybe a slightly slower transition period? but an increased overall prob confidence in La Nina by July/Aug.

View attachment 24717
View attachment 24718
Thanks for posting this, Jason, although forgive me, but it looks to me that all 3-month periods appear to have an increased chance of La Nina in the March forecasts vs. February.
 
Really quick, I wanted to share a useful tool I've been using with some of the sub-seasonal stuff. This jet phase diagram from U Albany gives us some hints about the overall state of the North Pacific Jet, which can be quite influential for the upper-air pattern over the western and central CONUS. (Link to paper about the NPJ Diagram)
1711464022849.png

This is a GEFS-based phase diagram built on the two leading EOFs (empirical orthogonal functions), which help condense the data and allow it to be sorted into similarly behaving regimes. Think of this like the phase diagrams used with the MJO. Like the work Dr. Gensini from NIU has done on AAM and the MJO as they relate to severe weather, this tool is another way to visualize some of those processes. Jet extensions (unusually strong westerly components of the Pacific jet) help to build large-scale positive angular-momentum anomalies in the Northern Hemisphere. Think of that +AAM state as potential energy in the jet stream. Once it reaches a certain threshold, that configuration becomes unstable and can "break down". Doing so lowers the AAM state and can result in a wavier upper-air pattern over the western and central US. Those patterns have historically been linked to more active stretches of severe weather potential during the spring.
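If it helps to see the mechanics, here is a minimal numpy sketch of the general idea behind such a phase diagram: find the two leading EOFs of 250 hPa zonal-wind anomalies over the North Pacific domain, then project a new forecast anomaly map onto them to get its (PC1, PC2) coordinates. The array names, sizes and random data are placeholders, and the real NPJ product involves more careful preprocessing (cos-latitude weighting, standardization, GEFS handling).

    import numpy as np

    rng = np.random.default_rng(1)
    ntime, nlat, nlon = 1000, 29, 57                        # made-up grid over ~10-80N, 100E-120W
    u250_anom = rng.normal(0.0, 10.0, (ntime, nlat, nlon))  # stand-in reanalysis anomalies

    # Flatten each daily map to a vector and find the two leading EOFs via SVD.
    X = u250_anom.reshape(ntime, nlat * nlon)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    eofs = Vt[:2]                                           # two leading spatial patterns
    pc_std = (X @ eofs.T).std(axis=0)                       # scale so historical PCs have unit variance

    # Place a new forecast anomaly map (e.g., one GEFS member at one lead) on the diagram.
    fcst_anom = rng.normal(0.0, 10.0, (nlat, nlon))         # stand-in GEFS anomaly map
    pc1, pc2 = (fcst_anom.ravel() @ eofs.T) / pc_std
    print(pc1, pc2)                                         # the x/y coordinates on the phase plot

One consequence of this construction: PC1 and PC2 are amplitudes of fixed, whole-domain patterns, so the x/y axes of the diagram aren't tied to any single lat/long box on the map.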

Another important aspect of these plots is the poleward/equatorward displacement. Like the name suggests, this tells you the latitude of the relative anomaly, equatorward being displaced south and vice versa. Research has shown that when the AAM state begins to fall (jet retraction phase) and the jet is displaced south (equatorward shift), medium-range model guidance errors become larger and the predictability of 500 mb heights, MSLP and precipitation forecasts over the CONUS decreases. (Link to paper) If you've been watching the medium- and long-range guidance closely, you may have noticed more flip-flopping than normal. We appear to be entering a period of lower predictability heading into April.
 
@adlyons - I can't get to the paper at the moment for some reason; I am behind a firewall that won't let me reach it. Where is the centroid of the PC1/PC2 diagram geographically?
 
I did a quick scan and found this....

"A traditional EOF analysis (Wilks 2011, chapter 12) is subsequently performed on the 250-hPa zonal wind anomaly data2 within a horizontal
domain bounded in latitude from 10 to 80N and in longitude from 100E to 120W in order to identify the two leading modes of NPJ variability. This horizontal domain is chosen to encompass the North Pacific basin and to match the domain employed by Griffin and Martin (2017)."

Not sure if that helps.
 
Thanks for that, Mark. I thought (mistakenly, perhaps) when I was looking at the diagram that the grid could be attached to a particular lat/long area within that 10-80N, 100E-120W box. For example, if PC1 = -1 and PC2 = -1, then that (-1, -1) would correspond to some lat/long, and I could look on a map and see that grid area. I guess I just wasn't sure if the X and Y axes on that diagram were tied to the map on the right.
 
Gotcha... no worries. I was gonna read the paper in full if I had some free time at work tonight, so I just glanced it over for an obvious answer. I'll try to get back to you ASAP if no one else has posted. Looks like a fascinating find as far as teleconnections go. A lot of it is over my head at the moment, but I enjoy the learning process. Thanks for posting that @adlyons!
 
@adlyons - hey, just FYI, my "conversations" chat thing will not allow me to type back. I just noticed your message from Feb, lol... didn't even know it was there until today, and when I try to respond, it doesn't allow me to send the message. So sorry about that! I didn't wanna post this here, but I will until you see it; then, if it's not too late, I'll erase it.
 
my "conversations" chat thing will not allow me to type back
No worries man!
 
Just a quick update as we are now into April. The rough-summary page has 66 tornado reports for the month of March. The Ohio Valley stands out quite a bit for the event on the 14th-15th. We will need to wait a few months for the official storm data to come in, but so far we are below average on reports.
1712171786967.png
 