
Was SPC actually that far off on 4/26/2016?

In my opinion, this methodology of verifying SPC convective outlooks makes a whole lot of sense. It is objective and easy to calculate and evaluate. Unfortunately, if it were used to verify many past outlooks, you'd probably find a ridiculous amount of underforecasting of the wind and hail probabilities (maybe not so much for tornadoes, though I have a good example of that coming up). Thus, this method seems rather incompatible with convective outlooks as they have historically been issued.

I think the best example of what I mean by underforecasting comes from 27 April 2011. To refresh everyone's memory, here was the tornado probability forecast from the 20Z outlook (issued after the event had already started, btw), along with the subsequent verification:
[Attachment: 20Z tornado probability outlook, 27 April 2011]

[Attachment: tornado report verification plot, 27 April 2011]

They weren't publishing the areas or populations enclosed in outlook categories back then, so I can't calculate the area that "should have" been impacted by a tornado within the 45% area. However, the verification plot says a lot about the approximate areal coverage of tornadoes in northern Alabama that day, and even the size of the dots gives some indication of coverage with the 25-mile buffer around reports. Does it look like 45% of the area in that contour was near a tornado report? It sure as hell doesn't to me. In fact, it looks a lot closer to 100% (sure, you can argue me down to 90% or even maybe 80%).

Look back on a number of previous events. I'll bet you find, in a lot of cases (even when accounting for overlap of the buffers around reports located close to each other), that the forecast probability is underdone. The exception is when the area of a given probability contour is excessively large; in that case the coverage fraction might come close to the forecast probability, but only because there was a significant false-alarm area somewhere else.
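
For concreteness, here is a minimal sketch of the coverage calculation being discussed, assuming the probability contour and the report locations are available as shapely geometries in an equal-area projection (units of meters). The function name, the hard-coded 25-mile radius, and the toy usage at the bottom are my own illustration, not SPC's actual verification code.

```python
# Minimal sketch (not SPC's code): fraction of an outlook probability contour
# that lies within 25 miles of at least one tornado report. Buffers around
# closely spaced reports are unioned first so overlap isn't double-counted.
# Assumes geometries are already in an equal-area projection with units of meters.
from shapely.geometry import Point, Polygon
from shapely.ops import unary_union

MILES_TO_METERS = 1609.34
BUFFER_M = 25 * MILES_TO_METERS  # 25-mile verification radius around each report

def report_coverage_fraction(contour: Polygon, reports: list[Point]) -> float:
    """Fraction of the contour's area within 25 mi of at least one report."""
    if not reports:
        return 0.0
    buffered = unary_union([p.buffer(BUFFER_M) for p in reports])
    return contour.intersection(buffered).area / contour.area

# Toy usage (made-up coordinates, just to show the call):
# contour = Polygon([(0, 0), (300_000, 0), (300_000, 200_000), (0, 200_000)])
# reports = [Point(50_000, 60_000), Point(70_000, 75_000), Point(250_000, 150_000)]
# print(report_coverage_fraction(contour, reports))
```

A well-calibrated 45% contour should, averaged over many events, score somewhere near 0.45 with a calculation like this; the point above is that days like 27 April 2011 land far higher.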

In summary, your method might have justified a 10% tornado contour mathematically, but I'm pretty sure that if you look back on other days with 10% tornado areas, you'll find a lot more coverage of tornado reports within those areas. Again, you can lower the fraction of area covered by report buffers simply by expanding the contour, but that implies purposely overforecasting the threat area, which is gaming the system and really goes against Allan Murphy's philosophy on the goodness of forecasting.

---------------------------------------------

Gonna use this opportunity to stray off topic and get on a soapbox. I don't really agree with 1) risk areas still getting named (they're confusing, and there's already a 1-to-1 mapping between named categories and probability contours) or 2) only certain probability contours being available. I know the reason is politics/bureaucracy, but it seems antiquated and unscientific to me. Such policies constrain forecasters and invite stress and political backlash over whether a particular probability threshold is or isn't included. I know this is already implied, but I wish SPC would make their convective outlook maps look more continuous than they do, and I think forecasters should be allowed to insert whatever damn threshold they want (okay, I'll settle for sticking to every 5%).



Agreed, report coverage is one aspect that influences overall risk, but there is more to it. For example, a forecaster might mean a 50% probability of 100% coverage (and a 50% chance of essentially none); any single event then verifies as either under- or overforecast, and there is no way to "get it right." That makes it impossible to objectively evaluate the forecast for a single event. You can, however, take an ensemble of forecasts and see whether a forecaster has a statistical bias. I would also add that the outlook discussion for Tuesday was important. It was clear to me while reading the text that there was a significant chance the event would not unfold as predicted, and that should be taken into account when evaluating the forecast.
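
To illustrate the "ensemble of forecasts" point, here is a small sketch of how you could check for a systematic bias by comparing the mean forecast probability with the mean observed report coverage across many events. The numbers are toy values purely for illustration, not real outlook statistics.

```python
# Sketch of checking a forecaster for statistical bias across many outlooks.
# Each tuple is (forecast probability of the contour, observed coverage fraction
# inside it, e.g. from a buffer calculation like the one sketched above).
from statistics import mean

events = [
    (0.45, 0.90),
    (0.10, 0.05),
    (0.10, 0.22),
    (0.30, 0.35),
]

forecast_mean = mean(p for p, _ in events)
observed_mean = mean(c for _, c in events)
bias = forecast_mean - observed_mean  # negative => systematic underforecasting

print(f"mean forecast probability:  {forecast_mean:.2f}")
print(f"mean observed coverage:     {observed_mean:.2f}")
print(f"bias (forecast - observed): {bias:+.2f}")
```

A single event can't tell you much either way, but a persistent negative bias over many cases is exactly the kind of underforecasting described earlier in the thread.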
 