Nadocast

Jeff Duda

I don't know who is behind what I presume is a research experiment in forecasting tornado outlooks (as compared to SPC), but a Twitter account called Nadocast showed up at least a few months ago. It has posted 00Z and 14Z areal probabilistic tornado outlooks daily through the summer and has also been open about posting historical forecasts and verification results.

From what I can gather, the makers of Nadocast are creating calibrated probabilistic grids of tornado probability from HREF products, likely using some degree of recent performance/AI to modify the weights of individual members, as well as trying a variety of composite products (e.g., SCP, STP, etc.) to home in on better forecasts.
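To make the sort of calibration I'm picturing concrete, here's a toy sketch. This is purely my guess at the flavor of the approach, with made-up numbers; scikit-learn's isotonic regression stands in for whatever they actually use:

```python
# Speculative sketch of calibrating a composite parameter (e.g., STP) into a
# tornado probability -- NOT Nadocast's actual method, just an illustration.
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical training data: ensemble-mean STP at each grid point/day,
# and whether a tornado report fell near that point (1) or not (0).
rng = np.random.default_rng(0)
stp_mean = rng.gamma(shape=1.5, scale=1.0, size=5000)            # fake STP values
tor_observed = (rng.random(5000) < 0.02 * stp_mean).astype(int)  # fake labels

# Isotonic regression maps raw STP to a calibrated probability while
# preserving the "higher STP -> higher probability" ordering.
calibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrator.fit(stp_mean, tor_observed)

# Apply to a new forecast grid of ensemble-mean STP.
forecast_stp = rng.gamma(shape=1.5, scale=1.0, size=(65, 93))    # fake grid
tor_prob = calibrator.predict(forecast_stp.ravel()).reshape(forecast_stp.shape)
print(tor_prob.max())
```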

Some of their forecasts are pretty impressive compared to operational products. Let's look at a few:

Image comparison format (per case): SPC 06Z Day 1 outlook (left), 00Z Nadocast forecast for the next day (right)

June 26th 2021
[Images: spc_day_1_20210626_t06z.png | nadocast_20210626_t00z.png]
Nadocast better captured the SW-NE orientation of the line of tornado reports across IL and cut down on the false alarm area in IA.

June 25th 2021
[Images: spc_day_1_20210625_t06z.png | nadocast_20210625_t00z.png]
Nadocast accurately put a 5% area in the region where a cluster of tornado reports occurred in IL/IN and cut away at some false alarm area on the central Plains without missing the tornado report east of Amarillo.

June 24th 2021
[Images: spc_day_1_20210624_t06z.png | nadocast_20210624_t00z.png]
Nadocast was slightly better on the placement of the 5% area and cut away at false alarm area in IA/MN/WI (although it added a small false 2% area to the west).

June 20th 2021
[Images: spc_day_1_20210620_t06z.png | nadocast_20210620_t00z.png]
It's a subjective judgment that Nadocast offered an improvement in the Midwest by painting higher probabilities along the axis where tornado reports occurred, but it was definitely an improvement in cutting down probabilities in the southeast US (GA/SC/NC), where no tornado reports occurred.

In the morning update, the discrepancy in the SE US became more prominent, though:

[Images: spc_day_1_20210620_t13z.png | nadocast_20210620_t10z.png]
Whereas the regions in the SPC outlook increased in size, those in the Nadocast forecast decreased, giving Nadocast a lower false alarm ratio (FAR).
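For reference, the POD/FAR bookkeeping behind these eyeball comparisons is just a contingency table over the grid. A simplified sketch follows; real outlook verification uses neighborhoods around reports (SPC probabilities are for a tornado within 25 miles of a point), so treat this as illustration only:

```python
# Simplified sketch of the POD/FAR bookkeeping behind the comparisons above.
import numpy as np

def contingency_stats(prob_grid: np.ndarray, report_mask: np.ndarray, threshold: float):
    """Binary contingency table from a probability grid and a 0/1 report mask."""
    fcst = prob_grid >= threshold
    hits = np.sum(fcst & (report_mask == 1))
    false_alarms = np.sum(fcst & (report_mask == 0))
    misses = np.sum(~fcst & (report_mask == 1))
    pod = hits / (hits + misses) if (hits + misses) else np.nan
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else np.nan
    return pod, far

# Toy example: a 5% contour that covers some reports and some empty area.
prob = np.array([[0.02, 0.05, 0.10], [0.05, 0.05, 0.02], [0.02, 0.02, 0.02]])
reports = np.array([[0, 0, 1], [0, 1, 0], [0, 0, 0]])
print(contingency_stats(prob, reports, threshold=0.05))
```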

Many more examples can be found in their archive, which is admittedly difficult to follow, but it seems like a small operation, so we're probably lucky to be seeing anything at all: Nadocast compared to SPC
 
I've also seen Nadocast pop up in my timeline. At a glance it looked similar to the NCAR neural network tornado forecast, at least the few times I've compared results. I spot-checked some of the above dates just now and that holds up, but where they differ, Nadocast looks to do better.
 
That's pretty interesting; I wasn't aware of this. I was interested to compare the forecasts for May 26, 2021, when so many chasers (including me) were drawn to SW KS but all the action was in NW KS / SW NEB and the TX PH.
Top picture is SPC and bottom is Nadocast. Nadocast arguably did better in downplaying the southern threat, although the two tornadoes in the northern TX PH are right around, or just outside, the skinniest outlook area.

Looks like the project runs only for the season; would love to have seen how Nadocast did with the unusual event in my area (southeastern PA / NJ) on 7/29/21 and with the outbreak associated with Ida on 09/01/21.

[Images: 62253355-0802-466D-A58F-814B555DCC94.png | 744F575B-8829-4A3E-BBF7-7A07B66416F2.png]
 
I don't know who is behind what I presume is a research experiment ...
From what I could find, it looks like this was done by someone named Brian Hempel: GitHub - brianhempel/nadocast: Tornado probabilities via post-processing weather model outputs with machine learning. His uchicago.edu page says "6th year Ph.D. student in the PL group at UChicago (graduation target: June 2021). Can we augment programming with direct manipulation interactions?" Nadocast is shown as one of his "Personal Projects".
 
graduation target: June 2021
Thanks for reminding me to update that!

Looks like the project runs only for the season; would love to have seen how Nadocast did with the unusual event in my area (southeastern PA / NJ) on 7/29/21 and with the outbreak associated with Ida on 09/01/21.
You can find old Nadocast operational forecasts with Twitter's advanced search. The maps on test.nadocast.com are reforecasts from the final, calibrated models. Not all operational forecasts this year were calibrated, but forecasts this year should still tell essentially the same story. Eventually there will be a website.
likely using some degree of recent performance/AI to modify the weights of individual members, as well as trying a variety of composite products (e.g., SCP, STP, etc.) to home in on better forecasts

It's gradient-boosted decision trees trained on thousands of features over as much data as I could get my hands on: HREF, SREF, RAP, and HRRR.
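As a rough, generic illustration of gradient-boosted trees on a wide feature matrix (synthetic data and made-up feature counts here, not anything from the actual Nadocast pipeline, which is its own Julia codebase):

```python
# Toy analogue of training gradient-boosted trees on many NWP-derived features.
# Features and labels are invented; this is not the real training setup.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(1)
n_samples, n_features = 20000, 500             # stand-in for "thousands of features"
X = rng.normal(size=(n_samples, n_features))   # fake HREF/SREF/RAP/HRRR fields
# Fake labels: rare positive class, loosely tied to a few of the features.
logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 4.0
y = (rng.random(n_samples) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = HistGradientBoostingClassifier(
    max_iter=300,        # number of boosting rounds
    learning_rate=0.05,
    max_leaf_nodes=31,
    early_stopping=True,
)
model.fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]
print("Brier score:", brier_score_loss(y_te, probs))
```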
 
Welcome Brian, I've interacted with you a few times on Twitter. Glad to see you show up here. I wanted to pass along something you had asked about in one of our previous discussions. SPC maintains a test version of the HREF for operational use before public release. We have something similar to what you are doing with machine learning probs in testing with the HREF. It's being done by a CIMMS SPC researcher named Eric Loken. I believe he is also using an RF and CNNs. Here is one of his more recent papers on the calibrated guidance he is producing for us internally. We hope by early next spring to be releasing the HREF V3 with this new guidance. Curious to see how your system evolves. I've found myself looking at it more and more as it's become better calibrated.
 

Only 14 predictors for the random forests? Oh wow! If I had time, I would like to trim down the number of predictors that Nadocast uses so it's in the hundreds rather than >10,000. But, during hyperparameter optimization, using more features tends to work as well as or better than using fewer.
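A sketch of the kind of feature-count comparison being described, on synthetic data (none of these features, sizes, or scores are from the real experiments):

```python
# Same model, cross-validated on nested subsets of features, to see how
# performance changes with feature count. Data here is synthetic.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 1000))
logit = X[:, :20] @ rng.normal(size=20) - 3.0   # signal lives in 20 columns
y = (rng.random(5000) < 1 / (1 + np.exp(-logit))).astype(int)

for n_feat in (50, 200, 1000):
    score = cross_val_score(
        HistGradientBoostingClassifier(max_iter=100),
        X[:, :n_feat], y, cv=3, scoring="neg_brier_score",
    ).mean()
    print(f"{n_feat:>5} features: mean neg. Brier score = {score:.4f}")
```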

If one can provide enough data, an appropriately-designed CNN should easily best Nadocast or any human. But "enough data" and "appropriately-designed" are significant caveats. The only inkling I had on the latter caveat was to start with U-Nets, which have been successfully applied to satellite imagery: U-Net: Convolutional Networks for Biomedical Image Segmentation
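For anyone unfamiliar with the architecture, here is a toy, shallow U-Net in PyTorch just to show its shape; the channel counts and input fields are placeholders, not a configuration from the paper or from any tornado model:

```python
# Toy, shallow U-Net: encoder downsamples, decoder upsamples, and skip
# connections concatenate encoder features back in at each resolution.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=8, base=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, kernel_size=1)   # per-gridpoint logit

    def forward(self, x):
        e1 = self.enc1(x)                       # full resolution
        e2 = self.enc2(self.pool(e1))           # 1/2 resolution
        b = self.bottom(self.pool(e2))          # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                    # (batch, 1, H, W) logits

# Example: 8 model fields on a 64x64 grid -> per-gridpoint probability map.
fields = torch.randn(2, 8, 64, 64)
probs = torch.sigmoid(TinyUNet()(fields))
print(probs.shape)   # torch.Size([2, 1, 64, 64])
```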
 
If one can provide enough data, an appropriately-designed CNN should easily best Nadocast or any human. But "enough data" and "appropriately-designed" are significant caveats.

I think significant is a bit of an understatement there :p But I'm glad to see you continuing to work on Nadocast and its derivatives. Like I said, we do see your guidance, and I think post-processing of NWP through machine/deep learning is going to bring about significant improvements in forecasting. Prior to this it was grid and temporal resolution changes; now it's about improving the signal-to-noise ratio.
 