John Farley
Supporter
I thought about posting this in the "Too much lead time?" thread, but I think it is of broader interest than the specific issues addressed there, so I will start a new one.
On another board, I just got word of a new study on warning response. You can view the entire journal article at:
http://www.nwas.org/ej/pdf/2009-EJ7.pdf
Here is a summary of the study's findings by the author:
"For all warnings covering numerous events in study, the average reception score was 88.5% (very high, likely signaling the bias). The average interpretation score was 63.5% and “appropriate†response score was 37% (combined interpretation/response score around 23%). Thus, for the study, the WSR score was 100x0.885x0.635x0.37=0.21, or 21%. This was roughly double what was hypothesized…but still not necessarily a high score (average warning not successful from reception to response for nearly 80% of the warned population). Interestingly, given the bias, while reception was substantially higher than expected, the combination of interpretation and response (about 23%) was precisely where it was hypothesized to be. This raises an interesting and disturbing question. Had we been able to acquire a more objective, random sampling of the general public, would we not only have seen a reduction in reception closer to what was hypothesized, but also a reduction in interpretation/response below what was hypothesized?
"Preliminary results show no meaningful impact of age, education, area of the country, pre-event publicity (e.g. prior day event, watch issuance, SPC day1 risk category) on WSR score. Some impact is seen from warning type, with WSR scores double for TORs compared to SVRs (the main impact being appropriate response which was double for TORs compared to SVRs…this is not necessarily surprising given how TORs are treated differently than SVRs by people/media). Some impact is also seen from one’s view of strike probability (the chances of one’s location being struck by high-end, life-threatening conditions). The WSR score was 28% who viewed strike probability as being 50% or greater, while those viewing the probability as less than 25% yielded a WSR score of just 18%. Also, those who had prior meteorology or storm spotter training yielded a score of 26-28%, while those not having any of that training had a WSR score of 20%."
Note that this study applies to many different types of warnings, not just tornado warnings - probably another reason not to put it in the aforementioned thread. But do note that the rate of appropriate response was higher for TOR warnings than for SVR warnings, as I would expect to be the case.