"That is a _horrible_ policy to have on so many levels it actually makes me cringe to hear you say it. "
I am sorry that you feel this way, Tyler.
A bad report could cost the public millions if the sirens are activated: lost wages, fire departments moving vehicles, police, hospitals, private industry, etc. Have you ever seen what happens in a nursing home when a warning is issued? What happens in hospitals? Moving these people around for a bad tornado call is not good. I have seen folks being moved around in nursing homes during tornado warnings and would hate like hell to have some clown cause an unneeded death!
Let's be precise with our language please. There are only a few types of reports...
1. malicious = intended to cause action for no reason
2. bad = not completely accurate, open to interpretation, but a "true" report nonetheless
3. good = easy to understand, conveys proper information
4. verified/confirmed = proven by a third party or technology
Verified/confirmed is not possible with today's technology. It just isn't. So I'm going to ignore this one. Anyone asking for this on a real-time basis doesn't understand severe weather and, frankly, is a moron.
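To make the taxonomy concrete, here is a minimal sketch in Python. The enum and the names in it are my own illustration of the four definitions above, not any official Spotter Network or NWS vocabulary:

```python
from enum import Enum

class ReportQuality(Enum):
    """The four report types defined above (my shorthand, not official terms)."""
    MALICIOUS = "intended to cause action for no reason"
    BAD = "not completely accurate, but a 'true' report nonetheless"
    GOOD = "easy to understand, conveys proper information"
    VERIFIED = "proven by a third party or technology"

# Per the argument above, VERIFIED cannot be determined in real time,
# so any real-time triage only ever deals with the first three.
ACTIONABLE_IN_REAL_TIME = [q for q in ReportQuality
                           if q is not ReportQuality.VERIFIED]
```

The point of the sketch is simply that "verified" is not a category a real-time system can ever assign, so it drops out of the working set entirely.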
I think we all try to stop malicious reports. In fact, I suspect what you mean by "bad" is actually "malicious", but correct me if I'm wrong. There is _NOTHING_ you, me, the Spotter Network, or any government agency can do to stop malicious reports. So long as anyone solicits input from an open community, there will always be jerks. We can reduce their number and potentially their impact, but we cannot stop them.
Based on how you described your actions when attempting to "vet" a report, you are actually attempting to stop malicious activity. There is absolutely _NO_ way you can determine whether it is a "bad" report, a "good" report, or a "verified" report. It's not possible. You are not there. To presume to say you can is misleading.
The fact that we want no report over a bad report is just common sense. I just can't see it any other way.
If you mean "malicious", then I will agree. However, if you really mean "bad" (using my definition) we are absolutely in disagreement. I'll take 1000 "bad" reports if it means I get 1 "good" tornado report.
Sort of like saying I will take the car that doesn't work right over the one that does.
Again, assuming my "bad" definition, this is an inaccurate analogy because you presume to be able to tell whether the car is working or not. Staying with your analogy, what I'm saying is:
I will take a car that is _reported_ to be in working order by a _trusted_ third party over having no report because I can't actually see the car myself. How you determine the "trust" level of the person is the "vetting" process. (see below)
Crying wolf does not work.
Except in the case of malicious intent, there is _no_ way you can determine when "wolf" is being cried. Otherwise the NWS wouldn't need spotters. Actually, you may not even be able to determine when "wolf" is being cried maliciously in a given situation, so attempting to "vet" a reporter in the middle of severe weather is probably a losing battle anyway. That's the nature of soliciting input from the unwashed masses. There will always be some nut job who thinks causing a hospital to empty in the middle of a thunderstorm is somehow cool, and he/she will do anything you put in front of them to cause it. If he/she times their report perfectly, you will have _no_ way of knowing if the report is right or wrong. In that situation the proper response, as much as it sucks, is to err on the side of caution and evacuate. About the only thing you can do then is stop blue-sky tornado reports.
I am talking about vetting a report, not validating it. Please, there is a big difference between the two.
Again, let's be precise with our terms.
Vetting = determining the trustworthiness of the _reporter_...NOT THE REPORT!
Validating = determining the certainty of the report
On a small community scale, like most SKYWARN organizations, "vetting" can be successfully done by personally knowing the person sending in the report. However, if you get a report from an "unknown" person, there is _NOTHING_ you can do to "vet" the report in real time. That is why a known and common training standard is so essential. It provides the common "trust" with which you can "vet" the report without actually knowing the person.

Attempting to "vet" someone by looking at radar and trying to determine whether the person sounds credible is a _horrible_ way of doing it. It is prone to error and will prevent valid, time-critical information from reaching the intended target (NWS). This is no different from the many reports we get of NWS employees not reacting to a random phone call about a tornado: they are unable to "vet" the person on the other end. "Vetting" cannot be done in real time. It must be done _prior_ to the first contact. (e.g., common training and a national system)
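The "vet the reporter before first contact" idea can be sketched in a few lines of code. This is a hypothetical illustration only; the `Reporter` and `TrustRegistry` names and the training flag are invented for the example and are not a real Spotter Network API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reporter:
    """Illustrative reporter record (not a real Spotter Network structure)."""
    reporter_id: str
    completed_common_training: bool  # the shared national standard

class TrustRegistry:
    """Trust is attached to the REPORTER ahead of time, never inferred
    from the report itself in the middle of an event."""

    def __init__(self) -> None:
        self._vetted: set[str] = set()

    def enroll(self, reporter: Reporter) -> None:
        # Vetting happens here, prior to first contact.
        if reporter.completed_common_training:
            self._vetted.add(reporter.reporter_id)

    def is_trusted(self, reporter_id: str) -> bool:
        # At report time there is nothing left to judge: a simple lookup.
        return reporter_id in self._vetted
```

The design point is that all the expensive judgment happens at enrollment; when a report arrives during severe weather, the receiver does a lookup instead of interrogating a stranger on the phone.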
Time to leave this and let the performance and reputation of our group speak for itself. You can request those numbers from KMKX if you wish. Maybe this is why we are one of the largest and fastest growing groups in the Midwest.
Nobody is claiming you guys aren't running your show with professionalism and quality (except maybe your "vetting" process). I have no idea how you are doing it, and even if I did, I'm not the expert on running a SKYWARN organization. If in fact our definitions of a "bad report" are not the same, then I will say I think your "goal" is wrong, but how you reach that goal could be perfect. I'd still think you are wrong about "vetting/validating", though.
If it's anything like Ripley County, Indiana a few years back, they do an _awesome_ job with nothing but ham radios and teamwork. They could be more effective if they used Spotter Network, and it was for them that I originally built Spotter Network.
I still hear NWS employees saying "I'd rather the person call me" to submit a report. (I was sitting two feet from the LOT NWS rep at a recent conference; we were both on the panel during open question time when he said this.) I respect the guy and consider him my friend, but he's dead wrong on this. That only works if you assume you have only 5 spotters in your area. The first time 84 people are on hold, all reporting tornadoes, hail, and other information, and that one tornado report you needed didn't happen because you were trying to "vet" a pea-sized-hail reporter, that same NWS employee will scream about information overload. This is _exactly_ why those of us thinking ahead are trying to push, kick, and drag the severe weather emergency community (NWS, EMA, etc.) into a scalable form of information sharing. One-to-one voice communication is not it. (By the way, this is also why the "twitter" reporting experiment will fail: it has no mechanism for "vetting" reporters.)
Think of this like the military. If some PFC from the 101st Airborne calls the major in charge of artillery and asks for a strike because he's being overrun, does that major try to "vet" who is calling and whether he really should fire those rounds? Heck no! He fires the rounds. And why is that? Because the military has a community built on trust. Not trust of the person, but trust of the community. That trust is gained through the shared training and experience of the individuals that make up the community.
We need to stop trying to figure out if we can trust the person on the other end of the line. We need to figure out what it's going to take to allow us to trust the community, and make it happen. I believe (as do many) that a common minimal training standard is required as the first step. Only then can we stop saying "and who are you?" before we believe the report. We want people to trust the Spotter Network, not the individual members within it. The next step toward that reality is a common training requirement. Thus the announcement I made to start this thread.
-Tyler