So, no, I don't think two hour WoF tornado warnings are a good idea.
What's the difference between a Warning and a Watch? Typically (and simplistically), we use the term warning (in the convective sense) when a hazard is imminent (on a <1 hour time scale), and a watch when a hazard is possible (on a <8 hour time scale). What happens when we start developing products in between the watch and warning time scales? Ones that begin to blend the two? What do we call them?
Let's say that I value a missed detection considerably more than a false detection. In other words, I'd rather have a high FAR than a low POD. In this case, I might take the same action I would for a warning (under the current paradigm) whenever my threat increased to or exceeded 25%. What if I value false detections more than missed detections? In that case I might not take action until my threat increased to 75%.
This gives rise to the question, "What is the threat threshold at which the warning forecaster will issue a warning?" Is it 25%, 50%, 75%? We don't know because it varies for each forecaster. WoF aims to remove this ambiguity by making the underlying probabilities available.
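To make the trade-off concrete, here is a minimal sketch using the standard contingency-table definitions of POD and FAR. The counts in the example are made-up illustration values, not real verification statistics:

```python
def pod(hits, misses):
    # Probability of Detection: fraction of observed events that were warned.
    return hits / (hits + misses)

def far(hits, false_alarms):
    # False Alarm Ratio: fraction of warnings that verified no event.
    return false_alarms / (hits + false_alarms)

# Illustration only: a forecaster who acts at a low threat threshold
# (say 25%) issues more warnings -- higher POD, but also higher FAR.
low_threshold = (pod(hits=8, misses=2), far(hits=8, false_alarms=12))

# Acting only at a high threshold (say 75%) cuts false alarms
# at the cost of missed detections.
high_threshold = (pod(hits=5, misses=5), far(hits=5, false_alarms=2))
```

The point is that neither threshold is "correct"; each encodes a different relative cost of misses versus false alarms, which is exactly the choice WoF would hand back to the user.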
Consider this example:
An end user wants to be notified anytime his threat of a tornado within some long time period (say 6 hours) is greater than 10%, again when the threat within an intermediate period (say 3 hours) is greater than 25%, and again when the probability of a tornado within a shorter period (say 1 hour) is greater than 50%.
With the probabilities generated by WoF, the end user can be notified the moment his 6 hour probability of a tornado reaches 10%. This would be his short term tornado outlook. As long as the 10% threshold is exceeded, he knows he's at risk of tornadoes -- the user begins to maintain weather awareness. He also gets notified whenever the 3 hour tornado probability exceeds 25%. He can consider this his tornado watch. As long as his probability of a tornado is greater than 25%, he is in a tornado watch. When the probability of a tornado within an hour exceeds his 50% threshold, he can consider this his personal tornado warning. As long as his probability remains above 50%, he's in a tornado warning.
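The tiered logic above can be sketched in a few lines. This is a hypothetical illustration, assuming WoF supplies a tornado probability for several look-ahead windows; the tier names, thresholds, and function name are mine, not any real WoF interface:

```python
# Each tier: (look-ahead window in hours, probability threshold, label),
# checked from the most urgent (shortest window) down to the outlook tier.
# These defaults mirror the example user's thresholds; another user
# (or a local NWS office) could substitute different values.
USER_TIERS = [
    (1, 0.50, "warning"),
    (3, 0.25, "watch"),
    (6, 0.10, "outlook"),
]

def alert_level(probabilities, tiers=USER_TIERS):
    """Return the most urgent tier label whose threshold is met.

    probabilities maps a look-ahead window (hours) to the WoF
    probability of a tornado within that window.
    """
    for window, threshold, label in tiers:
        if probabilities.get(window, 0.0) >= threshold:
            return label
    return None  # below every threshold: no notification
```

For example, a user whose 1 hour probability sits at 0.60 lands in his personal "warning" tier, while a user seeing only a 0.30 probability over 3 hours lands in "watch". Swapping in a different tier list is all it takes to encode a different user's risk tolerance.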
The key is that each user can use his or her own thresholds (both in time and threat) to determine what constitutes a watch or warning, instead of relying on someone else's thresholds that may not suit the end user's needs. Users who don't know what their thresholds should be can turn to private weather companies to help develop appropriate ones. The general public can simply rely on the default thresholds set by the local NWS office for that event. In this example, the current warning structure remains in place, but end users can alter the default settings to best reflect their needs.
WoF should be considered as a continuous flow of information from short-term outlook to warning scales.
To close, the NWS really needs to work on the false alarm problem but keep the current warning structure in place.
WoF is being developed by OAR, not the NWS. NSSL (which is part of OAR) and OAR are supposed to push the edge of capabilities and develop next-generation products, and WoF is that next generation of convective forecasting. The NWS already assigns probabilities to convective watches; shouldn't NSSL work on making those probabilities as scientifically robust as possible? In any event, even if NSSL/OAR develop these technologies, it's up to the NWS to implement them. But questions of implementation don't mean NSSL/OAR shouldn't work toward this goal. A lot of important things look likely to come out of this research that will benefit many more areas than just convective warnings.