
El Reno, Oklahoma tornado downgraded to EF3

The fact that one EF5 happens in a city and another in open fields should not make them unequal when they are evaluated scientifically.

I wish we could make this into a billboard and stick it right outside the offices of whoever made this decision. Just because a tornado didn't cause EF5 damage doesn't mean it wasn't capable of causing it, and in this case we have evidence that's about as good as you'll ever get. If the ultimate goal is to accurately represent risk - in this case, how often a given area experiences a tornado of a given intensity - how does it help to knowingly and willingly underestimate tornadoes? I firmly believe we're already underestimating the occurrence of intense tornadoes, possibly by a significant margin, and now we're going to reject data that could help us change that? Trying to understand this decision makes my head hurt.
 
It seems to me this has huge advantages and no downside.

The downside is that under the F scale, stronger winds were attributed to tornadoes than were actually necessary to cause the damage (due to construction weaknesses, etc.). That was the main reason for switching to the EF scale--the realization that you didn't need 150 mph winds to take out some houses. This was extensively demonstrated.

This is correct to a limited extent. However, just because a storm could take out a home with a lesser wind speed doesn't mean the wind speed wasn't higher, especially in a less populated area.

My concern, as I hope I have expressed, is not with the lower end of the F scale but with the upper end and the very poor guidance to design professionals.
 
Jacob, while I agree the original scale was not perfect, the evidence seems to indicate the original was much better than the EF scale. The original went up to 319 mph and explicitly allowed for measured winds. See: http://meteorologicalmusings.blogspot.com/2013/08/extremely-odd-decision-from-national.html

Or am I missing your point? If so, please clarify. I'd like to better understand.

Oh, I agree completely that the original made more sense than the system they're using now...if they had evidence that some of the wind speed estimates from damage surveys of older events were inaccurate, why not just adjust the estimated wind speed criteria but retain the allowance for measured winds?

What I really wish would happen, if a defined scale along the lines of Fujita's continues to remain a part of the process going forward, is that they would take all of the good elements of the original scale (upper limit for an F5, presence of an F6, allowance for measured wind speeds), combine them with the wind speed / damage adjustments of the EF-scale, come up with standards for how measured wind speeds are incorporated into ratings, and produce a more sensible solution.

My comment about the 150 mph / 250 mph was primarily a concern about how the information is being received and perceived by those it may matter most to...for example, right now there are some storm shelter companies advertising EF-5 capable shelters...but what does "EF-5 proof" actually mean? Does the consumer really know? Perhaps the most dangerous part of the EF-scale is the ambiguity above 200 mph...hypothetically speaking, the prospect of a family buying a shelter tested to, say, 250 mph, thinking it'll protect them in an EF-5, when measured wind speeds around 50 mph higher have been recorded in not one but two different tornadoes in the past 14 years, is horrifying.

So if there's going to be a scale, go back to something with upper boundaries. There is no way to absolutely eliminate ambiguity, but I would rather that ambiguity existed above where winds have been measured with high precision.
 
How is the scale flawed?

The primary flaw of the original scale, according to those who came up with the new one, is in the relationship between wind speeds and the damage they cause, and in how construction quality affects that relationship. I don't believe that Fujita's scale was severely flawed by any means...in fact, one could argue that it was extremely effective, considering when it was developed and how much less the scientific community knew about tornadoes at that time. I'm just saying it wasn't perfect.
 
It strikes me that this whole rating business is so bound by policy that it's missing an obvious point: the scale's purpose is to determine tornado wind speed. When Ted Fujita conceived the original F scale, his primary concern wasn't with the damage; it was with the wind speed that could be extrapolated from the damage. That's also the goal of the enhanced version. (It is, isn't it?) I mean, what's the point in simply wading through the remains of a neighborhood, saying, "Yep, that there house sure got blowed away!" and then assigning a rating? What does that accomplish? Aren't the NWS and the research community looking for more than that? Conversely, what is accomplished by scouring the path of a 2.6-mile-wide tornado in open country mostly devoid of DIs, then shrugging one's shoulders and saying, "Well, dang, we sure thought it was a violent tornado, but it looks like it was just an EF-3," when the radar evidence is screaming that the wind speeds did in fact easily exceed the EF-5 threshold?

Instead of starting with policy and making it the almighty determinant, how about starting with the objective--measuring tornado wind speeds as accurately as possible--and then working from there? Forget the damage-rating-only concept; it has been outdated by technology that has augmented and improved the options, and to force that older mold on research today is stupid. The goal is to provide meaningful assessments of wind speeds, not preserve old wineskins. Given that objective, why on earth would so powerful a tool as RaXPol be discarded just because it's not a "consistent" form of measurement? Maybe it isn't consistent, but it is accurate--the most accurate tool we've got so far. There's nothing admirable about consistency if it's consistently undependable, subject to all kinds of limitations such as the availability of DIs, and prone to a good deal of subjectivity.

Again, if the goal is to determine tornado wind speeds, then that, not policy (including whatever "consistency" is supposed to mean) should be the guiding light for shaping the rating system and its methodologies. Make it meaningful, for crying out loud.
 
I'm guessing "consistency" is meant in regard to climatology. Using mobile radar data would create a bias toward more strong/violent tornadoes in the Plains as compared to other regions because that's generally where the mobile radars are used. That's true, sure. But what's more important - maintaining a consistently inconsistent climatology by sticking to flawed methodologies, or enhancing the accuracy of our ratings even if we can currently only do it in certain areas? I think it's much more valuable to know that, say, there are an average of ten tornadoes per year in the Plains that attain wind speeds of 275+ mph. That's just an example, of course, but you get the idea.

I may have said this earlier, but I think we're substantially underestimating the occurrence of strong/violent tornadoes simply because a number of them - especially in the relatively sparsely populated Plains - go unrecorded as such every year. That's never been clearer than this year, with El Reno, Bennington, Rozel and possibly the Smith County, KS tornado on 5/27, depending on what you make of Sean Casey's "175 mph before the instruments failed" claim. And that's just one year. What are the odds that a tornado strikes something? Now, what are the odds that it strikes one of the relatively few DIs that are capable of registering EF4-5 intensity? Now, what are the odds that it does so while it's at peak intensity? If we have accurate tools that can give us this information without relying on a whole string of lucky events, they ought to be used. We need comprehensive guidelines for when/how the data is used, but we don't need to reject it outright.
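Just to put rough numbers on that chain of "what are the odds" questions -- the probabilities below are completely made up, purely to illustrate how quickly the odds compound:

# Purely illustrative: hypothetical probabilities, not measured values.
p_hits_anything = 0.5        # chance the path crosses any structure at all
p_hits_ef5_capable_di = 0.1  # chance that structure is a DI capable of registering EF4-5
p_at_peak_intensity = 0.2    # chance the hit happens while the tornado is at peak strength

p_survey_sees_peak = p_hits_anything * p_hits_ef5_capable_di * p_at_peak_intensity
print(f"Chance a damage survey can even see peak intensity: {p_survey_sees_peak:.1%}")
# -> 1.0% with these made-up numbers; the point is how small the product gets.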
 
Well said, Shawn. I think we've arrived at a point where the NWS needs to embrace inconsistency in methodology for the sake of using truer, more accurate measurements when they're available. After all, the inconsistency has always existed; it just hasn't been in method but in the distribution of DIs, with few across the open plains and hundreds in urban areas. I'll trade that form of inconsistency for the kind that breaks from traditional policy and practice for the sake of assigning more accurate, realistic ratings. Will there be a discontinuity in record-keeping--i.e. how we compare old documentation with new documentation going forward? Inevitably. But is that a reason not to go forward? No.

Presumably, as new technology continues to be incorporated and the options for measuring tornado intensity expand, "consistency" will be redefined to include every possibility available for obtaining the most accurate assessment of tornado winds. It's a matter of establishing a "new normal," I think. At some point, it needs to happen. Why not now?

The question is simple: If we can do better, shouldn't we do better? If the answer is no, then the reason ought to be a lot more convincing than, "Because we've never done it that way."

NOTICE: The opinions expressed here are not those of the NWS, just those of a cantankerous duffer who has no connection with any government agency. Shoot, I'm just a freelance editor/writer and a jazz musician. What the heck do I know about this stuff?
 
This was brought up earlier, but has anyone heard anything about the Rozel, Bennington and South Wichita tornadoes? Are they being downgraded as well? It's ridiculous enough that mobile radar data is being ignored, but it's even more ridiculous if it's being ignored only in this one instance. And what about El Reno's "record" width? Is that nullified too? I'd love to hear an explanation if they're downgrading El Reno but keeping the others as-is.
 
Methinks the main El Reno tornado could best be described as a huge EF1/EF0 tornado with several random and very fast-moving tornadoes embedded within it, some reaching EF5 intensity. I am glad that we never got very close to the monster.

Frank VA

You know, upon further reflection on accounts and videos of this event, I'd be of the opinion that this description fits about as well as any. It fits with the observation of the numerous vortices when the original tornado formed, and it explains why at some points the entire structure seemed to be on the ground, yet the damage assessment doesn't entirely back up the observed measurements.

I'm not much more than a weather enthusiast in all of this. Does anyone else out there with more knowledge than I wish to explore this possibility? I think part of the problem here lies not necessarily with the scale (though I certainly believe it needs an overhaul) but with the difference between what was observed and the actual structure of the tornado itself.
 

Well, we've known that all along. The structure here was complex, and the highest velocities seem to have occurred inside very small-scale subvortices within the parent circulation. Their individual rotational velocities + translational velocities = extreme wind speeds. That's generally the case with multivortex tornadoes, and it's why you'll sometimes see streaks of extraordinarily intense damage embedded within a larger field of moderate damage. That's another reason why the "it didn't cause EF5 damage" argument doesn't hold up, IMO. Yes, there were structures and/or trees scattered around the path, but if the highest velocities are occurring in very small subvortices, what are the odds they hit anything substantial dead-on? The fact that they didn't doesn't mean they couldn't have caused EF5 damage if they had, but that doesn't seem to be taken into consideration.
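As a quick back-of-the-envelope illustration of how those components can stack up (the numbers are hypothetical, not official El Reno figures):

# Hypothetical numbers for illustration only -- not official El Reno values.
parent_rotation_mph = 150     # tangential wind of the parent circulation
subvortex_rotation_mph = 100  # additional rotational wind of an embedded subvortex
translation_mph = 45          # forward motion of the whole tornado

# On the side of a subvortex where all three components line up,
# the ground-relative wind is roughly their sum.
peak_ground_relative_mph = parent_rotation_mph + subvortex_rotation_mph + translation_mph
print(f"Rough peak ground-relative wind: {peak_ground_relative_mph} mph")  # ~295 mph here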
 
Just for argument and debate's sake, which is fun: why can't they incorporate both wind speed and damage if both are available, or just damage if it's the only available way to measure, or only radar if it's the only way to measure? Perhaps this would help keep historical data intact, incorporate new technology and still cover any storm? I think Jacob mentioned something like this above. I guess it all depends on what the assignment of a rating is ultimately supposed to measure.

Of course, if there is a huge wedge that caused no damage and had no radar measurements on it, then it never happened. :)
 
So when are they downgrading Rozel to EF2? Because OBVIOUSLY it's happening now.

Like I said a few months back, are they more concerned with "consistency" or with accuracy? Let's put consistent inaccuracy over new technology that allows better ways to assign ratings. Weather is another thing politics should never influence. This topic is a Google Hangout waiting to happen.
 
This was brought up earlier, but has anyone heard anything about the Rozel, Bennington and South Wichita tornadoes? Are they being downgraded as well? It's ridiculous enough that mobile radar data is being ignored, but it's even more ridiculous if it's being ignored only in this one instance. And what about El Reno's "record" width? Is that nullified too? I'd love to hear an explanation if they're downgrading El Reno but keeping the others as-is.

Shawn -- The answer will be known shortly, since the May 2013 Storm Data have been finalized and will be available on the NCDC Storm Data website soon (in the coming days?)... I know that Bennington has been downgraded back to EF3 despite EF5-level winds measured by a mobile radar. I suspect that the others will be as well.

My bigger question is what about tornadoes BEFORE May 2013 that were rated based on high-resolution mobile radar data... I think specifically of the June 5th, 2009, Goshen County, WY, tornado, the prime case study of VORTEX 2; it was rated, as far as I know, based on mobile radar data in the absence of any meaningful DIs in that very rural area of Wyoming. There are other historical cases as well. Will offices need to go back years to re-rate tornadoes?

I suspect the answer is "no", because I think the NWS will have guidelines in place to account for observational data (anemometers, pods, mobile radars, etc.) in the next couple of years. As such, I think some of these tornadoes may be re-upgraded again in the future. We'll see, though.

It's been apparent for a while that there are many more strong/violent tornadoes in the U.S. than the current climatology suggests. As we know, given that much of the Plains is sparsely populated, the damage-based methods for EF-scale rating will underestimate the frequency of stronger tornadoes. As some evidence of this, one can read Curtis R. Alexander and J. Wurman, 2008: Updated mobile radar climatology of supercell tornado structures and dynamics, Proceedings, 24th Conference on Severe Local Storms, Savannah, GA, American Meteorological Society:

Mapping the DOW-observed peak ground-relative velocities to either the Fujita Scale (Fig. 7) or the Enhanced Fujita Scale (Fig. 8) shows a preferred intensity in the F/EF2 range in a more bell-shaped (although possibly skewed) distribution. This distribution is striking when compared to the much more linear (or even exponentially decaying) damage-based intensity distribution from NCDC Storm Data.

A hypothesis for this discrepancy in supercell tornado intensity distributions is the overestimate of the number of weak tornadoes (F/EF 0-1) due to a lack of damage surveys and/or damage indicators resulting in a persistent low bias to intensity estimates of strong tornadoes (F/EF 2-3) (Doswell and Burgess 1988). Violent tornadoes (F/EF 4-5) may be infrequent enough and are usually well documented to permit an accurate characterization of the upper end of the intensity distribution.


It should be noted that the Storm Data tornado intensity distribution for the DOW-sampled tornadoes appears very similar to the Storm Data tornado intensity distribution in the central and southern plains (Nebraska, Kansas, Oklahoma and Texas) in April, May and June for all reported tornadoes between 1995-2003 (not shown). This similarity would preclude a field-project sampling bias as the primary source of the discrepancy between DOW-observed and damage-based intensity distributions.
[Figure: AlexanderandWurman_Fig8.png]

Above: From Alexander and Wurman -- The shaded bars in the background are the ratings from Storm Data; the solid bars in the foreground are based on DoW data.
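For anyone curious what "mapping measured velocities to the EF scale" amounts to in practice, here's a minimal sketch -- the sample peak winds are hypothetical, and the bins are simply the published EF-scale 3-second-gust ranges in mph:

# EF-scale 3-second-gust ranges (mph); the sample speeds below are hypothetical.
EF_BINS = [(0, 65, 85), (1, 86, 110), (2, 111, 135),
           (3, 136, 165), (4, 166, 200), (5, 201, float("inf"))]

def ef_category(speed_mph):
    """Return the EF category whose wind range contains speed_mph, else None."""
    for cat, low, high in EF_BINS:
        if low <= speed_mph <= high:
            return cat
    return None  # below the EF0 threshold

measured_peaks = [92, 140, 205, 118, 170, 268]  # hypothetical mobile-radar peak winds
counts = {}
for v in measured_peaks:
    cat = ef_category(v)
    counts[cat] = counts.get(cat, 0) + 1

print(counts)  # e.g. {1: 1, 3: 1, 5: 2, 2: 1, 4: 1} -- a radar-based intensity distribution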
 
Jeff, thanks for your valuable input. It seems it can be said with confidence that the reality is EF-5, with outdated, and possibly irrelevant, bureaucratic policy saying EF-3. Are there any truly compelling arguments for EF-3 that we haven't heard?

Of course, it may seem to some that I have a "dog in this fight", but I really have no problem accepting EF-3 if the evidence points to that.
 
I think this has brought to a head the issue we've always faced when classifying tornadoes and estimating wind velocities, and that is: How do we do it?

The Fujita/EF scales are damage intensity scales - they're not wind velocity intensity scales in the way that the Saffir-Simpson Hurricane scale is. How did we get to the latter? Beaufort's wind scale was where it all started, initially based on the effect winds had on a ship's canvas and, later, when anemometers were invented, changed to specific velocities (let's gloss over the averaging period used for 'mean' velocities for the sake of this!). Thus, there was a change in the way the scale was defined when new technology appeared.

Move to the hurricane scale - this was invented to give an idea of damage, but based on wind velocities. Initially it included other effects too, but from 2009 it was changed to a purely wind-based scale. Hurricanes, as we know, can be classified by using satellite information to look at the cloud structure (the Dvorak technique). Obviously, this was introduced in the light of new technology - satellites.

Now move to tornadoes - we now have the ability to sample, at fairly close range, the wind fields of a tornado, rather than having to rely solely on damage indicators from site surveys. I realise a fair amount of work went into generating the damage indicator list, but here we have a way of measuring wind velocity in a scientific manner! Think about all the 'old' tornadoes which were post-rated from either damage photos or descriptions - those F numbers have remained, despite the dubious scientific quality of the damage estimates.

There are at least 2 ways forward:

1) Record a tornado either by the damage it causes or via remote sensing;

2) Record a tornado's damage intensity via the current method, and separately record its wind velocity if sampled (a rough sketch of what such a record might look like is below).
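Here's a minimal sketch of what option 2 could look like as a record structure -- the field names and example values are my own invention, just to make the idea concrete:

# Sketch of option 2: keep the damage-based rating and any measured winds as
# separate fields. Field names and example values are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TornadoRecord:
    event_id: str
    ef_damage_rating: Optional[int]          # from the damage survey; None if no usable DIs
    measured_peak_wind_mph: Optional[float]  # from mobile radar/anemometer, if sampled
    measurement_source: Optional[str]        # e.g. "RaXPol", "DOW", "anemometer"

# A hypothetical open-country case: a damage-based EF3 alongside radar-sampled winds.
example = TornadoRecord(
    event_id="2013-05-31-el-reno",
    ef_damage_rating=3,
    measured_peak_wind_mph=295.0,
    measurement_source="RaXPol",
)
print(example)

Keeping both fields side by side would mean neither the damage-based climatology nor the measured winds has to be thrown away.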

I'm sure others can come up with a more sophisticated solution, but the idea of scaling a tornado's supposed intensity up and then back down again is confusing, especially for the public.
 