Rating Tornado Intensity Based on Mobile Radar

Thanks for the explanation.

So what about this tornado is still being reviewed?
 
Part of the research behind the F-scale (and the subsequent switch to the EF-scale) was trying to determine the actual wind speeds needed to produce each level of damage. Scientists trusted those wind speed estimates enough to back way down on the numbers when the EF-scale replaced the F-scale. IMO, if they went to the trouble of establishing that the F-scale wind estimates were too high, it would be foolish not to incorporate measured wind speeds into the ratings now that we have the technology to get them.

The one argument I keep seeing against using mobile radar measurements is "it's not consistent with the methods used to rate tornadoes in the past." IMO this argument is invalid. We created a damage scale based on estimated wind speeds because we had no way to measure actual wind speeds. Now that we do, should that not be a top-tier indicator when assigning ratings? "But violent tornadoes in the past that weren't sampled and didn't hit anything always got F0 ratings. That's messing with the data." Well, science is a fluid endeavor, and as we learn, things change. If those past tornadoes had been sampled, they'd have been handed ratings based on actual wind speeds...but they weren't. That this is "inconsistent" with today's measured ratings is simply a circumstance. We didn't have the means to assign a rating based on wind speed measurements, so we went with damage indicators and estimates...because we had no other options. Do we now ignore new technology that allows us to better sample actual tornado wind speeds and give more truthful ratings just so we can "remain consistent" with how we used to do it?

That seems a step backwards.
 
Unless tornadoes are being sampled with mobile radars in North Dakota, Alabama, Wisconsin, etc., maybe an "asterisk" should be placed next to certain tornadoes, just like some want done with baseball records. :)
 
The argument for seems to be: if we have these accurately measured wind speeds, why shouldn't we use them? Well, the biggest problems are that we don't really know how well speeds measured at 150 m represent the winds at the surface, and we have no rules for applying them. What if they were measured at 250 m? 500 m? I personally think it's a good idea to try to capture the true strength of a tornado when possible, but I'm sure some of the sticky points being argued right now resemble the following:

1. There needs to be a consistent and easily applied method for incorporating these measurements into tornado ratings.
2. Part of this consistent method will require deep analysis of what it means at the surface when 300 mph winds are measured at 150 m. Or 50 m, or 500 m. That last 100 m or so to the ground still seems to be unknown in many ways, and is consequently an area of heavy focus.
3. There needs to be some way to denote when EF-3 damage is found but EF-5 winds are measured, and to put an asterisk next to that entry (a rough sketch of what such an entry might look like follows this list). Even though previous Storm Data has tons of errors, is affected by urban sprawl, etc., at least you knew a consistently applied scale was being used and you knew its limitations.
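
Purely as an illustration of point 3 (and not anything the NWS actually does), here is a minimal sketch of what a dual-entry record with an "asterisk" flag might look like. The field names, the flag, and the idea of storing the measurement height are my own assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TornadoEntry:
    """Hypothetical dual-rating record; field names are assumptions, not NWS practice."""
    event_date: str
    location: str
    ef_damage_rating: int                          # rating from damage indicators alone
    ef_measured_rating: Optional[int] = None       # rating implied by radar-measured winds, if sampled
    measurement_height_m: Optional[float] = None   # height of the radar measurement above ground
    flagged: bool = False                          # the "asterisk": measured rating exceeds damage rating

def official_rating(entry: TornadoEntry) -> str:
    """Report the higher rating, flagging the entry when measured winds exceed the damage rating."""
    if entry.ef_measured_rating is not None and entry.ef_measured_rating > entry.ef_damage_rating:
        entry.flagged = True
        return (f"EF{entry.ef_measured_rating}* (EF{entry.ef_damage_rating} damage, "
                f"radar winds measured at {entry.measurement_height_m:.0f} m)")
    return f"EF{entry.ef_damage_rating}"

# Example: EF-3 damage found, but EF-5-level winds measured at roughly 150 m above ground.
print(official_rating(TornadoEntry("2013-05-31", "El Reno, OK", 3, 5, 150.0)))
# -> EF5* (EF3 damage, radar winds measured at 150 m)
```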

On a related note, I assume the survey teams strive to be unbiased, but when you have something like the Moore tornado and you find such a relatively small area of EF-5 damage, I wonder if there was pressure creeping in from politicians, friends, family, peers, and internally. A lot of people really, really wanted Moore to be an EF-5 because it's unthinkable that anything but the strongest tornado could have killed so many, just as no one wants to think a "lowly" EF-3 took the lives of Tim, Paul, and Carl. Maybe they looked harder at Moore than they would have at Bowdle, Langley, or Wadena? Even the possibility is a strike against DIs and a win for measured speeds, if we can figure out how to use them appropriately.
 
Well, the biggest problems are that we don't really know how well speeds measured at 150 m represent the winds at the surface, and we have no rules for applying them. What if they were measured at 250 m? 500 m?

Is that claim based on research? I'm under the impression that radar winds were compared with TIV measurements a few years ago and deemed fairly accurate. Are you saying that's not true?
 
It's getting interesting.

http://www2.ucar.edu/atmosnews/opinion/9696/terrible-tornado

The consensus document that guided creation of the enhanced Fujita scale (see PDF) gives the green light to use radar data in this way. It states: “The technology of portable Doppler radar should also be a part of the EF Scale process, either as a direct measurement, when available, or as a means of validating the wind speeds estimated by the experts.”

I can see Doswell's point. If a tornado produces measurable EF5 winds, it's at least *capable* of EF5 damage even if it doesn't hit anything. It seems like knowing the true strength of a tornado in a given environment can be used as a basis for forecasting future events in similar environments, or at least assessing the potential threat.
 
Is that claim based on research? I'm under the impression that radar winds were compared with TIV measurements a few years ago and deemed fairly accurate. Are you saying that's not true?

I don't know if it's true or not - I haven't seen anything counter to that assumption, and we still have research teams claiming that they need to find out what's happening at the surface for tornadoes, dropping probes in the path. "Fairly accurate" doesn't seem good enough for scientific progress, although the EF scale admittedly throws accuracy right out the window with its damage estimations.

edit: VVV haha, good enough for me.
 
"Fairly accurate" was my paraphrasing of stuff I've seen from DOW and OU people. I've not seen the research myself, but if it's good enough for Howie it's good enough for me :)
 
The NWS Director sent out a memo stating that mobile Doppler data CANNOT be used in changing EF-scale ratings...

http://cadiiitalk.blogspot.com/2013/06/the-ef-scale-ratings-brouhaha.html

Okay...if this is true and wasn't JUST written in the last week or so, then a number of changes will need to be made to some tornadoes from as far back as two years ago. I seem to recall the El Reno tornado of 24 May 2011 was also rated EF5 based on mobile Doppler radar measurements. Did this not come up then (memory isn't great, but I don't recall it happening)? Seems like quite the oversight on the part of local WFOs, or a strike back from headquarters in light of the debate this has sparked.
 
It came up and I understand there was a great deal of debate, but still, they went with EF5 eventually. How was it okay then but not now? I don't mean to throw anyone under the bus, but it's hard to imagine how this could just be coming about now after a number of tornadoes have already been upgraded and data has been publicly released.
 
Putting in one more plug for a dual rating system. Everybody stays happy, and studies needing to use the data have access to whichever system fits their sampling methods. Maybe it would even reduce the mass hysteria that seems to have evolved around the fact that Central OK has been hit by 3 EF5s in 3 seasons (but only 1 if counting the "old way"), compared to 1 F5 from 1983-2010. I have had several conversations with my non-weather-addict friends about this fact and their opinion that it is due to global warming, etc. The reality is that the EF rating probably means very little to the practicing scientist nowadays, but it means a lot to the lay public.


The radar (probe?)-based system could be named after a famous meteorologist who pioneered sampling the immediate tornado environment.
 
Why not use the best instrument you have? It's kind of like when competitive swimming switched from stopwatches to electronic timing systems. Did that corrupt the database of swimming records? I hardly think so; it just improved the quality of the measurements going forward.
 
In this case the NWS is missing the boat. Why not use all available data to get the most accurate intensity estimate of a tornado you can? Nothing is perfect, but it is silly for them to now say the EF rating never should have been increased because of a protocol that needs to be updated.
 
Speaking of changing, or calling into question, tornado ratings based primarily on damage indicators....

http://www.joplinglobe.com/topstori...ineers-release-study-of-Joplin-tornado-damage

The ASCE report says that, due to lesser quality building construction that made structures vulnerable at lower wind speeds, NO damage truly indicative of EF-5 level winds could be found; that only 4 percent of the damage reached EF-4 proportions; and that 83 percent of damage was caused by winds of EF-2 or less.

Does this mean Joplin was not "really" an EF-5 after all? I wouldn't jump to that conclusion. The NWS does not intend to change its rating, and there were (from what I understand) other DIs besides structural damage (e.g., concrete parking blocks moved/tossed).

But this report does open up a whole new can of worms. If a tornado initially rated EF-3 can be upgraded based on data uncovered later, then I'd think it would also be possible to downgrade an EF-5 or EF-4 based on later data. Whether anyone would actually do this is another story.
 
If hard data indicates a tornado's EF rating should be lowered, then of course that is what should happen.

The EF rating by damage estimation is frankly dodgy at best, and is severely impacted by independent variables that need to be but can't be controlled, like differing localized construction standards. But that's always been a problem; we've put up with it because we had no other way to determine tornado wind speeds outside of guessing from the visible damage.

I really don't see why people whose motivation is scientific wouldn't jump on direct radar data like a free lunch. Which is what the local office correctly did in the case of El Reno. If the administration of NOAA actually has a policy ordering that the most accurate data is to be disregarded in favor of findings that conform the most to historical determinations - that just seems incredibly backwards to me. But I suppose it's to be expected - the administrators are administrators, not scientists, and are naturally more concerned with smooth sailing than anything else.

Damage estimates aren't going away, of course; there's simply not enough money to put a DOW on every single tornadic storm, and there are some WFOs that will likely never have one at all. But when it's there and has data to share, raid the commissary! If nothing else, having measured wind data will help us adjust our damage estimates to be more accurate.
 
I keep hearing "mobile DOW" and how great it is where it's deployed. But if we are going to rate tornadoes on radar wind speeds, and we are only going to place real emphasis on mobile DOWs, then I don't see how this works out in terms of providing meaningful data for the future. The numbers will be skewed. If we can't use data from NWS stationary radar at least out to 75% of its effective range, I don't see future data as being reliable, because there will never be a mobile DOW on all the larger tornadoes across the US. The recorded data will be reliable, but the UNrecorded data that was "ignored" because a mobile unit was not on the storm will only serve to skew the newer data. (Oddly, some select twisters in OK and KS [where the mobile DOWs reside] will no doubt become "more severe" beginning with season X.)

There are too many nuts out there looking for any excuse to implement more government regulations and taxes to counter "global climate change" that will be "supported" by the rapid escalation of tornado intensities in OK and KS if the weather science community is not careful in how they rate tornadoes moving forward.
 
I keep hearing "mobile DOW" and how great it is where it's deployed. But if we are going to rate tornadoes on radar wind speeds, and we are only going to place real emphasis on mobile DOWs, then I don't see how this works out in terms of providing meaningful data for the future. The numbers will be skewed. If we can't use data from NWS stationary radar at least out to 75% of its effective range, I don't see future data as being reliable, because there will never be a mobile DOW on all the larger tornadoes across the US. The recorded data will be reliable, but the UNrecorded data that was "ignored" because a mobile unit was not on the storm will only serve to skew the newer data. (Oddly, some select twisters in OK and KS [where the mobile DOWs reside] will no doubt become "more severe" beginning with season X.)

That might work if the WSR-88Ds that comprise the NEXRAD network had 500 m diameter antennas, or were also X-band and spaced about 30 km apart a la a CASA-type network. But that is not likely to happen in our lifetimes. The WSR-88Ds are S-band with a 1 deg beamwidth. This means they pretty much cannot resolve any tornadoes that are not very close to the radar (probably less than 10-20 nmi for the largest of tornadoes...and the smallest of tornadoes will pretty much always be too small to be resolved). I think the El Reno tornado would've been directly sampled had the beam been lower. Given its peak size, there were easily several consecutive azimuths that covered the tornado. However, the beam was above 2000 ft ARL, so it was likely sampling the low-level mesocyclone and not the tornado itself.
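
To put rough numbers on the beamwidth point, here is a back-of-the-envelope sketch (my own, not from the post) using the standard 4/3-earth-radius beam-height approximation and the lowest routine 0.5 deg elevation tilt; actual heights also depend on refraction and radar siting.

```python
import math

def beam_width_m(range_km: float, beamwidth_deg: float = 1.0) -> float:
    """Approximate cross-beam footprint of a 1 deg beam at a given range."""
    return range_km * 1000.0 * math.radians(beamwidth_deg)

def beam_height_m(range_km: float, elev_deg: float = 0.5) -> float:
    """Approximate beam-center height above the radar (4/3-earth-radius model)."""
    ae = (4.0 / 3.0) * 6_371_000.0  # effective earth radius, m
    r = range_km * 1000.0
    return math.sqrt(r**2 + ae**2 + 2.0 * r * ae * math.sin(math.radians(elev_deg))) - ae

for rng_nmi in (10, 20, 40, 60):
    rng_km = rng_nmi * 1.852
    print(f"{rng_nmi:>2} nmi: beam ~{beam_width_m(rng_km):.0f} m wide, "
          f"center ~{beam_height_m(rng_km) * 3.281:.0f} ft above the radar at 0.5 deg")
```

At 20 nmi the 1 deg beam is already roughly 650 m wide and its center sits well over 1000 ft above the radar, which is why only the largest, closest tornadoes are resolvable by the fixed network and why direct sampling falls to mobile radars.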
 
The decisions on how best to pursue science shouldn't even consider how resulting data would be used for this or that political purpose. It does not belong in the discussion.

What needs to be done:

Continued use of damage estimates and the EF scale

Use of mobile radar to determine which actual measured wind speeds are associated with which damage-derived EF ratings (a rough sketch of what such a calibration might look like follows this list)

Corresponding revision of the EF-scale wind speeds

Corresponding adjustment (if necessary) of historical ratings
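
Purely as an illustration of that calibration idea (not any official procedure), here is a minimal sketch of how co-located damage-based ratings and mobile-radar wind measurements could be pooled to suggest revised EF wind-speed bounds. The paired values below are invented for the example.

```python
from collections import defaultdict
from statistics import median

# Hypothetical (damage-derived EF rating, peak radar-measured wind in mph) pairs
# from tornadoes that were both surveyed and sampled by mobile radar. Invented values.
paired_cases = [
    (3, 165), (3, 190), (3, 210),
    (4, 185), (4, 205), (4, 230),
    (5, 215), (5, 250), (5, 290),
]

by_rating = defaultdict(list)
for ef, wind_mph in paired_cases:
    by_rating[ef].append(wind_mph)

# The spread and median of measured winds within each damage category hint at
# how the EF wind-speed bounds might be revised once enough paired cases exist.
for ef in sorted(by_rating):
    winds = sorted(by_rating[ef])
    print(f"EF{ef}: n={len(winds)}, measured winds {winds[0]}-{winds[-1]} mph, "
          f"median {median(winds):.0f} mph")
```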
 
Speaking of changing, or calling into question, tornado ratings based primarily on damage indicators....

http://www.joplinglobe.com/topstori...ineers-release-study-of-Joplin-tornado-damage

The ASCE report says that, due to lesser quality building construction that made structures vulnerable at lower wind speeds, NO damage truly indicative of EF-5 level winds could be found; that only 4 percent of the damage reached EF-4 proportions; and that 83 percent of damage was caused by winds of EF-2 or less.

Does this mean Joplin was not "really" an EF-5 after all? I wouldn't jump to that conclusion. The NWS does not intend to change its rating, and there were (from what I understand) other DIs besides structural damage (e.g., concrete parking blocks moved/tossed).

But this report does open up a whole new can of worms. If a tornado initially rated EF-3 can be upgraded based on data uncovered later, then I'd think it would also be possible to downgrade an EF-5 or EF-4 based on later data. Whether anyone would actually do this is another story.

You're right, there were concrete parking stops near St. John's ripped up and thrown as far as 50 or 60 yards as well as several manhole covers pulled up and thrown. It makes you wonder how many homes of "superior construction" are actually out there. I'd guess not many at all.

Also keep in mind the direct quote from the EF-scale proposal.

The technology of portable Doppler radar should also be a part of the EF Scale process, either as a direct measurement, when available, or as a means of validating the wind speeds estimated by the experts.
 
You could also make a convincing analogy to hurricanes, which are sampled and rated based on data from dropsondes and satellites. Even if a Category 5 in the middle of the Gulf weakens to a depression before landfall (doing no damage), it still goes on the books as a 5, simply because it truly was that intensity at some point in its life.
 
What we have here is a clash between the desire to use the best data possible (using direct measurements where available) versus the desire to achieve climatological consistency.

Grazulis said it best: "With consistency, one can at least understand the problems with the data." (The Tornado: Nature's Ultimate Windstorm, Ch. 7, p.144)
 
This is a good debate, and it's kind of a fun problem to try to solve. I lean more toward augmenting the current formula with the best info we have and maybe developing a population impact rating, so older data doesn't get tarnished, you get the ability to use newer data sources like mobile radar when applicable, and it shows the impact the tornado had on the populace as a whole.

Example: take the higher of the EF damage rating or the radar-derived rating (if available), add the population impact rating (scale of 1 to 5), then divide by two. So a gigantic tornado in the middle of nowhere that earns an EF-5 rating (by damage or radar-indicated winds) but has a population impact rating of 1 comes out to a 3. If a tornado tears through a city or town and does EF-3 damage, maybe it gets a population impact rating of 5, so the overall rating would be 4.
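
A minimal sketch of that arithmetic exactly as described above (the function name is mine, and since the post doesn't say how to round half-step results, the raw average is returned):

```python
from typing import Optional

def combined_rating(ef_damage: int, population_impact: int,
                    ef_radar: Optional[int] = None) -> float:
    """Average the stronger of the two intensity ratings with a 1-5 population impact score.

    ef_damage:         EF rating from the damage survey (0-5)
    ef_radar:          EF-equivalent rating from mobile-radar winds, if the tornado was sampled
    population_impact: hypothetical 1-5 score for how heavily populated the damage path was
    """
    intensity = ef_damage if ef_radar is None else max(ef_damage, ef_radar)
    return (intensity + population_impact) / 2

# The two examples from the post:
print(combined_rating(ef_damage=5, population_impact=1))              # violent tornado, open country -> 3.0
print(combined_rating(ef_damage=3, population_impact=5, ef_radar=3))  # EF-3 damage through a town    -> 4.0
```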
 
You could also make a convincing analogy to hurricanes, which are sampled and rated based on data from dropsondes and satellites. Even if a Category 5 in the middle of the Gulf weakens to a depression before landfall (doing no damage), it still goes on the books as a 5, simply because it truly was that intensity at some point in its life.

The only difference, of course, is that all hurricanes can be monitored. That isn't at all the case with tornadoes. But still, I think accuracy is more important than consistency with a scale that isn't consistent to begin with. Not to mention that the EF-scale proposal explicitly stated that radar data should be used when available.
 