Dan,
I forwarded your inquiry to Randy Zipser and Dr. Joe Golden, both of whom were very active in photogrammetric studies of tornadoes during the heyday of tornado wind speed research at NSSL in the early to mid 1970s. Randy, who also originally co-created the physical Stormtrack newsletter with Dave Hoadley in 1977, had this in-depth response to your question. With his permission, I've posted the entire message here:
If you have a known camera position and a known tornado location, why isn't photogrammetry capable of measuring windspeeds in tornadoes, and why isn't it used regularly? The main drawback, of course, is that you can't see interior vortices inside thick condensation or debris, but any visible debris or condensation features should be usable to establish the *minimum* wind velocity. Anyone know more about this?
I think that the answer to the question [Dan] posed in the screenshot lies in the rapid advancement in technology.
Back in the 1970s, radar, instrumentation, and satellite technology were all much less sophisticated; thus the tools that were being used in tornado research back then were much more primitive (at least by today’s standards). Tornado chasers in the early days of organized storm chasing (including the NSSL chase program) had to use bulky mobile phone units (we called them ‘bricks’) to frequently “check in” to get guidance from the radar meteorologist at NSSL. By the mid-1990s, chase vehicles were fully equipped with not only continuous wireless communication, but also on-board real-time radar [e.g., Project VORTEX and the Doppler on Wheels (DOW)].
The same thing was happening with photogrammetry over this two-decade period. Back then, we were using 8mm and 16mm celluloid movies, whose images were often blurry and very shaky, and we employed very primitive manual tracing methods using the projection of these film images onto tracing paper attached to a darkroom wall as a screen! As you can imagine, errors in windspeeds were quite large, although difficult to quantify without statistical, objective-analysis routines (which also were very cumbersome to employ with photogrammetric analysis methodology). By the late 1990s, however, computer technology had advanced sufficiently that reasonably accurate, interactive algorithms had been developed to automate all these basic manual steps and compute tornado windspeeds on a personal computer, saving the researcher a tremendous amount of time and manual input. Just imagine what lies ahead using the evolving AI technology we now have!
The reason we don’t hear much nowadays about photogrammetrically-derived windspeeds is that these can be determined to a high degree of accuracy directly from very clear, gyroscopically-stabilized video imagery in a matter of seconds by a computer equipped with proper analytical software. Like all trends with advancing technology, tornado windspeeds are no longer the “mystery” that they once were back in the nascency of tornado research. The “EF” tornado windspeed scale has also evolved with these same advancements in technology, combined with advancements in structural wind-engineering research over the same period (over a half-century now).
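[Moderator's note: for readers curious about the geometry behind what Randy describes, here is a minimal sketch of the basic photogrammetric calculation. It assumes hypothetical values throughout (the function name, focal length, pixel pitch, and range are all illustrative, not from any actual analysis), and it recovers only the motion component perpendicular to the line of sight, which is why such estimates are minimum windspeeds, as the original question notes.]

```python
def feature_speed_mph(pixel_shift, frames_elapsed, fps,
                      focal_length_mm, pixel_pitch_mm, range_m):
    """Estimate the speed of a debris/condensation feature tracked on film or video.

    Assumes a known camera-to-tornado range and motion roughly
    perpendicular to the line of sight (small-angle approximation),
    so the result is a lower bound on the true wind speed.
    """
    # Convert the feature's displacement in pixels to an angle (radians)
    angle_rad = pixel_shift * pixel_pitch_mm / focal_length_mm
    # Arc length swept at the tornado's distance
    distance_m = range_m * angle_rad
    # Time between the two measured frames
    elapsed_s = frames_elapsed / fps
    speed_ms = distance_m / elapsed_s
    return speed_ms * 2.23694  # m/s -> mph

# Illustrative numbers only: a feature shifting 120 pixels over 10 frames
# of 60 fps video, shot at 50 mm focal length on a 5-micron-pitch sensor,
# with the tornado 2 km away.
print(round(feature_speed_mph(120, 10, 60, 50.0, 0.005, 2000.0), 1))
```

The same arithmetic underlies both the old tracing-paper method and today's software; what changed is that feature tracking and stabilization are now automated rather than done frame by frame by hand.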
As Joe may still recall, during the defense of my master’s thesis, my thesis committee asked me to defend a statement I had made that “tornado windspeeds will likely not exceed 350 mph,” with little or no objective proof to justify that statement. I made that statement back in 1976, based solely upon verbal communications and largely-unpublished material from other researchers I had informally contacted (Jerome Blechman at the U of Wisconsin, and the Texas Tech Wind-Engineering group, for example). In those early days of tornado structural-damage research, the “gut feeling” at that time was that most tornado damage that was being observed could be explained with windspeeds at or below the 300 mph velocity range, as compared with a prevailing thought that tornado winds could possibly exceed 500 mph or even Mach 1. There was even some talk out of Fujita’s camp about adding an “F-6” category!
Luckily for me, my statement has successfully stood the test of time as tornado structural research has proven this observation to be accurate, so much so that Fujita’s original “F-scale” was replaced by the “Enhanced Fujita” (EF) scale in subsequent years, reflecting an overall reduction in tornado windspeeds based upon observed tornado structural damage. The EF scale is still in use today.
On May 3, 1999, Dr. Josh Wurman and the DOW research crew recorded a boundary-layer (near-surface) windspeed of 318 mph. That is still the standard used 25 years later for the “maximum” windspeed expected for an EF-5 tornado. Of course, there is no reason to expect that this “maximum windspeed” won’t be superseded in the future, but it has also stood the test of time.
All in all, we have a pretty good handle presently on tornado windspeeds, just as we know very well today why tornadoes form, how they form, and how they are structured. The next frontier of research will be how to prevent them from forming in the first place, something that is likely a very long way off, due to the scale of forces at work. But with human ingenuity, we can never say “never”...
Randy Zipser