NWS Joplin Service Assessment is Out

  • Thread starter: Mike Smith
As an "end user" we have no control over the VCP, so I couldn't answer that myself... However the team that came to the conclusion are all NWS mets who deal with VCP issues daily, so I'll take their word for it. I think your two "options" are close to the same.
 
I have spoken with at least ten GA and airline pilots along with one FAA aviation weather official, and all say they would gladly give up WSR-88D products like layer composite reflectivity and 5-minute echo tops in return for more frequent volume scans in the lower levels (i.e., tilts 0.5° to 3.5°).

I think it is past time for the Tri-Agencies to reevaluate the volume scan strategy. We need more frequent lower level scans in major tornado situations (where planes are avoiding the area based on their on-board radars) and we cannot wait for phased-array radar.
 
Joplin was badly hit in an area where a lot of housing had been built some years ago to not the most stringent of standards, and the city generally lacked basements to act as shelters. Realistically, though, could anything survive an F5? Well, I watched a TV broadcast around the same time (not sure of the place) where a tornado had just passed over and reduced everything to matchsticks. Then, just as the helicopter pilot zoomed in on the damage, a door in the ground opened and a family emerged to gaze at the devastated landscape.

Dr. Forbes of TWC has joined the voices calling for more tornado shelters. Yet as I pointed out in an earlier post, when someone canvassed opinion about the lack of shelters in Joplin just before the tornado hit, there was a lot of hostility to spending any public money on them. There were indeed voices saying they would rather be responsible for their own lives than see public money spent on shelters. And when you look at the debate on shelters, there is a lot of misinformation about how much they cost, where they can be built, etc.

Another way forward in the future is to tighten building codes in at-risk areas, both in type of construction and in use of shelters. After all, they have earthquake codes in LA and hurricane codes in Florida. There's no point, we are told by the homebuilding and insurance sectors, because a tornado rarely if ever hits the same place twice and your chances of being a victim are infinitesimal. Maybe that used to be true, but there are more people now living in at-risk areas like "Dixie Alley." With better recording of tornadoes in recent years, we are starting to develop some accurate statistical data about their impact. Shouldn't actual and potential casualty figures also be factored in?
 
Just to add, in the news yesterday: different risk, similar mindset

"A British swimmer lost both legs after he was attacked by a great white shark off a South African beach after ignoring warning signs. It was understood that shark warning flags were flying on the beach. However, a siren that was normally sounded to warn of predators did not go off because there was a power cut at the time.

Witnesses said three large great whites had been seen in the area 90 minutes before the attack and on previous days. Shark spotters employed by the local council said Mr Cohen often swam at the beach and had been warned in the past about the presence of sharks."
 
I have spoken with at least ten GA and airline pilots along with one FAA aviation weather official, and all say they would gladly give up WSR-88D products like layer composite reflectivity and 5-minute echo tops in return for more frequent volume scans in the lower levels (i.e., tilts 0.5° to 3.5°).

I think it is past time for the Tri-Agencies to reevaluate the volume scan strategy. We need more frequent lower level scans in major tornado situations (where planes are avoiding the area based on their on-board radars) and we cannot wait for phased-array radar.

I think you are oversimplifying the situation. Volume scans are doing a lot more than just creating the additional products. High-resolution numerical models, such as the HRRR, are using radar data assimilation in their initial conditions. Losing volume scans hurts that. Not to mention forecasters can use these additional levels in warning environments.

Looking at upper tilts also offers forecasters a view of "what's coming." Strengthening cores aloft can indicate a strengthening updraft and help forecasters anticipate where their next problem areas will be. If we cut out the upper tilts in favor of more lower tilts, we run the risk of forecasters becoming so focused on event A that they miss event B. If you don't believe this happens, just take a look at the OKC television stations during the 24 May 2011 outbreak. (Yes, it isn't NWS personnel, but the narrow-sightedness is still applicable.) Another application of the upper tilts is in a cyclic tornado situation. As the old mesocyclone occludes and weakens and the new mesocyclone begins to take shape downstream, you can oftentimes see the parent updraft (as inferred by dBZ values on upper tilts) redevelop downstream before the velocity field responds. This gives a good radar analyst additional lead time in knowing where the next tornado might develop.

Then there is the case of a weak tornado vs. a devastating hail storm. Do we focus on the tornado? The more devastating hail storm? Who is going to make that decision? Who bears liability for a wrong decision?

Another concern with moving toward just "spinning the radar faster" for faster updates is that it increases the wear and tear on the radar immensely. This drives up maintenance costs and increases the chance the radar might break during an event. Not to mention that spinning the radar faster decreases its sensitivity, meaning things such as outflow boundaries, gust fronts, etc. could be missed. When examining a mesocyclone, you better believe I want to know if the RFD has undercut it.

Each local office determines which VCP to run its radar in. This is a rare instance where the bureaucracy is letting decisions be made in the field by those most affected by them. If you want something changed, we should be advocating for better training of forecasters on what the differing VCPs have to offer. Unfortunately, the NWS is being forced to go the other way and cut back on training due to budget issues.
 
Patrick,

We respectfully disagree on this issue.

If it is vital to initialize the mesoscale models (NCEP does not seem to agree, since they frequently do not run or are far behind others), then SGF (to use May 22 as an example) can send up a rawinsonde every 1 or 2 hours whenever the very rare combination of >4000 J/kg of CAPE and SRH >175 presents itself. That would analyze initial conditions for the model better than radar alone, since temperature, humidity, and pressure would be added. The more distant locations (i.e., Pittsburg, KS, Lake of the Ozarks, West Plains, FYV, etc.) under the radar's umbrella would still be measured by the lower tilts as they are now.

You do not need to spin the radar faster to get more low-level coverage if you only use the lower-four tilts. No additional wear and tear.
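
To put rough numbers on that claim, here is a back-of-the-envelope sketch. The scan time and cut counts below are illustrative assumptions, not actual VCP specifications, and the "equal time per cut" simplification ignores the extra rotations the lowest split cuts require:

```python
# Back-of-the-envelope only: assumes every elevation cut takes the same
# amount of antenna time, which is a simplification of real VCPs.
FULL_VOLUME_MINUTES = 4.5   # illustrative full volume scan time in "storm mode"
CUTS_IN_FULL_VOLUME = 14    # illustrative number of elevation cuts in that volume
LOW_LEVEL_CUTS = 4          # roughly the 0.5 to 3.5 degree tilts

minutes_per_cut = FULL_VOLUME_MINUTES / CUTS_IN_FULL_VOLUME
low_level_revisit = minutes_per_cut * LOW_LEVEL_CUTS

print(f"~{minutes_per_cut:.2f} min per cut at the current rotation rate")
print(f"0.5-degree revisit using only the lowest four cuts: ~{low_level_revisit:.1f} min "
      f"(vs. ~{FULL_VOLUME_MINUTES} min for the full volume)")
```

With those assumed figures, repeating only the lowest four cuts brings the 0.5-degree revisit interval down to well under two minutes without any change in rotation rate.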

Then there is the case of a weak tornado vs. a devastating hail storm. Do we focus on the tornado? The more devastating hail storm? Who is going to make that decision? Who bears liability for a wrong decision?

Two comments: First, I'm not advocating doing this in the situation you pose above. Presumably, a local NWS office is sufficiently threat-aware to know the difference between a May 22 and, say, the giant hail storm in ICT on Sept. 18, 2010.

Second, I don't understand the seeming preoccupation with liability among government-employed meteorologists. It is very hard to get permission from a federal judge to sue the federal government, and, almost certainly, a decision as to how the radar is run would fall under the federal government's "discretionary function exception" to the Federal Tort Claims Act, which says that the government cannot be successfully sued for exercising good-faith discretion (choosing one volume scan over another) in carrying out its operations.

If your concern is about private sector meteorologists, this scenario wouldn't give me two seconds' thought. I wouldn't see any liability at all from choosing to monitor for tornadoes more closely than a hailstorm in a situation similar to May 22, 2011.

Mike
 
Mike, I made many points and you seemed to skip over a lot of them...

If it is vital to initialize the mesoscale models (NCEP does not seem to agree, since they frequently do not run or are far behind others), then SGF (to use May 22 as an example) can send up a rawinsonde every 1 or 2 hours whenever the very rare combination of >4000 J/kg of CAPE and SRH >175 presents itself. That would analyze initial conditions for the model better than radar alone, since temperature, humidity, and pressure would be added. The more distant locations (i.e., Pittsburg, KS, Lake of the Ozarks, West Plains, FYV, etc.) under the radar's umbrella would still be measured by the lower tilts as they are now.
A single rawinsonde gives information at a (theoretical*) point, whereas the radar can give this information over a much larger area. Furthermore, a rawinsonde every hour does not help with using EnKF for initializations. WoF work has shown that it takes a certain number of radar scans for the model to "take" them; this is one of the benefits of MPAR, since we can get the required number of volumes in less time. In any event, this is admittedly the weakest argument. I merely included it as an example of purposes for volume scans other than composite reflectivity...

* I said "theoretical point" because rawinsondes get ingested into the model at a single grid point, even though they are being advected around by the environmental wind and are not guaranteed to be valid at the point they are representing in the model. This is especially true near atmospheric boundaries -- where a few kilometers can make a huge impact on what the sonde observes.
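
To illustrate the cycling point, here is a minimal toy sketch of a perturbed-observation EnKF update for a single scalar state variable. It is not WoF or any operational system (which also run a model forecast between cycles); the numbers are illustrative assumptions, chosen only to show that the ensemble closes the gap to the "truth" gradually over several assimilation cycles rather than after one observation:

```python
import numpy as np

rng = np.random.default_rng(0)

truth = 5.0            # the "real" state (think: low-level rotation at one grid point)
n_members = 20
obs_error_sd = 1.0

# Ensemble starts well away from the truth, with modest spread.
ensemble = rng.normal(0.0, 1.0, n_members)

def enkf_update(ensemble, obs, obs_sd, rng):
    """One perturbed-observation EnKF analysis step for a scalar state."""
    prior_var = np.var(ensemble, ddof=1)
    gain = prior_var / (prior_var + obs_sd ** 2)            # scalar Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_sd, ensemble.size)
    return ensemble + gain * (perturbed_obs - ensemble)

# Assimilate one new "radar scan" per cycle (forecast step omitted in this toy).
for cycle in range(1, 7):
    obs = truth + rng.normal(0.0, obs_error_sd)
    ensemble = enkf_update(ensemble, obs, obs_error_sd, rng)
    print(f"cycle {cycle}: mean={ensemble.mean():.2f}, spread={ensemble.std(ddof=1):.2f}")
```

Even in this stripped-down form, the ensemble mean only approaches the truth after a handful of cycles, which is the sense in which a model needs a series of scans before it "takes" the radar data.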


You do not need to spin the radar faster to get more low-level coverage if you only use the lower-four tilts. No additional wear and tear.
You do if you want to keep the higher tilts, as I am advocating. I use those higher tilts quite frequently, even in tornado environments. A lot of useful and important information can be gleaned from them if people are willing to use them.



I'm not advocating doing this in the situation you pose above. Presumably, a local NWS office is sufficiently threat-aware to know the difference between a May 22 and, say, the giant hail storm in ICT on Sept. 18, 2010.
But what about a case like 10 May 2010, when Norman was under the gun with what became an EF-4 tornado while 5"+ hail was falling in north Norman and Moore? Don't the residents of Moore deserve a heads-up that 5"+ hail is coming? Furthermore, environment aside, how can you tell from looking at radar data how strong a tornado might be? I've seen plenty of rotation signatures that were stronger than the Joplin one that didn't produce tornadoes. I've also seen signatures that were weaker produce strong tornadoes. Where do we draw the line? Maybe with environments like 27 April 2011 it might be easy to know, but the environment on 22 May 2011 was hardly screaming EF-5 tornado. And on 27 April 2011, what benefit would more frequent low-level updates have provided? In that kind of environment you tend to think tornado warning first, severe second. Sure, we'd be able to see the parent circulation move through cities with more temporal precision, but how would that have impacted the warning services?



Second, I don't understand the seeming preoccupation with liability among government-employed meteorologists. It is very hard to get permission from a federal judge to sue the federal government, and, almost certainly, a decision as to how the radar is run would fall under the federal government's "discretionary function exception" to the Federal Tort Claims Act, which says that the government cannot be successfully sued for exercising good-faith discretion (choosing one volume scan over another) in carrying out its operations.
First of all, I'm not a government-employed meteorologist, so I don't know what that first line is intended to suggest. Secondly, I never once mentioned lawsuits, as that thought had never entered my mind. What about service assessments? Congressional inquiries? Investigations by the Inspector General? Even though people might not outright lose their jobs in the government over making poor decisions, they can certainly be pushed out. Don't believe me? Ask the former MIC in Tulsa what happened to him after the 21 April 1996 Fort Smith, AR tornado.

At the very least, these "investigations" are time consuming and a drain on all involved. They take people away from their normal jobs and weigh on the overall budget. So, yes, I think it is valid to be thinking about these things.


If your concern is about private sector meteorologists, this scenario wouldn't give me two seconds' thought. I wouldn't see any liability at all from choosing to monitor for tornadoes more closely than a hailstorm in a situation similar to May 22, 2011.
I'll be honest: I'm getting really frustrated at the number of people talking about how great the environment on 22 May 2011 was and how obvious it was that a major tornado was going to happen. Sure, in hindsight we know what happened, but that environment occurs a lot more frequently than people realize without major tornadoes occurring. How are we to know that this time it's the real deal?

But let's assume for a second you are correct and we had turned the radars on for more low-level sampling. What good would that have done? Other than letting us see every 2-3 minutes instead of 4-5 that a tornado was possibly going through Joplin, what could we have done differently? Sure, maybe you could provide more specific warning guidance, but chances are those in the path won't get it as the power will probably have already been cut. Furthermore, do you trust the radar enough to tell people 2 miles north of the tornado they are safe? I certainly don't -- especially at the ranges most people are covered by the radars.

More frequent low-level updates have the greatest impact in marginal settings when a forecaster is on the fence regarding whether a tornado is developing -- not when the radar signature is already quite intense. So, once again I ask, how are you going to know ahead of time what impact more frequent low-level updates will have?


Ultimately, it is my opinion that if you want more frequent low-level updates, and you are going to argue that aviation forecasters want the same, then the TDWR program should be re-evaluated. Those radars are specifically designed for that purpose.
 
I'd be remiss if I failed to point out that CASA is attempting to develop scanning strategies that compromise between what Mike and I are advocating. They use an adaptive scanning strategy that gives more frequent low-level updates but still completes a volume scan. This is done by dropping back down to the lower tilts while working through the volume. The downside is that the constant up-and-down motion does have a negative impact on the gears, causing them to wear out more quickly.
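
For anyone trying to picture what such an interleaved strategy looks like, here is a minimal sketch of an elevation sequence that drops back down to the lowest tilts between batches of upper tilts. The tilt angles, grouping, and function name are my own illustrative assumptions, not CASA's or the WSR-88D's actual scan definitions:

```python
# Illustrative interleaved scan sequence: refresh the lowest tilts between
# small batches of upper tilts, so 0.5-degree data updates several times
# per complete volume.
LOW_TILTS = [0.5, 0.9, 1.3, 1.8]                       # degrees (assumed values)
UPPER_TILTS = [2.4, 3.1, 4.0, 5.1, 6.4, 8.0, 10.0,
               12.5, 15.6, 19.5]                       # degrees (assumed values)

def interleaved_volume(low, upper, upper_per_pass=3):
    """Return the order of elevation cuts for one complete, interleaved volume."""
    sequence = []
    for i in range(0, len(upper), upper_per_pass):
        sequence.extend(low)                           # drop back down: refresh low levels
        sequence.extend(upper[i:i + upper_per_pass])   # then chip away at the upper levels
    return sequence

print(interleaved_volume(LOW_TILTS, UPPER_TILTS))
# In this example the low tilts are revisited four times per complete volume,
# at the cost of the extra up-and-down antenna motion noted above.
```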
 
I recall seeing rapid 0.5 degree scans on KOHX for a short time during the January 17-18, 1999 event (RE: http://ftp3.ncdc.noaa.gov/pub/has/HAS002310086/ 0210z – 0220z). Was this a chosen scan strategy or some sort of problem with the 88D? I noticed velocity products were missing, which makes me think it was a problem with the radar. This is the only time that I have seen this.

High resolution numerical models, such as the HRRR, are using radar data assimilation in their initial conditions. Losing volume scans hurts that.

Shouldn't the warning decision making process always have a higher priority than feeding a model with a full volume of radar data for a 6 hour forecast?

Not to mention forecasters can use these additional levels in warning environments.

The warning decision maker's priorities change as an event evolves. In the Joplin case, I would argue that higher tilts would be essential and way more important in the minutes leading up to the Joplin tornado than when the tornado was actually occurring. The warning decision maker may suddenly shift his focus and wish for more rapid low-level scans as the event is ongoing, even if he has to sacrifice upper tilts.

As an alternative (and continuing with the Joplin case as an example), the WDM would still have the KINX and KEAX radars for sampling higher up (the 0.5-degree elevation angle beam height from KINX over Joplin is about 7,500 ft). Although not ideal because of beam widening and other limitations related to distance, they would still prove sufficient in providing useful data to the WDM, and perhaps even to the high-resolution models, during the period in which such a low-level scanning strategy was employed for nearby KSGF.
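
For reference, that beam-height figure can be approximated with the standard 4/3-effective-Earth-radius textbook formula. The sketch below is a generic approximation, not an official NWS tool, and the 140 km range plugged in is an assumed round number for the KINX-to-Joplin distance rather than a measured value:

```python
import math

def beam_height_m(range_m, elev_deg, antenna_height_m=0.0,
                  k=4.0 / 3.0, earth_radius_m=6.371e6):
    """Beam centerline height above the radar, 4/3 effective Earth radius model."""
    ke_re = k * earth_radius_m
    theta = math.radians(elev_deg)
    return (math.sqrt(range_m ** 2 + ke_re ** 2
                      + 2.0 * range_m * ke_re * math.sin(theta))
            - ke_re + antenna_height_m)

# Assumed round-number range; adjust for the actual KINX-to-Joplin distance.
range_km = 140.0
h = beam_height_m(range_km * 1000.0, 0.5)
print(f"0.5-degree beam centerline at {range_km:.0f} km: ~{h:.0f} m (~{h * 3.281:.0f} ft)")
```

With that assumed range the formula gives a beam centerline a bit under 2.5 km above the radar, on the order of the 7,500 ft figure cited above.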

If you want something changed, we should be advocating for better training of forecasters on what the differing VCPs have to offer. Unfortunately, the NWS is being forced to go the other way and cut back on training due to budget issues.

It doesn't seem to be a budget issue. WDTB has a very good module explaining which VCP to use, yet WDMs still make poor decisions that go against the advice in this training.

http://wdtb.noaa.gov/modules/vcpTraining/index.html

But let's assume for a second you are correct and we had turned the radars on for more low-level sampling. What good would that have done? Other than letting us see every 2-3 minutes instead of 4-5 that a tornado was possibly going through Joplin, what could we have done differently? Sure, maybe you could provide more specific warning guidance, but chances are those in the path won't get it as the power will probably have already been cut. Furthermore, do you trust the radar enough to tell people 2 miles north of the tornado they are safe? I certainly don't -- especially at the ranges most people are covered by the radars.

More frequent low-level updates have the greatest impact in marginal settings when a forecaster is on the fence regarding whether a tornado is developing -- not when the radar signature is already quite intense. So, once again I ask, how are you going to know ahead of time what impact more frequent low-level updates will have?

True. But, would more scans have increased confidence in the event being high-end? Would that have led to enhanced wording that did not happen (i.e., use of Tornado Emergency)?
 
I recall seeing rapid 0.5 degree scans on KOHX for a short time during the January 17-18 1999 event

There is a "trick" that can be used to stop and restart a volume scan, which gets faster 0.5 updates at the loss of volume data.

And realize that the VCP recommendation wasn't just about timeliness, but also about the low-level tilt coverage, plus the velocity issues, neither of which would be fixed by dumping the volume scan.
 
I'll be honest: I'm getting really frustrated at the number of people talking about how great the environment on 22 May 2011 was and how obvious it was that a major tornado was going to happen. Sure, in hindsight we know what happened, but that environment occurs a lot more frequently than people realize without major tornadoes occurring. How are we to know that this time it's the real deal?

But let's assume for a second you are correct and we had turned the radars on for more low-level sampling. What good would that have done? Other than letting us see every 2-3 minutes instead of 4-5 that a tornado was possibly going through Joplin, what could we have done differently? Sure, maybe you could provide more specific warning guidance, but chances are those in the path won't get it as the power will probably have already been cut. Furthermore, do you trust the radar enough to tell people 2 miles north of the tornado they are safe? I certainly don't -- especially at the ranges most people are covered by the radars.

The live NWS Chat that afternoon/evening had to be extremely helpful for the local NWS meteorologists. I know that some of us were passing along information to the Springfield office as it was broadcast from chasers, media, and other sources. It quickly became apparent that this was a big event with widespread damage. Of course, radar can only tell us so much about what a storm is or is not doing on the ground. This is a great example of where live information was as useful as, if not more useful than, radar. There was little if any delay in the tower cam streams, the chaser streams, and even some of the ground-truth damage reports.

Of course, in some rural areas you don't have tower cams or other information such as chaser/OEM reports. I would be curious to know what type of phone calls the NWS was receiving during this event; perhaps there is a log or recording of those calls? I wonder how quickly the damage reports were being received via phone calls to the office.

And you are most likely right (that would have been a nice question to ask on the survey): the power probably went out several minutes before the tornado hit. A battery-backed NOAA All Hazards Weather Radio or AM/FM radio would have been of great use during that time.

Even with that said, according to some studies most people don't shelter until the last two minutes prior to a storm hitting. If this is indeed the case, would it have mattered? Could people have driven out of the path (as we have seen in some strong, long-tracked tornado events)? With it touching the ground and becoming a strong/violent tornado so quickly, I doubt many could have fled or would have had time to flee.

There are some events that will always haunt meteorologists - this is one of them. Not to mention disheartening.
 
I don't know how easily it could have been done, but I think the service assessment would have benefited from additional information regarding the actual availability of underground shelter relative to how many people needed shelter at the time the tornado hit. I know we're focused on warning interpretation and the resulting action people take, but, as has been said already, many people don't shelter until shortly before the tornado hits. In a city that size, if I had been in any given place and needed near-immediate underground shelter, how available would that have been to me? That's information I would've liked to see in the assessment, because with a tornado that size the immediacy and availability of shelter plays an important part in the resulting fatalities (or lack thereof).
 
The fact that the assessment found that people in SW Missouri were desensitized by too many warnings was not a surprise by any means. Mike, Patrick, maybe you could answer this question: for years I have watched SVR-warned storms from Tulsa or Wichita immediately go TOR-warned when they hit the NWS Springfield area. Why would Springfield so consistently go TOR-warned when the other offices have not done so? It has been so consistent that we could count down to the minute when a storm would get its TOR warning. Does anyone review the calls made by a local office on a regular basis?
 
I don't know how easily it could have been done, but I think the service assessment would have benefited from additional information regarding the actual availability of underground shelter relative to how many people needed shelter at the time the tornado hit.

Remember, an NWS service assessment is intended to assess how the NWS acted during an event and how the warning dissemination process worked. You're asking for great info, but it's well outside the role of a team of meteorologists. I would hope/expect that funding was also spent on other, non-NWS teams for that purpose, but it wouldn't be in an SA.
 
The fact that the assessment found that people in SW Missouri were desensitized by too many warnings was not a surprise by any means. Mike, Patrick, maybe you could answer this question: for years I have watched SVR-warned storms from Tulsa or Wichita immediately go TOR-warned when they hit the NWS Springfield area. Why would Springfield so consistently go TOR-warned when the other offices have not done so? It has been so consistent that we could count down to the minute when a storm would get its TOR warning. Does anyone review the calls made by a local office on a regular basis?


Yes, I believe what seems to be a history of over-warning for TOR by the SGF office played a role here. The other factor was the decision, dating to 1973 when a derecho moved through JLN and caused quite a few injuries, to sound the sirens for SVR warnings as well. Between the two, the sirens went off in JLN far too often.
 