Radar display - blocky, smooth or gradient smooth

I'd like to discuss what others on here prefer when viewing severe weather.

Is one more accurate than the other? Or is it just a matter of personal preference?

It seems like most TV weather operations are using gradient smoothing these days. Is this more of a gimmick because it looks better to the viewing audience than a blocky radar display?

Thoughts?
 
For Level 3 stuff I prefer a little smoothing like on GRL3. WXWORX smoothing is WAY too simplified, with not enough levels of color.
 
Smoothing NIDS data is not very useful because of the low resolution of the data to begin with. So you're making something already a bit smoothed even smoother. Doing the same with Level II is much more accurate because of the precision in L2 data.

Whether smoothed or not, you aren't looking at reality. Storms are not made of 1km x 1deg bins. So showing them blocky is an accurate representation of the data, but not the storm. Smoothing may give a better idea of what's really going on if the routines are done well and it's using Level II data.
 
Smoothing.... no thanks. Can't stand it. I'd rather see the raw data itself. Placing the data into bins is necessary for radar display.

Aaron
 
With smoothing you always lose information. Smoothing is basically creating a matrix of systematically averaged points, creating a "blurring" of the original data. It's extremely difficult to do this without losing an appreciable amount of information.

The resolution of the WSR-88D data isn't optimal to begin with, so doing any sort of smoothing degrades interpretation. On reflectivity, a few bins may make all the difference in the identification of a process going on in the storm. Smoothed images may look good, but that's about it.
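
To make the "blurring" concrete, here's a minimal Python sketch of that kind of systematic averaging (the dBZ values and the 3-point kernel are made up for illustration; this isn't any particular program's routine):

```python
import numpy as np

# A 1-D line of reflectivity bins (dBZ) through a storm core.
# Hypothetical values, for illustration only.
dbz = np.array([20, 25, 30, 62, 58, 35, 28, 22], dtype=float)

# A simple 3-point moving average -- a matrix of systematically
# averaged points, exactly the kind of thing that blurs the data.
kernel = np.ones(3) / 3.0
smoothed = np.convolve(dbz, kernel, mode="same")

print("raw peak:     ", dbz.max())                  # 62.0
print("smoothed peak:", round(smoothed.max(), 1))   # 51.7 -- the core is damped
```

The 62 dBZ core drops to roughly 51.7 dBZ after a single pass, which is exactly the kind of detail loss that can hide a process going on in the storm.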

Tim
 
I suppose you lose information, but isn't smoothed data more realistic? I've never seen one of those rectangular rain shafts.
 
With smoothing you always lose information. Smoothing is basically creating a matrix of systematically averaged points, creating a "blurring" of the original data.
Especially with radial velocity data. Determining exact gate-to-gate shears requires that the data remains in its native polar coordinate format.
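
As a rough illustration of why (the velocity numbers below are hypothetical, not real Level II data): gate-to-gate shear is just the difference between the same range gate on two adjacent radials, so any averaging across azimuths dilutes the couplet.

```python
import numpy as np

# Hypothetical radial velocity data in its native polar layout:
# rows are adjacent azimuths, columns are range gates (m/s,
# positive = outbound). A tight velocity couplet sits at gate 2.
velocity = np.array([
    [ 5.0,  12.0,  24.0],   # radial at one azimuth
    [ 4.0, -10.0, -26.0],   # the adjacent radial
])

# Gate-to-gate delta-V: same range gate, neighboring radials.
delta_v = velocity[0] - velocity[1]
print(delta_v)  # [ 1. 22. 50.] -- a 50 m/s gate-to-gate shear signature
```

Smooth those two radials together and the +24/-26 pair averages toward zero, taking the signature with it.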

Case-in-point - NWS mets demand that the data remain in their highest resolution possible (i.e., "8-bit" data) in order to better diagnose severe weather signatures and trends for warning decision making. It's the best way to grow expertise. Word to chasers - learn from the best.
 
I suppose you lose information, but isn't smoothed data more realistic? I've never seen one of those rectangular rain shafts.

Storms are not rectangular, obviously. You lose information about what the storm looks like in reality regardless of whether smoothing is used. Just the nature of how NEXRAD turns the radar returns into pixels causes data loss.

Some types of smoothing aren't exactly "blurring" the data. The way GR3 "smooths" the data is really an interpolation, where each display pixel is computed from the surrounding data points.
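
As a sketch of the difference, here's generic bilinear interpolation (a guess at the general idea, not GR3's actual routine, which isn't public): unlike an averaging blur, it passes through the original values at the bin centers.

```python
import numpy as np

def bilinear(grid, x, y):
    """Estimate a value at fractional position (x, y) from the four
    surrounding grid points. Generic illustrative scheme; not GR3's
    actual routine."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    top = grid[y0, x0] * (1 - fx) + grid[y0, x0 + 1] * fx
    bot = grid[y0 + 1, x0] * (1 - fx) + grid[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

# Four neighboring reflectivity bins (dBZ), made-up values.
bins = np.array([[30.0, 50.0],
                 [40.0, 60.0]])

print(bilinear(bins, 0.0, 0.0))  # 30.0 -- exact at a bin center
print(bilinear(bins, 0.5, 0.5))  # 45.0 -- blended in between
```

Because it hits the bin values exactly at the bin centers, this is better described as reconstruction than blurring.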

Smoothed images may look good, but that's about it

This is not just eye-candy. There have been several events where smoothing made it easy for me to find features that I would have been unable to see unsmoothed. The smoothing algorithms are more of a reconstruction function than eye-candy... that was their intention, and it does its job in many cases.

It's very clear that neither of these representations of the data is going to be a perfect representation of the storm. There really is no way to declare either of them the "best", as they both change the data in their own ways. This is why some programs make it very easy to turn smoothing off and on as needed.
 
"Word to chasers - learn from the best."

That's why we are letting people know that interpolation, especially from GR software, can add value...

Many smoothing routines are not good, but if you want the best representation of what the storm really looks like - interpolation done right can be better than bin data.
 
I have seen a smoothing feature on someone else's computer where it's smoothed SO much that you cannot see any little details because all the colors are blended together. On the other hand, there are good smoothing schemes that still show detail and do not blend the colors at all. I will usually use smoothing, the type that preserves the most detail, but I will alternate to raw data as well.
 
Rob,

With the consumer-level software that is available to "most" chasers, how do you know if the smoothing is "done right"?

Granted, smoothing certainly has a better visual appeal, but the pixel blocks seem to give a better representation of what's going on with the storm. Maybe a better description would be "I can pick out the highlights better with standard views". Of course, I'm looking at Level III information rather than the higher-resolution Level II.

As a chaser in the field, just how accurate do you really need your information to be? You're not going to get it in "real time" unless you have your own Doppler (we won't go there!), as opposed to the five-minute-old info from NOAA. If you're close in to a storm, your eyes and ears will give you the best information available.

Now if you're trying to guide a chaser into a storm, then obviously Level II is better "if" you can get it fast enough to give good updates. Again, though, the chaser you are reporting to is going to have to make decisions based on what he/she is seeing at the time on the ground. These decisions are based on experience and knowledge of storm evolution. No radar or software will ever change that.

If I'm setting up a radar display for someone with no knowledge of radar and how it works, it would be smoothing. If I'm setting up for data and guidance, it's pixel blocks.
 
Interesting discussion. I will be sure to include both smoothed and unsmoothed images when I do my writeups to make everyone happy.

As somebody whose computer sucks enough that smoothing isn't an option on GR Level 3, I use raw data constantly and it works out just fine for me.

On the other hand, IMO smoothing looks nice for case studies (GR Level 2... don't ask me why I can smooth in one but not the other, because I have no clue), but I will be sure to toggle it on and off for those with different opinions.
 
It seems to me that there's a vibe that some of us are bashing GR3... that is not true, and it's a great program. I think it might help if I explain what is going on with a graph, as shown below:

[Attachment: smoothing.gif, a cross section of radar reflectivity vs. distance with interpolation curves (Graphs A and B) overlaid in black]

The X axis is distance and the Y axis is radar reflectivity, so here we're seeing a cross section through a storm. Overlaid in black are different kinds of wave functions we generate in order to map and color the intermediate pixels at arbitrary points within the storm.

Graph A is what we would see if we smoothed the data in a linear fashion between the points, with no data loss at all. This is 100% accurate at the reflectivity bin centers, but a matrix of the data would still show distinct blockiness of the reflectivity bins. Also, since the storm itself isn't blocky, this scheme is unrealistic -- common sense shows that there are significant computed errors in between the reflectivity bins.

Graph B is the result if we apply smoothing. This creates a sort of wavelike function and produces nice output. It also more closely models the "shape" of the storm. But here we can't get the wave to exactly match the data at reflectivity bin center points -- which is a requirement of accurately reproducing the storm shape, otherwise we're just getting a computer-generated guess. We can get very close, though, and get a really accurate model if we apply a lot of computational power and try techniques like error correction (where we map out the difference, smooth that, and "correct" the first guess). There is a whole branch of mathematics dedicated to this kind of thing, starting with simple things like Haltiner smoothers.
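
For anyone curious, here's a minimal Python sketch of that error-correction step (a Gaussian smoother stands in for whatever routine a real display would use, and the bin values are invented):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Made-up reflectivity bins along one radial (dBZ).
bins = np.array([20.0, 30.0, 62.0, 55.0, 33.0, 24.0])

# First guess: a heavy smooth (like Graph B). Nice shape, but it
# no longer matches the data at the bin centers.
guess = gaussian_filter1d(bins, sigma=1.0)

# Error correction: map out the misfit at the bin centers, smooth
# the misfit itself, and add it back to "correct" the first guess.
residual = bins - guess
corrected = guess + gaussian_filter1d(residual, sigma=1.0)

print(np.abs(bins - guess).max())      # misfit of the first guess
print(np.abs(bins - corrected).max())  # smaller after one correction pass
```

Each correction pass pulls the curve back toward the bin-center values while keeping it smooth in between, at the cost of more computation.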

The question here is probably whether the observed errors in GR3 are significant (I'm speaking of "errors" mathematically, GR3 is a good program). If we're talking velocity products, then yes, small errors are going to be completely unacceptable. With reflectivity, though, it is arguable over just how much mathematical error is acceptable, and perhaps whether the "right brain" can compensate, having seen so many storm shapes in the past.

There also seems to be a question of how accurate we need the data to be. That's a red herring, as the data coming from the storm is accurate; it's just that the reflectivity bins can be volumetrically large and come with the typical limitations of any radar data. It is possible to work within that constraint and reproduce that accuracy; the question is how to do it, how much processing power to apply, and whether the displayed errors of a given scheme are significant enough to be of concern to a forecaster. Also the question is whether to do it within the brain and visualize the raw bins themselves, or work with the smoothed data. Since no one has really done any formal studies on this yet it will probably remain a matter of personal taste.

Tim
 