Radar display - blocky, smooth or gradient smooth

I'd like to discuss what others on here prefer when viewing severe weather.

Is one more accurate than the other? Or is it just a matter of personal preference?

It seems like most TV weather stations are using gradient smoothing these days. Is this more of a gimmick that looks better for the viewing audience than a blocky radar display does?

For Level 3 stuff I prefer a little smoothing, like in GRL3. WXWORX smoothing is WAY too simplified, with not enough levels of color.
Smoothing NIDS data is not very useful, because of the low resolution used to begin with. So you're making something already a bit smoothed even smoother. Doing the same with Level II is much more accurate because of the precision in L2 data.

Whether smoothed or not, you aren't looking at reality. Storms are not made of 1 km x 1 deg bins. So showing them blocky is an accurate representation of the data, but not of the storm. Smoothing may give a better idea of what's really going on if the routines are done well and it's using Level II data.
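To put a number on how big those bins actually are: the cross-beam width of a 1-degree bin is just arc length, range times the angular width in radians. A rough sketch (pure Python, illustrative values only):

```python
import math

def bin_width_km(range_km, beam_deg=1.0):
    """Cross-beam width of a radar bin at a given range.

    Arc length = range * angular width in radians.
    """
    return range_km * math.radians(beam_deg)

# A 1-degree bin is under 1 km wide at 50 km but ~3.5 km wide at 200 km,
# so the same "pixel" covers far more of the storm at long range.
print(round(bin_width_km(50), 2))   # 0.87
print(round(bin_width_km(200), 2))  # 3.49
```

Same data format, very different physical footprint depending on where the storm sits relative to the radar.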
Smoothing.... no thanks. Can't stand it. I'd rather see the raw data itself. Placing the data into bins is necessary for radar display.

Originally posted by Aaron Kennedy
Smoothing.... no thanks. Can't stand it. I'd rather see the raw data itself. Placing the data into bins is necessary for radar display.


I agree. I like looking at pixels; for some reason I can "decipher" it better if I see, pixel for pixel, the actual data values.
With smoothing you always lose information. Smoothing is basically creating a matrix of systematically averaged points, creating a "blurring" of the original data. It's extremely difficult to do this without losing an appreciable amount of information.

The resolution of the WSR-88D data isn't optimal to begin with, so doing any sort of smoothing degrades interpretation. On reflectivity, a few bins may make all the difference in the identification of a process going on in the storm. Smoothed images may look good, but that's about it.
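To illustrate that information loss, here's a minimal sketch (plain Python, made-up dBZ values) of what even a mild 3-point average does to a narrow high-reflectivity core:

```python
def moving_average(values, window=3):
    """Simple boxcar smoother: replace each point with the mean of its neighbors."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# A narrow 60 dBZ core embedded in 40 dBZ rain...
ray = [40, 40, 60, 40, 40]
print(moving_average(ray))
# ...the peak drops from 60 to ~46.7 dBZ: the very feature a warning
# forecaster cares about is averaged away.
```

The "few bins that make all the difference" are exactly the ones an averaging kernel flattens first.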

I suppose you lose information, but isn't smoothed data more realistic? I've never seen one of those rectangular rain shafts.
With smoothing you always lose information. Smoothing is basically creating a matrix of systematically averaged points, creating a "blurring" of the original data.
Especially with radial velocity data. Determining exact gate-to-gate shears requires that the data remains in its native polar coordinate format.

Case in point - NWS mets demand that the data remain at the highest resolution possible (i.e., "8-bit" data) in order to better diagnose severe weather signatures and trends for warning decision making. It's the best way to grow expertise. Word to chasers - learn from the best.
I suppose you lose information, but isn't smoothed data more realistic? I've never seen one of those rectangular rain shafts.

Storms are not rectangular, obviously. You lose information about what the storm looks like in reality regardless of whether you use smoothing. Just the nature of how NEXRAD turns the radar returns into pixels causes data loss.

Some types of smoothing aren't exactly "blurring" the data. The way GR3 "smooths" the data is more of an interpolation where each pixel is interpolated based on the surrounding pixels.
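The distinction matters: interpolation can leave the measured bin values untouched and only fill in between them. A one-dimensional sketch of that idea (hypothetical dBZ values, not GR3's actual algorithm):

```python
def upsample_linear(samples, factor):
    """Linearly interpolate between data points without altering the points
    themselves -- every original value survives exactly at its own position."""
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for k in range(factor):
            t = k / factor
            out.append(a + t * (b - a))
    out.append(samples[-1])
    return out

bins = [20.0, 55.0, 35.0]
fine = upsample_linear(bins, 4)
# The original values reappear untouched at indices 0, 4 and 8:
print(fine[0], fine[4], fine[8])  # 20.0 55.0 35.0
```

Contrast that with an averaging filter, which rewrites the measured values themselves.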

Smoothed images may look good, but that's about it

This is not just eye candy. There have been several events where smoothing has personally made it easy to find features that I would have been unable to see unsmoothed. The smoothing algorithms are more of a reconstruction function than eye candy... that was their intention, and they do their job in many cases.

It's very clear that neither of these representations of the data is going to be a perfect representation of the storm. There really is no way to declare either of them the "best," as they both change the data in their own ways. This is why some programs make it very easy to turn smoothing off and on, as needed.
"Word to chasers - learn from the best."

That's why we are letting people know that interpolation, especially from GR software, can add value...

Many smoothing routines are not good, but if you want the best representation of what the storm really looks like - interpolation done right can be better than bin data.
I have seen a smoothing feature on someone else's computer where it's smoothed SO much that you cannot see any little details, because all the colors are blended together. On the other hand, there are good smoothing routines that still show detail and do not blend the colors at all. I will usually use smoothing, the type that preserves the most detail, but I will alternate to raw data as well.

With consumer-level software that is available to "most" Chasers, how do you know if the smoothing is "done right"?

Granted, smoothing certainly has better visual appeal, but the pixel blocks seem to give a better representation of what's going on with the storm. Maybe a better description would be "I can pick out the highlights better with standard views." Of course, I'm looking at Level III information rather than the higher resolution Level II.

As a Chaser in the field, just how accurate do you really need your information to be? You're not going to get it in "real time" unless you have your own Doppler (we won't go there!); otherwise it's the five-minute-old info from NOAA. If you're close in to a storm, your eyes and ears will give you the best information available.

Now if you're trying to guide a Chaser into a storm, then obviously Level II is better "if" you can get it fast enough to give good updates. Again, though, the Chaser you are reporting to is going to have to make decisions based on what he/she is seeing at the time on the ground. These decisions are based on experience and knowledge of storm evolution. No radar or software will ever change that.

If I'm setting up a radar display for someone with no knowledge of radar and how it works, it would be smoothing. If I'm setting up for data and guidance, it's pixel blocks.
Interesting discussion. I will be sure to include both smoothed and unsmoothed images when I do my writeups to make everyone happy.

As somebody whose computer sucks enough that smoothing isn't an option on GR Level 3 for me, I use raw data constantly and it works out just fine.

On the other hand, IMO smoothing looks nice for case studies (GR Level 2...don't ask me why I can smooth in one but not another because I have no clue) but I will be sure to toggle it on and off for those with different opinions.
It seems to me that there's a vibe that some of us are bashing GR3... that is not true, and it's a great program. I think it might help if I explain what is going on with a graph, as shown below:


The X axis is distance and the Y axis is radar reflectivity; so here we're seeing a cross section through a storm. Overlaid in black are different kinds of wave functions we generate in order to map and color the intermediate pixels at random points within the storm.

Graph A is what we would see if we smoothed the data in a linear fashion between the points, with no data loss at all. This is 100% accurate at the reflectivity bin centers, but a matrix of the data would still show distinct blockiness of the reflectivity bins. Also, since the storm itself isn't blocky, this scheme is unrealistic -- common sense shows that there are significant computed errors in between the reflectivity bins.

Graph B is the result if we apply smoothing. This creates a sort of wavelike function and produces nice output. It also more closely models the "shape" of the storm. But here we can't get the wave to exactly match the data at the reflectivity bin center points -- which is a requirement for accurately reproducing the storm shape; otherwise we're just getting a computer-generated guess. We can get very close, though, and get a really accurate model if we apply a lot of computational power and try techniques like error correction (where we map out the difference, smooth that, and "correct" the first guess). There is a whole branch of mathematics dedicated to this kind of thing, starting with simple things like Haltiner smoothers.
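As a rough sketch of that error-correction idea (a crude one-dimensional successive correction with made-up numbers, loosely in the spirit of Barnes-style schemes, not any particular program's algorithm): smooth the data, compute the residual at the bin centers, smooth the residual with a sharper kernel, and add it back.

```python
def smooth(values, w_center):
    """Weighted 3-point smoother; a larger center weight means less smoothing."""
    out = list(values)
    for i in range(1, len(values) - 1):
        out[i] = (values[i - 1] + w_center * values[i] + values[i + 1]) / (w_center + 2)
    return out

def corrected_smooth(data, passes=3):
    """First guess from a broad smoother, then successive correction:
    smooth the residual with a sharper kernel and add it back, nudging
    the curve toward the observed bin values."""
    guess = smooth(data, w_center=1)  # broad first pass (plain boxcar)
    for _ in range(passes):
        residual = [d - g for d, g in zip(data, guess)]
        guess = [g + r for g, r in zip(guess, smooth(residual, w_center=4))]
    return guess

data = [40.0, 40.0, 60.0, 40.0, 40.0]
print(max(smooth(data, 1)))         # single pass: the 60 dBZ peak cut to ~46.7
print(max(corrected_smooth(data)))  # corrected: peak recovered to ~57.9
```

Each correction pass pulls the smooth curve back toward the observed values, which is exactly the "map the difference, smooth that, correct the first guess" step described above.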

The question here is probably whether the observed errors in GR3 are significant (I'm speaking of "errors" mathematically, GR3 is a good program). If we're talking velocity products, then yes, small errors are going to be completely unacceptable. With reflectivity, though, it is arguable over just how much mathematical error is acceptable, and perhaps whether the "right brain" can compensate, having seen so many storm shapes in the past.

There also seems to be a question of how accurate we need the data to be. That's a red herring, as the data coming from the storm is accurate; it's just that the reflectivity bins can be volumetrically large and come with the typical limitations of any radar data. It is possible to work within that constraint and reproduce that accuracy; the question is how to do it, how much processing power to apply, and whether the displayed errors of a given scheme are significant enough to be of concern to a forecaster. Also the question is whether to do it within the brain and visualize the raw bins themselves, or work with the smoothed data. Since no one has really done any formal studies on this yet it will probably remain a matter of personal taste.


I don't think anyone thinks you are necessarily attacking GR3. It's that we are trying to say it "smooths" data differently than you think it does. It's not so much a blending or a blurring, as the word "smoothing" implies; it's an interpolation.

From my understanding of how radar data is retrieved and displayed, it is already messed up to an extent, as storms are not pixelated. Each radar pixel is actually made up of areas that are higher and lower than what is displayed; it is just kind of averaged for the pixel displayed.

Also, in your graph you show the black line cutting off certain colors and changing them. GR3 doesn't change any pixel's color/dBZ in any way... it just changes how far out it goes from the center of that pixel depending on the neighboring data. The same goes for velocity data: it never changes the intensity of the gate-to-gate. It's not smoothing across the polar scale or lowering each of the intensities to make them meet in the middle.
That's why this type of data interpolation actually has value, unlike the "smoothing" in programs like Stormpredator.
Fact of the matter: When I'm looking at phenomena that are finer scale than the resolution of the radar, I don't want to look at what an algorithm thinks the storm should look like. I want to see the raw data. Then let ME figure out whether it should be interpolated. I've played with the smoothing on GR2... it makes it much harder to diagnose what reflectivity is doing (for example: advection of hydrometeors in a hook echo).

One place I do have to use smoothing, however, is when I process volume scans to create isosurfaces of reflectivity. We end up using a Barnes weighted adjustment scheme with a radius of influence depending on how far away the storm is from the radar. Without this process, you end up with some nasty looking 3d pictures of supercells ; )
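For anyone curious what a Barnes-type weighting looks like, here's a bare-bones sketch (pure Python, hypothetical observation values; real analysis packages like WDSSII are far more involved):

```python
import math

def barnes_point(x, y, obs, kappa):
    """Barnes-style estimate at (x, y): Gaussian distance weighting of
    scattered observations; kappa sets the radius of influence, so a
    larger kappa (e.g. for a storm farther from the radar) smooths more."""
    num = den = 0.0
    for ox, oy, val in obs:
        w = math.exp(-((x - ox) ** 2 + (y - oy) ** 2) / kappa)
        num += w * val
        den += w
    return num / den

# Hypothetical reflectivity observations: (x_km, y_km, dBZ)
obs = [(0, 0, 55.0), (2, 0, 45.0), (0, 2, 40.0), (2, 2, 35.0)]
# Estimate at the grid point (1, 1), equidistant from all four obs:
print(round(barnes_point(1, 1, obs, kappa=2.0), 2))  # 43.75
```

With a small kappa the estimate collapses back toward the nearest observation, which is why the radius of influence has to grow with distance from the radar to avoid gaps between data volumes.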

Scott --

Good point -- it sounds like you're saying that GR3 is doing the linear interpolation (as with Graph A in the illustration). If so, then there is no data loss, which is good. I was assuming that it was doing a Graph B smooth.

Some thoughts about the "A" scheme:
* It retains peaks and troughs perfectly, which is great
* It doesn't use a wave function, so it does not accurately model the shape of the storm and has mathematical interpolation (and gradient) errors.
* It's difficult to know which pixel you're looking at actually represents the real data; as stated, the interpolated ones have large mathematical errors.

When I get a chance to work with Digital Atmosphere some more I'll have to do some experiments on this.

The best way to explain it would be this picture that Mike uses to describe the "smoothing" technique.


The left picture would be the pixelated, non-smoothed version, and the middle one uses the same technique as GR3.
It's obviously not perfect, since you can't create resolution where it doesn't exist, but it's better than just blurring. It makes a fairly good representation of what the picture should be.
As always, the interpolation should be taken with a grain of salt, as it is just how the radar sees the storm -- same with unsmoothed. They are just two different ways of displaying an infinite-resolution storm at a finite resolution.
I've never understood the aversion to smoothed radar data displays by mets, so long as the center volume data values are preserved. Basically - you just need to use a very small radius for the interpolation to avoid smearing the data center values.

There is a suggestion here by some that 'raw' data (of course, we never see the raw data) is 'real', which of course is not true. Real storms aren't digitized. Also, the value reported for each radar volume isn't a clean representation of just the particles within that volume, since the pulse power isn't evenly distributed and wholly contained within the volume area that value represents, and the further away from the radar the volume is, the less representative it is.

If you are smoothing a level III radar data set - significant additional pixelation has already been added to the level II data, such that I can hardly see where smoothing does any additional harm of note. Unless you are looking at specific data volumes of level II data (in particular, velocity data), I see nothing gained or lost of any real importance regardless of choice.

The only time I can think of where it is really meaningful not to have a smoothed display is when you are looking at velocity signatures. I say that not because I think the smoothing corrupts the velocity data if done correctly, but when I'm trying to find the point with the largest shear, it's easier to see with larger regions of the same color next to each other. The meaningfulness of that gate-to-gate shear is often lost (there is a great paper on this topic by Wood and Brown in WAF, 1997). The 'binned' raw data volumes displayed from level II data are convenient for using as a spatial scale, and I miss that when zoomed in with smoothing.
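For what it's worth, the gate-to-gate calculation itself is simple once the data stay in polar form; here's an illustrative sketch (pure Python, hypothetical velocity numbers, not any program's actual algorithm):

```python
import math

def gate_to_gate_shear(radials, az_step_deg, gate_spacing_km):
    """Scan adjacent radials for the largest azimuthal velocity difference.
    radials[i][j] = radial velocity (m/s) at azimuth index i, range gate j.
    Shear is the velocity difference over the physical distance between the
    two gates, which grows with range (r * delta-azimuth in radians)."""
    best = (0.0, None)
    for i in range(len(radials) - 1):
        for j in range(len(radials[i])):
            r_km = (j + 0.5) * gate_spacing_km
            dist_m = max(r_km * math.radians(az_step_deg), 1e-6) * 1000.0
            dv = abs(radials[i][j] - radials[i + 1][j])
            shear = dv / dist_m  # s^-1
            if shear > best[0]:
                best = (shear, (i, j))
    return best

# Hypothetical field: a strong inbound/outbound couplet at gate index 39
radials = [
    [5.0] * 39 + [-30.0] + [5.0] * 10,
    [5.0] * 39 + [28.0] + [5.0] * 10,
]
shear, loc = gate_to_gate_shear(radials, az_step_deg=0.5, gate_spacing_km=0.25)
print(round(shear, 4), loc)  # 0.673 (0, 39)
```

The point is that the calculation needs the two adjacent azimuths at the same range gate; resample to a smoothed Cartesian grid first and that pairing is gone.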

Personally, I don't use smoothing because I don't care for the look of it. I'm just too used to seeing radar data displayed the other way. That said, I don't think it makes a bit of difference for a storm chaser looking to find the general location of a storm, its intensity, and whether or not it has a strong mid-level mesocyclone. All of those features will be easy to identify with smoothing or without.

For me, I like smoothing, but it all depends on the colour palette used. Some palettes make it hard to discern the strength of a storm (break points that are ill-defined). With GRx software, one can write a palette that defines solid colours tied to certain (ranges of) dBZ, which will give more of a banded appearance. With L3 data, I would probably say that the SolidColor approach (default base reflectivity palette in GRLevel3, for example) would result in the greatest accuracy. When the "Color" attribute is used, then interpolation comes into play. It can also be argued that accuracy with a "Color" vs. "SolidColor" based palette can be reduced, especially with L3 data, but it sure makes the data displayed much more attractive :).
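For reference, those two attributes live in the GRLevelX color table files; a minimal reflectivity palette sketch might look like this (illustrative thresholds and RGB values only - check the GRLevelX color table documentation for the exact syntax):

```
Units: DBZ
Step: 5

SolidColor: 65 255 0 255
SolidColor: 50 255 0 0
Color:      35 255 255 0 230 150 0
Color:      20 0 236 0 0 100 0
```

The idea is that a "SolidColor:" line renders a flat band with no blending between breakpoints, while a "Color:" line with a second RGB triplet lets the display shade across the band, which is where the gradient look comes from.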
Now when velocity is concerned, I have that UNchecked on both GR3 and GR2. I don't want velocity data interpolated (or smeared) at all.
I do periodically uncheck smoothing on both clients to sometimes get a better grasp on a storm.
Examples forthcoming. 1st is the 5/3/1999 Moore / OKC supercell with the new NWS palette which has the "SolidColor" attribute in the palette, unsmoothed.

The same palette with smoothing...

Then there is my "severe weather" palette which has the color (vs solidcolor) attribute. First example is unsmoothed.

Then the smoothed version...
One issue about smoothing to consider is that there are numerous algorithms out there that can be used to filter data. Gaussian, Cressman, Median, Percent, Scale, Oriented, Erosion, and Dilation filters are examples. Also, kernel size and shape can be varied, as can the kernel coordinate systems (polar versus Cartesian). WDSSII offers many of these varieties of filters to try out (http://www.wdssii.org - free for .edu and .gov).
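Those filters really do give different answers on the same data; a tiny illustration (plain Python, made-up dBZ values) of median versus mean on a speckle bin next to a sharp edge:

```python
def median3(values):
    """3-point median filter: removes single-bin speckle while
    leaving genuine edges alone."""
    out = list(values)
    for i in range(1, len(values) - 1):
        out[i] = sorted(values[i - 1:i + 2])[1]
    return out

def boxcar3(values):
    """3-point mean filter: smears everything, speckle and edges alike."""
    out = list(values)
    for i in range(1, len(values) - 1):
        out[i] = sum(values[i - 1:i + 2]) / 3
    return out

# One noisy speckle bin (65) in light rain, next to a sharp 20->50 edge
ray = [20, 20, 65, 20, 20, 50, 50, 50]
print(median3(ray))  # speckle removed, edge preserved exactly
print(boxcar3(ray))  # speckle only reduced, edge smeared across bins
```

Which behavior you want depends entirely on what you're diagnosing, which is exactly why the choice of filter (and kernel) matters as much as the decision to smooth at all.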

All of these are going to result in different answers. But for detailed analysis of radar data for NWS warning ops, I prefer to stick with the native unsmoothed data.