It seems to me that there's a vibe that some of us are bashing GR3... that is not true, and it's a great program. I think it might help if I explain what is going on with a graph, as shown below:

The X axis is distance and the Y axis is radar reflectivity, so here we're seeing a cross section through a storm. Overlaid in black are different kinds of wave functions we generate in order to map and color the intermediate pixels at arbitrary points within the storm.

**Graph A** is what we would see if we smoothed the data in a linear fashion between the points, with no data loss at all. This is 100% accurate at the reflectivity bin centers, but a matrix of the data would still show the distinct blockiness of the reflectivity bins. Also, since the storm itself isn't blocky, this scheme is unrealistic -- common sense shows that there are significant computed errors in *between* the reflectivity bins.
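As a toy illustration of the Graph A scheme, here is linear interpolation between some made-up bin-center values (the numbers are illustrative only, not GR3's actual code or real radar data). Note that it reproduces every bin-center value exactly, but everything between centers is a straight-line guess:

```python
import numpy as np

# Hypothetical reflectivity values (dBZ) at five bin centers along the cross section
bin_centers = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # distance, arbitrary units
reflectivity = np.array([20.0, 45.0, 55.0, 40.0, 25.0])  # dBZ at each center

# "Graph A": piecewise-linear interpolation between the bin centers
x = np.linspace(0.0, 4.0, 41)
linear = np.interp(x, bin_centers, reflectivity)

# Exact at every bin center...
assert np.allclose(np.interp(bin_centers, bin_centers, reflectivity), reflectivity)
# ...but in between, the value is just a straight line joining neighbors,
# e.g. at x = 0.5 we get the midpoint of 20 and 45 dBZ
```

The kinks at the bin centers are exactly the "computed errors in between" at issue: a real storm's reflectivity field doesn't change slope abruptly at bin boundaries.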

**Graph B** is the result if we apply smoothing. This creates a sort of wavelike function and produces nice output. It also more closely models the "shape" of the storm. But here we can't get the wave to exactly match the data at the reflectivity bin centers -- which is a requirement for accurately reproducing the storm shape; otherwise we're just getting a computer-generated guess. We can get very close, though, and get a really accurate model if we apply a lot of computational power and try techniques like error correction (where we map out the difference, smooth that, and "correct" the first guess). There is a whole branch of mathematics dedicated to this kind of thing, starting with simple tools like Haltiner smoothers.
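That error-correction loop can be sketched in a few lines. The 1-2-1 filter and the reflectivity values below are illustrative assumptions on my part, not GR3's actual smoother -- the point is just that measuring the misfit at the bin centers, smoothing it, and folding it back in pulls the smooth curve back onto the data:

```python
import numpy as np

def smooth(y):
    """One pass of a simple 1-2-1 weighted filter (a Shuman/Haltiner-style smoother)."""
    padded = np.pad(y, 1, mode="edge")
    return 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]

# Hypothetical reflectivity (dBZ) at five bin centers
data = np.array([20.0, 45.0, 55.0, 40.0, 25.0])

# Smoothing alone ("Graph B") no longer matches the data at the bin centers
smoothed = smooth(data)
err_before = np.max(np.abs(smoothed - data))  # 6.25 dBZ off at the worst bin

# Error correction: map out the difference at the bin centers, smooth that,
# and "correct" the first guess; repeat until the misfit is negligible
corrected = smoothed
for _ in range(20):
    corrected = corrected + smooth(data - corrected)
err_after = np.max(np.abs(corrected - data))  # now under 0.1 dBZ everywhere
```

The corrected curve stays smooth between bins but converges back onto the bin-center values -- the extra accuracy is bought with the extra iterations, which is the computational-power trade-off mentioned above.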

The question here is probably whether the observed errors in GR3 are significant (I'm speaking of "errors" mathematically; GR3 is a good program). If we're talking velocity products, then yes, small errors are going to be completely unacceptable. With reflectivity, though, it is arguable just how much mathematical error is acceptable, and perhaps whether the "right brain" can compensate, having seen so many storm shapes in the past.

There also seems to be a question of how accurate we need the data to be. That's a red herring, as the data coming from the storm *is* accurate; it's just that the reflectivity bins can be volumetrically large and come with the typical limitations of any radar data. It is possible to work within that constraint and reproduce that accuracy; the questions are how to do it, how much processing power to apply, and whether the displayed errors of a given scheme are significant enough to be of concern to a forecaster. There is also the question of whether to do it within the brain and visualize the raw bins themselves, or work with the smoothed data. Since no one has really done any formal studies on this yet, it will probably remain a matter of personal taste.

Tim