GRLevel3 vs. Mobile Threat Net

Originally posted by rdewey
It would be sweet to just get the data, and then use GRLevelWhateva to view it.

Originally posted by Tyler Allison
Somehow I think Mr. B would frown on XM doing that ;)

LOL! Good point :lol: :lol: <-- Deserves TWO laughs

Too bad Baron sucks, though I guess then it wouldn't be as funny :roll: :lol:
 
Baron's Threat Net applies more smoothing than any other radar program I have used or seen; even GRLevel3 does not smooth the data that much. It completely distorts everything and can pull hidden details away from the main cell.
 
Originally posted by rdewey
Interpolation, Mr. Kahn... Interpolation... :lol:

Mike has been making it a point to note that GR indeed interpolates rather than smooths. Regardless, there ARE times when turning on "smoothing" helps you see storm structure more clearly. I wish I had a good example readily available, but there are times when turning it on does bring out some storm structure. I used to be pretty critical of GR "smoothing", but I've actually grown to like it. Again, however, note that GR's "smoothing" is more like interpolation, since it's not smearing the data. LOL Ask Mike, I think he posted something about it over in the GR forums.
 
Last year was the first time I'd chased with any kind of radar in the car (the ThreatNet) - we found it very useful in homing in on storms as they were developing, and also for determining whether we could take on the forward flank and come out (fairly) unbruised! I agree that the detail is not great, and once you've gotten into position, it's usually much better to go visual and use your storm experience etc. to follow the storm, but the ThreatNet is certainly a useful tool, IMO.
 
I wonder what the bandwidth of one of those satellites is?

The XM receiver is receiving two of the XM "audio" channels that are each 64 kbps, for an overall max bandwidth just shy of 128 kbps after encryption and error-correction overhead.
One could argue about their bit budget and which products to distribute, but considering this is full contiguous coverage for the entire USA, it is a pretty impressive feat. Baron could greatly improve the product just by adding SPC products. Surely watch boxes, MDs, and SWODYs could not consume much bandwidth.
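As a sanity check on that bit budget, a few lines of Python (the product sizes below are my own rough guesses for illustration, not Baron's figures):

[code]
# Back-of-envelope for the XM downlink bit budget.
# Product sizes are assumptions, not Baron's actual numbers.
LINK_BPS = 128_000  # roughly two 64 kbps XM channels, before overhead

def seconds_to_send(num_bytes, bps=LINK_BPS):
    """Link time needed to push one product through the downlink."""
    return num_bytes * 8 / bps

for name, size_bytes in [("watch box", 2_000),
                         ("mesoscale discussion", 5_000),
                         ("SWODY", 15_000)]:
    print(f"{name}: ~{seconds_to_send(size_bytes):.2f} s of link time")
[/code]

Even a generous 15 KB outlook is about a second of link time, which is why text products look cheap next to the radar data.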
 

I read the GRLevelX forum note on this topic, and they state you actually get more accuracy with interpolation 'smoothing' turned on. Instead of smoothing away data, it uses a somewhat sophisticated algorithm to determine how to average and draw the bins. Without it, supposedly a lot is lost just looking at the raw buckets alone.
 
There is another thread on the forum about this company.

http://www.raysat.com/Shopping/CategoryInf...?CategoryID=191

Hopefully high-speed internet access is about to be available for a reasonable price. I will be anxious to hear what their monthly fees will be.


And from Mike on smoothing:

I was pointed to a thread about smoothing on stormtrack.org tonight. There are a couple of points I've written about before but want to reiterate again:

1) If you want to see what the NEXRAD is reporting then you should use the unsmoothed display. However, the unsmoothed display, a.k.a. point filtering, is always the worst reconstruction of reality.

2) If you want to see the best reconstruction of reality then you should use smoothing. The smoothed display is always a better reconstruction of reality than the unsmoothed display.

These are mathematically provable facts.

Unsmoothed displays suffer from the highest amount of aliasing. For example, an unsmoothed bin of 60 dBZ will show as a 1 km long area of purple, regardless of the dBZ values in the surrounding bins. If the surrounding bins were near 60 dBZ, that would be fine. However, if the surrounding bins were 40 dBZ then a more accurate reconstruction of reality would be for the purple area to be *much* smaller than 1 km. Smoothing accomplishes this.
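You can see the effect numerically on a toy grid (a hypothetical sketch using scipy; GR's actual rendering runs on the GPU, not through scipy):

[code]
import numpy as np
from scipy.ndimage import zoom

# Toy field: one 60 dBZ bin surrounded by 40 dBZ bins.
bins = np.full((5, 5), 40.0)
bins[2, 2] = 60.0

up_point = zoom(bins, 8, order=0)  # nearest neighbor = unsmoothed
up_bilin = zoom(bins, 8, order=1)  # bilinear = GR-style interpolation

# Pixels drawn at >= 55 dBZ ("purple") after 8x upsampling:
print("point:   ", np.count_nonzero(up_point >= 55))  # a full bin's worth
print("bilinear:", np.count_nonzero(up_bilin >= 55))  # a small spot at the peak
[/code]

The point-filtered version paints the entire bin purple; the bilinear version shrinks the purple to a small region around the peak, just as Mike describes.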

GRS currently uses bilinear filtering, which is only one step above point filtering in terms of quality. Bilinear filtering does not "blur" the data in any way.

One final clarification: I should have used "interpolation" from the beginning instead of "smoothing". In technical terms, GRS apps do not "smooth" the data, they interpolate between sampled data values. In future apps, this distinction will become more apparent as we go from simple bilinear filtering to higher-order filtering.


Here's a page showing some examples of different interpolation techniques in action:

http://photoenlargement.imagener.com/

where "nearest neighbor" is the same as unsmoothed. Note that the biggest bang-for-your-buck comes from bilinear interpolation. Bicubic interpolation adds more fine details but its main purpose is to reduce the bilinear interpolation artifacts. Bilinear artifacts are those prism-like distortions on curved areas which make them jagged. You can see them on the top of the cat's right eye in the middle image. These are due to high frequencies in the spatial data interacting with the bilinear transform.

And a final note about velocity. Velocity displays are difficult because you're trying to display a vector, a linear magnitude with a point-sampled direction, with a single dot of color. Typically, velocity color tables do this by displaying the direction as one of two fully saturated hues (red and green) with the magnitude as the lightness of the hue.
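In code, that mapping might look like this (a sketch of the scheme as described, not Baron's or GRS's actual color table):

[code]
def velocity_color(v, v_max=64.0):
    """Map radial velocity to (r, g, b) in [0, 1]: the direction picks
    the hue (inbound green, outbound red), the magnitude its lightness."""
    t = min(abs(v) / v_max, 1.0)  # 0 = calm, 1 = full scale
    if v < 0:
        return (0.0, t, 0.0)      # inbound
    return (t, 0.0, 0.0)          # outbound

print(velocity_color(-30.0), velocity_color(50.0))
[/code]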

GRS attempts to smooth the velocity by point interpolating the direction and linearly interpolating the magnitude. This was only partially successful. Higher order filtering on a standard color table may be more successful.
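My reading of that description, as a one-dimensional sketch (a guess at the scheme from Mike's wording, not GRS source code):

[code]
def interp_velocity(v0, v1, t):
    """Blend two radial-velocity samples GRS-style: the direction
    (sign) is point-sampled from the nearer bin, while the magnitude
    is linearly interpolated."""
    sign = -1.0 if (v0 if t < 0.5 else v1) < 0 else 1.0
    mag = abs(v0) * (1.0 - t) + abs(v1) * t
    return sign * mag

# Across a zero crossing (-30 kt inbound next to +10 kt outbound) the
# magnitude blends smoothly while the direction flips at the midpoint.
print([round(interp_velocity(-30, 10, t), 1)
       for t in (0.0, 0.25, 0.5, 0.75, 1.0)])
[/code]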

Another approach would be to combine velocity with reflectivity into a single 3D display. Velocity would be a height field of positive and negative values with reflectivity as its texture. Of course, background maps and other things would no longer work properly.
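Matplotlib can mock up that idea in a few lines (the fields below are fabricated stand-ins, not real radar data; assumes a matplotlib recent enough to have the turbo colormap):

[code]
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

# Fake fields on a small grid.
x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
velocity = 30 * x * np.exp(-(x**2 + y**2) * 3)   # couplet-like signature
reflectivity = 60 * np.exp(-(x**2 + y**2) * 2)   # high-dBZ core

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Height field = velocity (positive and negative); texture = reflectivity.
ax.plot_surface(x, y, velocity, facecolors=cm.turbo(reflectivity / 60),
                rstride=1, cstride=1, shade=False)
ax.set_zlabel("radial velocity")
plt.show()
[/code]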

end quote
 
Originally posted by Mike
For example, an unsmoothed bin of 60 dBZ will show as a 1 km long area of purple, regardless of the dBZ values in the surrounding bins. If the surrounding bins were near 60 dBZ, that would be fine. However, if the surrounding bins were 40 dBZ then a more accurate reconstruction of reality would be for the purple area to be *much* smaller than 1 km. Smoothing accomplishes this.
This is not exactly true (I hope Mike is reading this)...

The reflectivity bins represent an average reflectivity in that volume. There are really two averages: the average of all reflectors across that sample volume, and the average of all the averages of the samples within that volume (perhaps 32 or 64 samples). Furthermore, the actual volume being sampled isn't exactly 1 degree in diameter (it's a 3D "cone"). The beamwidth represents the half-power distance from the center of the beam. The actual volume being sampled is much wider than 1 degree, although beyond 1 degree the power drops off exponentially and the contribution to the average is less.
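A textbook idealization of that beam-weighted average (the Gaussian pattern and averaging in linear Z are standard simplifications, not the WSR-88D's exact antenna pattern):

[code]
import numpy as np

def beam_average_dbz(dbz_profile, offsets_deg, half_power_bw=0.95):
    """Weighted beam average: contributions fall off away from beam
    center, with the two-way pattern down 6 dB (power factor ~0.25)
    at +/- half the half-power beamwidth."""
    dbz = np.asarray(dbz_profile, float)
    offsets = np.asarray(offsets_deg, float)
    w = 0.25 ** ((offsets / (half_power_bw / 2)) ** 2)  # two-way Gaussian
    z = 10 ** (dbz / 10)                    # average in linear units...
    return 10 * np.log10(np.sum(w * z) / np.sum(w))  # ...back to dBZ

# A narrow 60 dBZ core at beam center flanked by 40 dBZ:
print(beam_average_dbz([40, 60, 40], [-0.5, 0.0, 0.5]))  # ~58.5 dBZ
[/code]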

That being said, a sample volume showing 60 dBZ before interpolation should, in reality, end up with some values larger than 60 dBZ and some values smaller than 60 dBZ after "interpolation" if the effect is to simulate reality as closely as possible (like in the cat picture example). The best way to do this is to assume that the peak reflectivity is in the very center of that sample, but in reality, that is not always the case. Where this would mostly have issues is for sample volumes representing maximum and minimum values (the former being important if you want to assess peak storm intensity). Median filters will always reduce maxima and increase minima. Dilation filters increase maxima, and erosion filters decrease minima.
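Greg's point about those filters is easy to verify with scipy (a sketch for illustration; nobody is suggesting GRS uses these):

[code]
import numpy as np
from scipy.ndimage import median_filter, grey_dilation, grey_erosion

field = np.full((5, 5), 40.0)
field[2, 2] = 60.0  # an isolated reflectivity maximum
field[0, 0] = 20.0  # an isolated minimum

for name, out in [("median",   median_filter(field, size=3)),
                  ("dilation", grey_dilation(field, size=(3, 3))),
                  ("erosion",  grey_erosion(field, size=(3, 3)))]:
    print(f"{name:8s} max={out.max():.0f} min={out.min():.0f}")
# The median filter wipes out both the 60 dBZ peak and the 20 dBZ hole;
# dilation spreads the maximum over its neighborhood, erosion the minimum.
[/code]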

greg
 
As for Raysat, I looked it up on the net last night, and it said it wouldn't fit on smaller passenger cars and is supposed to cost about $3900. A little too pricey for me. Think I'll wait for WiMAX to get established nationwide.
 

This is a response from Mike to Greg's post above (he tried to post but for some reason could not)


--------------------------------------------------------------------------------

I tried to respond to this post by Greg Stumpf on StormTrack.com but it wouldn't let me, so I'll reply here. Link to original thread:

http://www.stormtrack.org/forum/index.php?...pic=10021&st=20

Quote (Mike's original post):
For example, an unsmoothed bin of 60 dBZ will show as a 1 km long area of purple, regardless of the dBZ values in the surrounding bins. If the surrounding bins were near 60 dBZ, that would be fine. However, if the surrounding bins were 40 dBZ then a more accurate reconstruction of reality would be for the purple area to be *much* smaller than 1 km. Smoothing accomplishes this.


Quote (Greg's reply):
This is not exactly true (I hope Mike is reading this)...

The reflectivity bins represent an average reflectivity in that volume. There are really two averages: the average of all reflectors across that sample volume, and the average of all the averages of the samples within that volume (perhaps 32 or 64 samples). Furthermore, the actual volume being sampled isn't exactly 1 degree in diameter (it's a 3D "cone"). The beamwidth represents the half-power distance from the center of the beam. The actual volume being sampled is much wider than 1 degree, although beyond 1 degree the power drops off exponentially and the contribution to the average is less.


The best theoretical reconstruction technique would be to take the transfer function that the sampling beam performs, invert it, then use that when displaying the NEXRAD data. The main question would then be how that inverse transform would actually look on screen. My guess is that the kernel for the inverse transform would contain high frequencies and they would produce ugly ringing in the output. Why? Because information is irretrievably lost in the beam sampling/averaging.
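A one-dimensional toy run of exactly that failure mode (illustrative numbers, with a Gaussian standing in for the real beam transfer function):

[code]
import numpy as np

n = 64
truth = np.zeros(n)
truth[32] = 1.0  # a single sharp "reflector"

# Gaussian stand-in for the beam's transfer function.
beam = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
beam /= beam.sum()
H = np.fft.fft(np.fft.ifftshift(beam))

sampled = np.real(np.fft.ifft(np.fft.fft(truth) * H))
sampled += np.random.default_rng(0).normal(0, 1e-6, n)  # tiny noise

# Naive inversion of the transfer function:
recovered = np.real(np.fft.ifft(np.fft.fft(sampled) / H))
print("max reconstruction error:", np.abs(recovered - truth).max())
# Even microscopic noise explodes where |H| ~ 0: the detail the beam
# averaged away is gone, and forcing it back produces the ugly ringing.
[/code]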

I've run the radar data through several types of filters. Bilinear gives the best bang-for-the-buck and is trivial to implement in hardware. Certain interpolating bicubic splines produce somewhat better looking output but they require fairly complex pixel shaders and higher end hardware (i.e., TV land). GRS will explore this area in future high end apps.


Quote (Greg's reply, continued):
That being said, a sample volume showing 60 dBZ before interpolation should, in reality, end up with some values larger than 60 dBZ and some values smaller than 60 dBZ after "interpolation" if the effect is to simulate reality as closely as possible (like in the cat picture example). The best way to do this is to assume that the peak reflectivity is in the very center of that sample, but in reality, that is not always the case. Where this would mostly have issues is for sample volumes representing maximum and minimum values (the former being important if you want to assess peak storm intensity). Median filters will always reduce maxima and increase minima. Dilation filters increase maxima, and erosion filters decrease minima.

GRS apps assign the peak reflectivity to the beam center then bilinearly interpolate to the neighbors. Having GRS apps amplify the peak above the value in the radar data would require a complex filter kernel, one that maintains the DC component (so that a uniform area of, say, 40 dBZ stays 40 dBZ). Once again, we're back to a kernel with negative sidelobes, and that can introduce ugly ringing in the output. The other filters mentioned (median, dilation, etc.) mangle the data and GRS will not use them for radar data display.
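Both halves of that claim check out in a few lines (the sharpening kernel below is illustrative, not one GRS uses):

[code]
import numpy as np

# Bilinear weights are non-negative and sum to 1 (a convex combination),
# so the result can never exceed the largest input bin:
def bilerp(q00, q10, q01, q11, tx, ty):
    top = q00 * (1 - tx) + q10 * tx
    bot = q01 * (1 - tx) + q11 * tx
    return top * (1 - ty) + bot * ty

print(bilerp(60, 40, 40, 40, 0.25, 0.25))  # 51.25, inside [40, 60]

# A kernel with negative sidelobes keeps the DC component (weights sum
# to 1, so uniform 40 dBZ stays 40 dBZ) but overshoots near edges:
k = np.array([-0.1, 1.2, -0.1])            # sums to 1.0
uniform = np.full(8, 40.0)
edge = np.array([40.0] * 4 + [60.0] * 4)
print(np.convolve(uniform, k, mode="same")[1:-1])  # all 40
print(np.convolve(edge, k, mode="same").max())     # 62 > 60: overshoot
[/code]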

GRS's smoothing is certainly not perfect but it is far better at reconstructing the physical phenomena than point sampling (rectangular bins). Of this, there is no doubt.

Mike
 