MPAS 2016 - Model for Prediction Across Scales

I know there was a thread started on this last year, but given some new configurations and results, I figured this would serve well as a new thread.

Some of you might remember from last year that there was an experimental model being run as part of the National Severe Storms Laboratory (NSSL) Hazardous Weather Testbed (HWT) known as MPAS, or the Model for Prediction Across Scales. This model is one of two in the final running to replace the GFS at NCEP (the other is FV3), and it is being run once again this year to support the HWT. Below are 1) the link to the model runs, 2) information about what makes MPAS different, and 3) changes from last year. Hopefully this will serve as a reference point for those who decide to use or look at it this spring.


  • Highlights
Please note: much of this information is borrowed from last year's runs. While I do work with the folks running it, I'm not fully aware of all the changes made to this year's runs. As I come by new information, I will update these bits of info.
Link: http://www2.mmm.ucar.edu/imagearchive/mpas/images.php
Configurations: http://www2.mmm.ucar.edu/projects/mpas/Projects/MPAS_CONV_2016/
Initialization Times: Once daily at 00 UTC
Initialization Data: GFS
Forecast Hours: 120 (5 days)
Highest Horizontal Resolution: 3 km
Lowest Horizontal Resolution: 15 km
Vertical Levels: 55
Grid Points: 6.5 million
There are domain zooms for different parts of the U.S.

  • What is MPAS and why is it different?
Much like the GFS, MPAS is a global model. However, that is where the similarities end. While the GFS works on a spectral grid that defines resolution in wave space, MPAS uses a "physical grid" on the sphere, much like the limited-area-domain WRF models (NAM, HRRR, the HiRes Window models, etc.). MPAS is developed by the same folks at NCAR who developed the WRF-ARW.

The primary feature of MPAS is that the model grid resolution is not static across the whole domain. The grid can be locally refined to a smaller grid spacing over a particular geographic area, allowing the model to focus its computing power on the areas of greater interest. It is believed that this gradual reduction in grid spacing, rather than static nested domains, reduces feedback errors in the model and could lead to overall better model performance. Additionally, the grid cells are hexagonal, with the wind components calculated on the edges of each cell. Essentially, this means there are more "calculations" per grid cell than on a traditional rectangular grid. Despite the higher computational cost, this leads to a higher "effective resolution" for the scale of features that can be resolved.
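
For those curious how the variable resolution works in practice: MPAS meshes come from a smooth density function on the sphere rather than from nesting. As a rough, purely illustrative sketch (the tanh shape, refinement radius, and transition width below are my own assumptions, not the actual MPAS mesh generator), here's what a smooth 15 km-to-3 km spacing transition might look like:

```python
import numpy as np

def target_spacing_km(dist_km,
                      fine_dx=3.0,       # finest spacing inside the refined region (km)
                      coarse_dx=15.0,    # spacing outside the refined region (km)
                      radius_km=2000.0,  # radius of the refined region (assumed)
                      width_km=600.0):   # width of the transition zone (assumed)
    """Blend smoothly between fine and coarse cell spacing with a tanh ramp.

    This only illustrates the idea of gradual refinement; real MPAS meshes
    are built from a density function, with cell spacing varying smoothly
    rather than jumping at a nest boundary.
    """
    # w goes from 0 inside the refined region to 1 far outside it
    w = 0.5 * (1.0 + np.tanh((dist_km - radius_km) / width_km))
    return fine_dx + (coarse_dx - fine_dx) * w

# Spacing at a few distances from the center of the refined region
for d in (0, 1500, 2000, 2500, 5000):
    print(f"{d:5d} km from center -> {target_spacing_km(d):4.1f} km cells")
```

The point is that there's no sharp boundary for waves to reflect off of, which is the suspected source of the feedback errors in traditional nesting.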
  • Initial Results from Spring 2015
One of the goals of MPAS is to see if we can add any predictability to convective forecasts beyond 24-48 hours, which is one of the primary reasons MPAS is run out to 5 days. My research involves looking into convective predictability beyond Day 2, so I figured I would share an interesting case from last year. Keep in mind that some of the predictability may be attributable to the overall synoptic pattern of last May. Part of this year's work will be running a WRF counterpart and comparing the quality of the forecasts.

The verification compares observed storm reports and updraft helicity (UH) forecasts to create a "practically perfect forecast" and a "surrogate severe forecast," using UH greater than a specified threshold as a surrogate for severe weather. Each is then gridded and smoothed such that a "neighborhood probability" of seeing that UH value is calculated, and those are the probabilities plotted. The image below is one case (May 8, 2015), comparing the Day 1 through Day 4 forecasts with the observed storm reports. The FSS number, or Fractions Skill Score, is a measure of how well the forecast did. As you can see, the FSS is similar or higher after Day 2 than it is on Day 1, indicating that there may be some good predictability beyond Day 2.
[attached image: May 8, 2015 case, Day 1 through Day 4 surrogate severe probabilities vs. observed storm reports]
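
For anyone who wants to play with this kind of verification themselves, here's a minimal sketch of the idea (the UH threshold and smoothing length below are placeholders, not the values used in the actual verification):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def neighborhood_prob(binary_field, sigma_gridpts):
    """Smooth a binary exceedance grid into a neighborhood probability field."""
    return gaussian_filter(binary_field.astype(float), sigma=sigma_gridpts)

def fss(p_fcst, p_obs):
    """Fractions Skill Score (Roberts and Lean 2008): 1 is perfect, 0 is no skill."""
    mse = np.mean((p_fcst - p_obs) ** 2)
    mse_ref = np.mean(p_fcst ** 2) + np.mean(p_obs ** 2)
    return 1.0 - mse / mse_ref

# Fake data standing in for a real case; threshold and sigma are assumptions
uh = np.random.rand(100, 100) * 120        # max updraft helicity grid (m^2/s^2)
reports = np.random.rand(100, 100) < 0.01  # gridded storm reports (binary)

surrogate = neighborhood_prob(uh > 75.0, sigma_gridpts=5)  # "surrogate severe"
pperfect = neighborhood_prob(reports, sigma_gridpts=5)     # "practically perfect"
print(f"FSS = {fss(surrogate, pperfect):.3f}")
```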
  • Changes for 2016
The most notable change in the 2016 runs will (hopefully) be the use of a different PBL physics scheme. Last year the MYNN scheme was used, which has some known biases. A better PBL scheme is being ported over from WRF, but initial testing of the new scheme has not gone well. At this time, I am unsure if they have fixed the issues or if they have reverted to MYNN.

The microphysics scheme has also been changed from WSM6 (from WRF) to Thompson microphysics for the new set of runs. I can't quite remember why this change was made, but if I come by the information I will update it here.

Lastly, the CONUS domain is slightly smaller in size but retains the same grid spacing. The grid spacing outside the CONUS has been reduced from 50 km in 2015 to 15 km in 2016, and the transition zone of the grid is a little narrower.
2016 Grid [attached image]
2015 Grid [attached image]

If anyone has any questions regarding MPAS, its configurations, or its forecasts, I'd be more than happy to answer to the best of my ability. I think it's a really exciting new tool, and given how well it performed last year, I look forward to seeing how it does this year as well. Please note that if you do go and attempt to see how it did with the events earlier last week, those forecasts were plagued by the bug in the PBL scheme and are not representative of realistic, physical forecasts. I am told, however, that they will go back and rerun the data at some point.

Additionally, I will be working on bringing MPAS soundings back to SHARPpy once again. I'll be sure to let everyone know when it's up and going.

Happy forecasting!

Edit: One last thing... Taking a look at the 2m fields on the most recent run, there still seem to be some issues being worked out. Supposedly, if things aren't fixed by tomorrow, they will revert to MYNN.
 
Edit: One last thing... Taking a look at the 2m fields on the most recent run, there still seem to be some issues being worked out. Supposedly, if things aren't fixed by tomorrow, they will revert to MYNN.

At least in WRF, 2-m and 10-m fields are diagnosed in the surface driver, not the PBL driver, and so are related to the surface layer scheme and land-surface model rather than the PBL scheme. Perhaps MPAS works differently.

One obvious advantage to using Thompson over WSM6 is that Thompson is double-moment for rain and probably cloud ice, and numerous studies have shown double-moment schemes depict the structure of supercells and MCSs better than single-moment schemes. The obvious disadvantage is the increased computational time. Thompson is a pretty good scheme. So is Morrison. M-Y is the most complex, but it tends to have issues from version to version.
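
To make the "double-moment" point concrete: with both rain mass and number concentration predicted, the mean drop size is free to evolve, which is part of what lets these schemes get storm structure more right. A toy calculation (constants assumed, not taken from any particular scheme):

```python
import numpy as np

RHO_AIR = 1.2        # kg/m^3, near-surface air density (assumed)
RHO_WATER = 1000.0   # kg/m^3

def mean_volume_diameter_mm(q_rain, n_rain):
    """Mean-volume raindrop diameter from rain mass mixing ratio and number.

    A double-moment scheme predicts both q (kg/kg) and N (drops per m^3), so
    drop size can vary; a single-moment scheme predicts only q and diagnoses
    N from a fixed size distribution, losing this degree of freedom.
    """
    drop_volume = q_rain * RHO_AIR / (RHO_WATER * n_rain)  # m^3 per drop
    return 1e3 * (6.0 * drop_volume / np.pi) ** (1.0 / 3.0)

# Same rain mass, different number concentrations -> very different drops
for n in (1e2, 1e3, 1e4):
    print(f"N = {n:8.0f} /m^3 -> mean drop D = {mean_volume_diameter_mm(1e-3, n):.2f} mm")
```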
 
Great information. Based on the various presentations and other documentation I've been able to dig up the last couple months it appears NOAA may be close to announcing the GFS replacement. One document I read said the final decision could be made by the end of June. Do you have any insight on whether FV3 or MPAS is the leading contender?
 
At least in WRF, 2-m and 10-m fields are diagnosed in the surface driver, not the PBL driver, and so are related to the surface layer scheme and land-surface model rather than the PBL scheme. Perhaps MPAS works differently.

One obvious advantage to using Thompson over WSM6 is that Thompson is double-moment for rain and probably cloud ice, and numerous studies have shown double-moment schemes depict the structure of supercells and MCSs better than single-moment schemes. The obvious disadvantage is the increased computational time. Thompson is a pretty good scheme. So is Morrison. M-Y is the most complex, but it tends to have issues from version to version.

Thanks for the bit about Thompson microphysics - I had actually forgotten that it was double-moment, so your explanation makes sense.

And you're right, the surface scheme is likely the root of the problem; I don't think MPAS works any differently. However, I do not actually conduct any of the runs directly, so I may be wrong. I think I incorrectly assumed that the surface layer scheme and PBL scheme were bundled, since both were MYNN last year. I've sent an email to the folks running it to make sure they're aware the issue is persisting, though.

Great information. Based on the various presentations and other documentation I've been able to dig up the last couple months it appears NOAA may be close to announcing the GFS replacement. One document I read said the final decision could be made by the end of June. Do you have any insight on whether FV3 or MPAS is the leading contender?

As far as NOAA's selection, I do not have any indication one way or the other as far as I'm aware. I am not involved in anything on that end. However, I do know that MPAS has the support of both the research-to-operations (NSSL) and the academic (i.e., OU) communities due to its ties with NCAR and the overall configuration of the model core being familiar from already-established cores like WRF. FV3 is being developed by GFDL, however, and may have a little more sway since GFDL is a NOAA-funded laboratory. What FV3 has on MPAS is that it's designed to run faster, but it sacrifices resolution for that speed.
 
As far as NOAA's selection, I do not have any indication one way or the other as far as I'm aware. I am not involved in anything on that end. However, I do know that MPAS has the support of both the research-to-operations (NSSL) and the academic (i.e., OU) communities due to its ties with NCAR and the overall configuration of the model core being familiar from already-established cores like WRF. FV3 is being developed by GFDL, however, and may have a little more sway since GFDL is a NOAA-funded laboratory. What FV3 has on MPAS is that it's designed to run faster, but it sacrifices resolution for that speed.

From what I've heard, this decision is insanely political, and probably won't be made in the best interest of the science. I've heard from someone I know who works at NCEP that most of the NOAA heads are favoring FV3 over MPAS, but as you said, basically most or all of the academic/research community seems to favor MPAS. It would make a ton of "sense" if they went with FV3 since NCEP decided to sever most of their remaining connection with the academic and research sector when they abandoned the WRF as the dynamics core of the NAM and instead decided to create their own...the NMMB. As far as I've seen, the NMMB is not statistically better than the WRF, but that doesn't stop NCEP from making sure all of the work done by the research sector falls flat at the feet of NOAA heads and never gets incorporated into operations.
 
From what I've heard, this decision is insanely political, and probably won't be made in the best interest of the science. I've heard from someone I know who works at NCEP that most of the NOAA heads are favoring FV3 over MPAS, but as you said, basically most or all of the academic/research community seems to favor MPAS. It would make a ton of "sense" if they went with FV3 since NCEP decided to sever most of their remaining connection with the academic and research sector when they abandoned the WRF as the dynamics core of the NAM and instead decided to create their own...the NMMB. As far as I've seen, the NMMB is not statistically better than the WRF, but that doesn't stop NCEP from making sure all of the work done by the research sector falls flat at the feet of NOAA heads and never gets incorporated into operations.

Yeah, I'm all too aware of the political issues surrounding this decision. It's unfortunate, because despite my being biased towards MPAS since I work with it, it does seem like the better choice of the two model cores. I wasn't all too impressed with the colloquium guest speaker we recently had from GFDL preaching the gospel of FV3.

If MPAS managed to get chosen over FV3, I think it would be a major victory for both the research and operational communities. That said, WRF-ARW managed to maintain relevance after NCEP's decision to develop the NMMB, so at minimum I hope that MPAS can hold on if it isn't chosen.

As far as my understanding goes with the NMMB, they took shortcuts in the model physics to speed it up. From personal conversations with folks at NSSL and SPC, they hardly give any thought to the NMMB convection-allowing models due to the unphysical nature of most solutions that come from them. It's unfortunate, because if NCEP repeats history, the operational community will likely end up with a model whose forecasts they don't value, again.
 
One reason, I would guess, why long range models may prefer a spectral solver is that numerical diffusion is minimized. Any thoughts on how numerical diffusion affects this model compared to others?
 
Very interesting thread. Thanks for sharing!

Are you aware of any studies looking at the error/verification stats of MPAS or FV3 vs NAM/GFS?

I assume that MPAS parameterizes convection at all scales? Or does it try to allow convection explicitly when it moves into the higher-resolution segments of the globe?
 
I wonder whatever happened to the FIM model. I thought there was talk about that replacing the GFS at one point. The FIM model has a nearly identical grid (icosahedral horizontal grid) to MPAS, too.
 
MClarkson, I heard today that MPAS has scale-dependent parameterizations. At the 3 km resolution being run in this version of MPAS, it would be parameterizing convection only at the larger scales that are not over the US. However, other parameterization schemes may be playing a role over the US. I'm not involved with the project, but rather just repeating what I heard.
 
The FIM (or, more accurately, its nonhydrostatic variant, the NIM) was a candidate for the GFS replacement. There's a matrix out there somewhere that you could use to infer that the NIM ranked third behind FV3 and MPAS among the candidates. Cliff Mass' blog claims the ESRL group (developers of the FIM/NIM) would prefer MPAS to be chosen.
 
Very interesting thread. Thanks for sharing!

Are you aware of any studies looking at the error/verification stats of MPAS or FV3 vs NAM/GFS?

I assume that MPAS parameterizes convection at all scales? Or does it try to allow convection explicitly when it moves into the higher-resolution segments of the globe?

MPAS has scale-aware parameterization when it comes to convection, using a clever weighting function that multiplies the parameterized tendency by 0 at scales <= 4 km. There's then a transition region in the "convective gray area" where convection is part explicit, part parameterized, before going fully parameterized at coarser scales. This was their means of solving that problem - pretty neat if I may say so. Bill Skamarock talked about it when he came to OU for a colloquium last May. I'll see if I can dig anything up online, though. All the other usual parameterizations apply the same as they would in any WRF model. Granted, I'm pulling this all from memory, so there might be a missing piece or two, but this is at least how I remember it being explained to me.
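
I don't know the exact functional form MPAS uses, but conceptually the weighting is something like this sketch (the linear ramp and the 15 km upper cutoff are my guesses from how it was described; only the "zero at <= 4 km" part comes from the talk):

```python
import numpy as np

def convective_weight(dx_km, explicit_below=4.0, parameterized_above=15.0):
    """Fraction of the convective tendency taken from the parameterization.

    0 at grid spacings <= 4 km (fully explicit convection), 1 at coarse
    spacings (fully parameterized), ramping linearly across the
    "convective gray area" in between. The ramp shape and upper cutoff
    are assumptions for illustration.
    """
    w = (dx_km - explicit_below) / (parameterized_above - explicit_below)
    return float(np.clip(w, 0.0, 1.0))

for dx in (3, 4, 8, 12, 15, 50):
    print(f"dx = {dx:2d} km -> parameterized fraction = {convective_weight(dx):.2f}")
```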

I can't speak to anything regarding FV3 as I am not in the loop with that camp.

As far as your verification question, the answer right now is no. Technically speaking, that's what I'm doing. I've looked at a few basic error metrics comparing it to the GFS, but verification itself is a hard problem. What field do you verify? What about a forecast is most valuable to you? What counts as valuable? What even counts as "good" or "useful"? These aren't trivial questions, and my answer to most of them is "I don't know." Common metrics are things like 500 mb height anomaly correlation, but I'm of the mindset that there's a lot more to a skillful forecast than 500 mb heights. Still, I tried some of these "basic" metrics but never really found anything particularly useful or conclusive.
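
For reference, the 500 mb anomaly correlation I mentioned is simple to compute; a bare-bones version (without the area weighting operational centers usually apply) looks like this:

```python
import numpy as np

def anomaly_correlation(fcst, obs, clim):
    """Anomaly correlation coefficient for a field like 500 mb height.

    Anomalies are departures from climatology on the same grid:
    ACC = sum(f'o') / sqrt(sum(f'^2) * sum(o'^2)).
    """
    fp = fcst - clim
    op = obs - clim
    return np.sum(fp * op) / np.sqrt(np.sum(fp ** 2) * np.sum(op ** 2))

# Toy example with synthetic fields on a 2.5-degree global grid
clim = np.full((73, 144), 5500.0)              # flat 500 mb climatology (m)
obs = clim + 60.0 * np.random.randn(73, 144)   # fake observed anomalies
fcst = obs + 30.0 * np.random.randn(73, 144)   # fake forecast close to obs
print(f"ACC = {anomaly_correlation(fcst, obs, clim):.3f}")
```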

Because my work is being done with the help and permission of NSSL, and one of the key features of the model is the convective scale, we've focused more on verifying the convective-scale forecasts. The biggest question is actually figuring out whether or not we're even getting anything useful beyond Day 2 at those scales. Because the GFS and NAM parameterize convection, there is no direct comparison. The closest comparison I can think of would be verifying the QPFs from each model - which might be worth looking into. As for the convective forecasts, tonight I just finished a comparison of the convective forecasts from MPAS to those of the NCAR ensemble. I'll have to hold off on posting those until I've had a chance to show them to and discuss them with my advisors, but after that's done I should be able to share them here.

I wonder whatever happened to the FIM model. I thought there was talk about that replacing the GFS at one point. The FIM model has a nearly identical grid (icosahedral horizontal grid) to MPAS, too.

If I remember correctly, FIM was a contender (one of six, I believe?), but fell out of the running some time ago. The final two are MPAS and FV3. Honestly, though, I was always amazed that FIM even worked. I remember seeing a few presentations on it and being surprised they were able to get reasonable forecasts out of it. While its horizontal coordinate was similar, they were doing some strange things in the vertical - I think it was a hybrid eta/isentropic grid, if I remember correctly.
 
True, verification stats are often quite user-dependent, and QPF can be pretty hard to pin down because it is by its very nature erratic, especially in convective regimes. I've always considered surface wind, temperature, and dew point to be the easiest and statistically most significant fields to verify... easily quantifiable, fairly smooth across space and time, with a very large and accurate observation database.

Of course, the best model for one variable or geographic region is often not the best somewhere else. I was just wondering what kind of work had been done and if any blatant trends had been noticed.

Good luck with your research!
 
I was quite surprised when EMC went the NMMB route and left WRF out of the loop. I'm just not really sure what is going on with EMC lately... It seems like they are detaching themselves from the research community, which is harmful, IMHO. I've read nothing positive about the NMMB vs. anything else out there, which I think is very telling. Great discussion everyone, thank you! I've been impressed with MPAS and use it often, especially with the spring experiment going on.
 