Boris Konon
EF4
See attachment.
It is said that ML models do not satisfy the equations of motion or thermodynamics, which can lead to unrealistic output and egregious errors at times. Physics-based models are not immune to these same problems, but we often know why they occur, for example because of the inherent limits of how well the atmosphere can be simulated. Does the same apply to ML models? That is, would we be able to detect why they are wrong for a given forecast?
How far does ML models' failure to satisfy the governing equations go? Do they violate the Navier-Stokes equations, for instance?
The paper states:
"New forecast methods based on statistical estimation, including neural networks
and ML, are neither designed nor constrained to yield dynamically coherent,
physically consistent evolutions of the atmosphere."
The above would appear to be a huge problem.
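To make the "physically consistent" point concrete: one way people probe this is to compute diagnostic residuals of known constraints on a model's output fields. The sketch below checks horizontal divergence of a wind field as one such constraint. The winds here are random stand-in data and the grid spacing is assumed; a real check would use actual ML forecast output on its native grid.

```python
import numpy as np

# Stand-in "forecast" winds (m/s) on a hypothetical 50x50 grid;
# an ML model's actual output fields would go here.
rng = np.random.default_rng(0)
ny, nx = 50, 50
dx = dy = 25_000.0  # grid spacing in metres (assumed)
u = rng.standard_normal((ny, nx))
v = rng.standard_normal((ny, nx))

# Finite-difference horizontal divergence: du/dx + dv/dy.
div = np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)

# A dynamical solver enforces its balance constraints by construction;
# a purely statistical model offers no such guarantee, so the size of
# this residual is a useful consistency diagnostic.
rms_div = np.sqrt(np.mean(div**2))
print(f"RMS horizontal divergence: {rms_div:.3e} s^-1")
```

Similar residual checks could be run for other constraints (mass continuity, hydrostatic balance), which speaks to the question above about whether we can detect where an ML forecast goes physically wrong.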
The paper also discusses ML models lacking resolution detail and giving a broader overview. That is fine in itself, but modeling has come so far that getting the broad strokes right, say synoptically, is no longer the issue. What matters most now are the details and fine-tuning at a more local level and on shorter time frames. That is what the public and partners want and demand.
How vulnerable are ML models to chaos, i.e., sensitivity to initial conditions? More or less so than physics-based models?
Another item the paper suggests is that ML models cannot exist, or do well and improve, without physics-based models. This raises the question: how large a statistical database of weather history and analogs do we need for an ML model to perform better?