Last week we had the biggest snowstorm the area has seen in at least ten years – but how did technology play a role in forecasting this event, and what’s with these “models” the newscasters kept talking about?
Weather models – formally known as Numerical Weather Prediction (NWP) – are simulations of the atmosphere, the system that produces our weather. Each simulation combines known facts about the atmosphere (chiefly the equations describing how its fluids behave) with the initial conditions of the atmosphere at the start of the run. Meteorology, like any field of science, is in constant development. No simulation is perfect, and those imperfections become errors in the forecast. The errors come from two primary sources: limited computing ability and imperfect initial conditions.
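Conceptually, every NWP run has the same shape: start from an observed state, then apply the physics over and over to march it forward in time. Here is a drastically simplified Python sketch of that loop; the single temperature variable and the "physics" rule are purely illustrative inventions, not anything a real model uses:

```python
# A deliberately oversimplified sketch of the shape of an NWP run.
# Real models evolve millions of values (pressure, wind, humidity, ...)
# with fluid equations; this toy tracks one made-up temperature.

def physics(state, dt_hours):
    """Stand-in for the atmospheric equations: toy steady cooling."""
    return {"temp_c": state["temp_c"] - 0.5 * dt_hours}

state = {"temp_c": 4.0}        # initial conditions, taken from observations
for hour in range(1, 7):       # march the simulation forward hour by hour
    state = physics(state, dt_hours=1)
    print(f"+{hour}h forecast: {state['temp_c']:.1f} °C")
```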
The set of equations used in weather models are partial differential equations. This has two consequences. First, the result of a model run is highly dependent on the initial observations (pressure, wind speed and direction, temperature, etc.). One small error in the observational data can throw the entire forecast out of whack. Second, computers can only perform basic arithmetic operations such as addition and multiplication. They cannot solve a differential equation directly, only approximate the answer. As a result, the “solution” given by a weather model is never exact: at some point the computer has to cut off decimal places, and no machine has the computing ability to track every component of the atmosphere completely.
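To see how touchy these equations are, consider the Lorenz ’63 system, a famous toy model of atmospheric convection (not the equations the GFS or Euro actually solve). The sketch below approximates it with crude forward-Euler steps and nudges one initial value by a hundred-millionth; the two runs eventually disagree completely:

```python
# A toy illustration (not a real weather model): the Lorenz '63 system,
# a simplified model of atmospheric convection. We approximate its
# differential equations with small forward-Euler steps and perturb one
# initial value by a hair to show how quickly the two runs drift apart.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step: an approximation of the equations, never exact."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)          # "true" initial conditions
b = (1.0 + 1e-8, 1.0, 1.0)   # same observation with a tiny measurement error

for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step}: x_true={a[0]:+9.4f}  x_perturbed={b[0]:+9.4f}")
# Early on the two runs agree to many decimal places; by the end they
# bear no resemblance, even though the physics was identical.
```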
One way a model deals with this is by limiting its resolution. Instead of calculating conditions at every point in the atmosphere, it calculates them on a grid. For example, a model might only compute the atmospheric conditions at a point every 20 kilometers horizontally and every kilometer vertically. By reducing the number of points at which it has to calculate, the model can simulate the atmosphere much more quickly. Naturally, this introduces inaccuracies, because the model has to estimate the terrain and the state of the fluid at the points in between. One issue arises at coastlines: the atmosphere behaves very differently over land than over water, so when the boundary falls between grid points, the simulation becomes less accurate. Accuracy can be improved by making the distance between points smaller, but that comes at the trade-off of increased computing time.
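A hypothetical one-dimensional example makes the coastline problem concrete. Assume the real land/water boundary sits at the 47 km mark (a made-up number); a model that only samples the surface every 20 km can misplace it by many kilometers, while finer grids pin it down at the cost of more points to compute:

```python
# A hypothetical 1-D illustration of grid resolution. The coastline sits
# at the 47 km mark: land to the west, water to the east. A model only
# "sees" the surface at its grid points, so a coarse grid can misplace
# the boundary by up to a full grid spacing.

COASTLINE_KM = 47  # hypothetical position of the land/water boundary

def surface_at(km):
    """Ground truth: what is actually under the atmosphere at this point."""
    return "land" if km < COASTLINE_KM else "water"

for spacing_km in (20, 5, 1):
    grid = list(range(0, 101, spacing_km))
    # The model's best guess of the coastline: its first grid point over water.
    first_water = next(p for p in grid if surface_at(p) == "water")
    error_km = abs(first_water - COASTLINE_KM)
    print(f"{spacing_km:>2} km grid: {len(grid):>3} points, "
          f"coastline placed at {first_water} km (off by {error_km} km)")
# Halving the spacing doubles the point count along one dimension, and a
# real model pays that cost in every horizontal direction plus the
# vertical, so finer grids get expensive fast.
```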
Two weather models come up frequently in the news: the GFS (Global Forecast System) and the Euro (from the European Centre for Medium-Range Weather Forecasts, or ECMWF). These are the two most renowned weather models. Each has slight variations in the equations it uses, how they are calculated, and how observational data is interpreted. Many meteorologists say they prefer the European model because it is more accurate. While models alone should never be used to make a forecast, the European model does have a better track record than the GFS (see Hurricanes Sandy and Joaquin, among other examples). Why does the Euro come out ahead? The GFS is a product of NOAA (the National Oceanic and Atmospheric Administration), an organization that does not focus primarily on building a weather model. The European model, by contrast, comes from an organization whose sole objective is producing the best weather model; it does not have to diversify. Additionally, the European model runs on a more powerful supercomputer than the GFS.
What’s next for weather models? On the technology end, improvement is mainly going to come from the growth of computing power. Numerical Weather Prediction is a computationally heavy field: its accuracy will keep climbing as supercomputers get faster. On the science end, it simply needs more research into how the atmosphere works, just like any field of science.
As a final reminder: models are only one component used by a skilled meteorologist. General knowledge of the atmosphere and of weather patterns also comes into play when giving a forecast. And in the end, nothing beats reading your weather rock.