Last month, we introduced you to ForecastWatch client Brent Shaw, who serves as vice president of software engineering and content for Iteris – the global leader in applied informatics for transportation and agriculture, which specializes in using big data analytics to solve complex, wide-ranging problems.
Brent describes himself as a lifelong “weather weenie”, who owned his own weather forecasting kit as a kid and whose life path was likely solidified as a third grader when a tornado destroyed his Pleasant Hill, Missouri elementary school – while he was in it. His love of weather carried him through college and into the U.S. Air Force as a weather officer, and finally into the private sector doing applied research and development, where he solves problems with one foot in operations and the other in research. Through it all, Brent combined his love of weather with his interest in computers and programming, working with numerical weather models and all forms of raw observational data to make weather forecasts more accurate and relevant. By transforming weather data into real decision guidance for operations that are sensitive to weather and ground conditions, Brent’s team at Iteris offers clients in the agriculture and transportation industries more pragmatic solutions rather than just being another provider of weather data.
This month, we continue our conversation with Brent, who shares his insights on trends in forecast accuracy and machine learning.
Do you predict big leaps in the future regarding accuracy of weather forecasting?
If we’re talking about what’s going to make weather forecasting more accurate, I always say we’re on the asymptotic part of the curve right now, where it takes a lot of effort to realize incremental improvements. In that context, you have to be much more careful to understand the ROI of your R&D efforts so you can properly prioritize where the most effective work can be done to advance our capabilities.
We know there are limits to predictability, but we don’t know where those limits are. It feels like we’re approaching those limits with respect to forecasting specific conditions on specific days into the future, but you never want to say never. Studies have shown that weather forecasts gain about one day of useful lead time every 10 years – today’s five-day forecast is roughly as accurate as the four-day forecast was a decade ago. We’re probably even progressing a little faster than that now.
That being said, within the 10-day forecast window, the near-term forecasts are already pretty accurate, and ongoing work continues to incrementally improve them. But it’s hard to envision a silver bullet that’s going to revolutionize the process to the point of perfect forecasts, or of predicting specific conditions at specific times and locations more than a couple of weeks out.
What are your thoughts regarding the increasing number of weather sensors and the push for GPU computers?
There’s a lot of hay being made about more sensors and higher resolution, and GPU technology certainly helps with the latter. But given the state of the models, the highest-resolution forecast doesn’t necessarily produce the most accurate forecast. It’s all very dependent on the problem you’re trying to solve.
In terms of sensors and the Internet of Things – meteorologists want as much data as we can get, but where we really need sensing isn’t at the ground level. What’s required is more, and more accurate, sensing of the 3D volume of the atmosphere. Also, more is not always better. More data that is subject to more error can have a real detrimental effect. What we need is higher-quality data that spans more depth of the atmosphere, plus good ground-truth information that actually measures what we are trying to predict at the ground level.
There’s a lot of conversation about capturing pressure data from phone sensors – in some cases that may help, and in some cases that may hurt. It’s also an example of where we may have a lot of data, but it is of questionable quality, and pressure really isn’t a parameter that means much to anybody, even though it is a fundamental term in our atmospheric equations. Where it helps, it’s incremental – but it’s not revolutionary. I don’t believe it’s going to revolutionize forecast accuracy, because even if we had perfect observations everywhere, the models themselves are still subject to fundamental errors that may be larger than the improvements. So you still have to understand some fundamentals of the atmosphere to improve the models, and you need quality over quantity of data.
So where do you see the “next big thing” in forecasting?
Our understanding of atmospheric processes continues to grow and computers are getting faster. There are more open-source libraries to help people do machine learning techniques faster. But where those help isn’t necessarily making the weather forecast itself more accurate. We need to make the information more valuable.
For example, we can match historic weather and soil data with a particular crop location, variety, yield and management over several years to develop better models that turn the predicted weather data into a more relevant prediction of what a farmer or organization really needs to know. We want to answer questions like, “What is the best time in the next few weeks to apply this product?” or “When and how much should I irrigate?” This is real predictive weather analytics: getting to the actual answer that is needed without having to think about the weather directly. We have a long history of doing this to help states manage their snow and ice treatment operations, and we have moved that same concept into agriculture.
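To make the idea concrete, here is a minimal sketch of how a forecast can be turned into an irrigation answer rather than a weather report. This is purely illustrative – it is not Iteris’s model, and all names, thresholds, and coefficients are hypothetical – but it shows the shape of the transformation Brent describes: a soil-water balance driven by forecast rain and crop water use, producing a “when and how much” recommendation.

```python
# Illustrative sketch only (not Iteris's actual models): turn a daily
# weather forecast into an irrigation recommendation via a simple
# soil-water balance. All thresholds and values here are hypothetical.

def recommend_irrigation(soil_moisture_mm, forecast_days,
                         field_capacity_mm=150.0, trigger_frac=0.5):
    """Return (day_index, irrigation_mm) for the first day the soil-water
    balance is projected to drop below the trigger level, or None."""
    balance = soil_moisture_mm
    for day, fc in enumerate(forecast_days):
        # Daily water balance: add forecast rain, subtract crop water use
        # (evapotranspiration); excess above field capacity drains away.
        balance = min(balance + fc["rain_mm"] - fc["et_mm"], field_capacity_mm)
        if balance < trigger_frac * field_capacity_mm:
            # Recommend refilling to field capacity on the deficit day.
            return day, round(field_capacity_mm - balance, 1)
    return None

forecast = [
    {"rain_mm": 0.0, "et_mm": 6.0},
    {"rain_mm": 2.0, "et_mm": 6.5},
    {"rain_mm": 0.0, "et_mm": 7.0},
    {"rain_mm": 0.0, "et_mm": 6.0},
]
print(recommend_irrigation(90.0, forecast))  # -> (2, 77.5): irrigate on day 2
```

The point is that the farmer never sees the weather forecast directly – only the answer to the question they actually have.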
I do think there’s a relatively large space to apply machine learning and artificial intelligence. But I don’t see the primary benefit being in making our base weather forecasts more accurate. Certainly there will be techniques that incrementally improve forecasts over time, and we may accelerate that pace – one day every 10 years, perhaps – but I don’t see a revolution in which, sometime in the next five to 10 years, forecasts suddenly become orders of magnitude more accurate than what we have today.
Where I think we are really starting to leverage machine learning is in applying weather forecasts in ways we haven’t before, to get to the actual answers needed – so that a person is not mentally extrapolating how the forecast weather impacts the decisions they need to make. We’ll see more machine-to-machine advancements: not just sensors coming into our system, but forecast algorithms that take the weather and soil data and create an answer that’s communicated to a machine in the field. We won’t have to have somebody turn the irrigation system on and off – the system can upload a prescription for variable-rate irrigation. There’s always going to be a role for a human, but the roles will change.

There’s no perfect predictive system, but weather is actually one of the great success stories in predictive systems. When it comes to day-in, day-out weather forecasting for the next 10 days, it’s all governed pretty much by physics and dynamics that are well understood and coded into solvable equations. There’s nothing a human can do that’s going to substantially affect that 10-day forecast.
As you start trusting the forecast and building tools to know when to trust it in an objective sense (with ensemble prediction and probabilistic tools), you can drive automated decisions that are going to win more than they lose.
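One classic way to formalize “winning more than you lose,” which Brent’s ensemble-and-probability point gestures at, is the standard cost/loss decision rule from decision theory: act whenever the event probability exceeds the ratio of the cost of acting to the loss of not acting. The sketch below applies it to a hypothetical road-treatment decision; the ensemble values and costs are invented for illustration.

```python
# Hedged sketch of a cost/loss decision rule on an ensemble forecast
# (illustrative values only). The classic expected-value result: act
# whenever P(event) exceeds the cost/loss ratio C/L.

def should_act(ensemble_temps_c, threshold_c, cost, loss):
    """Decide whether to pre-treat roads, given ensemble temperature forecasts.

    ensemble_temps_c: forecast road temperatures, one per ensemble member.
    threshold_c: event threshold (e.g. freezing at 0 C).
    cost: cost of acting (treating the road).
    loss: loss if the event occurs and we did not act.
    """
    # Estimate the event probability as the fraction of ensemble members
    # at or below the threshold.
    p_event = sum(t <= threshold_c for t in ensemble_temps_c) / len(ensemble_temps_c)
    # Acting has lower expected cost than not acting when p_event > C/L.
    return p_event > cost / loss

members = [-1.2, 0.4, -0.3, 1.1, -2.0, 0.2, -0.8, 1.5, -0.1, 0.9]
# 5 of 10 members at or below freezing -> p = 0.5; cost/loss = 1/5 = 0.2
print(should_act(members, 0.0, cost=1.0, loss=5.0))  # True: treat the roads
```

Over many nights, a rule like this loses a little on false alarms but avoids the large losses from misses – which is exactly the objective sense of “trusting the forecast” that ensembles make possible.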
I think there’s a fear in the industry, especially in the agriculture sector, that we want to automate everything and take agronomists out of the loop – no, absolutely not. What we want to do is let agronomists use their brainpower on things they haven’t had time to think about in the past. We are looking to empower them to have even greater impact on feeding and clothing our world. Our improved understanding of atmosphere–ground interactions, combined with more powerful computers, has made our forecasts accurate enough for real, valuable decision making when properly paired with the right multi-disciplinary expertise.