We have amazing clients and we’re happy when we get the chance to introduce one of them to you. Recently, we spoke with Iain Russell, a career meteorologist since 1987 who serves as the Director of Meteorological Research and Development for The Weather Network – Canada’s most popular weather and information resource, which delivers its content via TV, web and smartphone apps.
Iain heads a small but amazingly industrious and innovative team that keeps finding new ways to use ForecastWatch products, which we thought you might find interesting. Iain has also found ways to help us push the boundaries of ForecastWatch. We love a good partnership!
How Did You Come to Use ForecastWatch Online?
We typically do our own forecasting for the most part, but we also wanted quality comparisons to ensure our accuracy and see how our forecasts stacked up against other providers in the marketplace. We thought about collecting other forecasts on our own for the comparisons, but when we sized up the job, we decided the effort and the investment weren't worth the benefit. We know from experience how much has to be poured into the infrastructure for a verification system like ForecastWatch Online. Using ForecastWatch has saved us from having to build out an entire capability and invest in development resources.
How Have Your Needs Changed Since You First Started Working with ForecastWatch?
We began by tracking weather only within Canada, but then moved to a larger global footprint. We worked with ForecastWatch to help expand the product to a more international scope. I think we may have been the first client to push them in a more international direction.
We wanted observations for any location on earth, so we built a system to do that by blending a variety of data sets, ranging from computer forecast models to radar and satellite imagery to lightning data and standard surface observations, combined using interpolation techniques. Now we can generate data for anywhere we want, partly because that's what our customers want, but also because we want to calibrate our forecasts.
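Iain doesn't specify which interpolation technique the team uses, but a common way to estimate a value at a location with no station is inverse-distance weighting over nearby point observations. The sketch below is purely illustrative (the coordinates, values, and the choice of IDW itself are assumptions, not The Weather Network's actual method):

```python
import math

def idw_estimate(target, observations, power=2.0):
    """Estimate a value at `target` (lat, lon) from nearby point
    observations using inverse-distance weighting (IDW).

    `observations` is a list of ((lat, lon), value) pairs, e.g.
    temperatures from surface stations or model grid points.
    """
    num, den = 0.0, 0.0
    for (lat, lon), value in observations:
        # Simple planar distance in degrees; a production system would use
        # great-circle distance and per-source quality weights.
        d = math.hypot(target[0] - lat, target[1] - lon)
        if d < 1e-9:
            return value  # target coincides with a station: use it directly
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Estimate temperature at a point with no station, from three nearby sources.
obs = [((43.7, -79.4), 21.0), ((43.9, -79.2), 19.5), ((43.5, -79.6), 22.0)]
print(round(idw_estimate((43.7, -79.3), obs), 1))
```

The estimate always falls between the smallest and largest input values, which is one reason distance-weighted averages are a safe default for filling spatial gaps.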
The other piece is automatically improving the quality of weather forecasts. We use machine-learning-type approaches based on standard linear regression algorithms. We do that for as many observational locations as we have in order to support our forecasting activities.
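A minimal version of that idea is to regress past observations against the raw forecasts for a station and use the fitted line to calibrate future forecasts. The numbers below are invented for illustration (a forecast with a persistent warm bias of roughly 1.5 to 2 degrees), and the single-predictor fit is a simplification of whatever regression setup the team actually runs:

```python
import numpy as np

# Illustrative history for one station: the raw model runs consistently warm.
raw_forecasts = np.array([18.0, 21.0, 25.0, 15.0, 10.0])   # model output (deg C)
observations  = np.array([16.5, 19.0, 23.5, 13.0,  8.5])   # verifying observations

# Least-squares fit: obs ~ slope * forecast + intercept
slope, intercept = np.polyfit(raw_forecasts, observations, 1)

def calibrate(forecast):
    """Apply the learned linear bias correction to a new raw forecast."""
    return slope * forecast + intercept

print(round(calibrate(20.0), 1))
```

With a slope near 1 and a negative intercept, the fit effectively subtracts the station's learned warm bias from each new forecast; refitting on a rolling window lets the correction track seasonal drift.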
When we first got the international forecast data from the ForecastWatch website, we noticed right away that we had some issues with accuracy. And that was really valuable, because, of course, you never want inaccurate forecasts, but without ForecastWatch's diagnostics on the website, we wouldn't have had a good understanding that improvements could be made. And that made us realize we needed to do something different.
What Did You Decide to Do?
At that time, we decided to build this system called a bias correction and blending system which calibrates our forecasts. And the heart of our forecast is really a consensus blend. We have data coming from a handful of different weather prediction models. We split them up into different data types, so there might be 20-30 different inputs into our forecast. We needed a way to merge them all together and that’s what our bias correction blending system does for us.
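Iain describes the heart of the system as a consensus blend of 20-30 inputs. At its simplest, such a blend is a weighted average where each input's weight reflects its recent verification skill. The input names and weights below are made up to show the shape of the computation, not the actual configuration:

```python
# Hypothetical inputs: (forecast value in deg C, skill-based weight).
# Weights could be derived from each source's recent verification scores.
inputs = {
    "model_a_raw":       (21.4, 0.15),
    "model_a_corrected": (20.1, 0.30),
    "model_b_raw":       (22.0, 0.10),
    "model_b_corrected": (20.5, 0.25),
    "ensemble_mean":     (20.8, 0.20),
}

def consensus(inputs):
    """Merge all inputs into one forecast via a weighted average."""
    total_weight = sum(w for _, w in inputs.values())
    return sum(v * w for v, w in inputs.values()) / total_weight

print(round(consensus(inputs), 2))
```

Because bias-corrected inputs tend to verify better, they would normally earn larger weights over time, which is exactly the kind of feedback loop that monthly verification results make possible.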
Once we did that, we were actually able to track performance over time using ForecastWatch. When we look at the monthly results we get on ForecastWatch Online, we can see how significantly our forecasts have improved. That would have been hard to do without ForecastWatch. We would have built the technology anyway, but actually evaluating whether the technology had a positive impact on our data would have been difficult without the diagnostics from the ForecastWatch website. So that's been a really good tie-in for us.
Thanks, Iain! Keep up the good work.