Pretty much the only thing I think AI could be useful for: forecasting the weather based on tracking massive amounts of data. I look forward to seeing how this particular field of study improves.
Bonus points: AI weather modeling, for once, saves energy relative to physics models. Pair it with some sort of lightweight physical model to keep the hallucinations at bay, and you’ve got a good combo.
As much data as they have, it seems like they could use more. Just as an example, I have two weather apps on my phone, and for the same city, checked back to back, they give me two different temperatures. Both of those will also differ from the thermometer at my house. What if each city had, say, 50 sensors spread all over it that reported in, and the average of all 50 were taken to get a more accurate number? And that’s just for temperature.
Your two apps are probably reporting forecast data from two different models.
You’re asking the obvious question, but it’s not quite that simple.
The temperature at different points in your city can differ widely. Averaging 50 sensors would wash out that difference and give a single number for the whole city, which would be a loss of accuracy. What you really need are forecasts at multiple points in your city, based on a weighted average of those 50 sensors that also takes into account other factors such as altitude, insolation, type of buildings or vegetation, albedo, and I forget what else. For example, if your forecast point is on a road, tarmac retains heat differently than grass does, so snow will melt faster on a road than on a field. For another example, buildings often emit heat in the winter, and that affects how much snow accumulates at a given point. Look up the urban heat island effect for more on this.
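To make the weighted-average idea concrete, here’s a toy sketch using inverse-distance weighting: sensors near the forecast point count more than distant ones, so a downtown query stays close to the downtown reading instead of being dragged toward the city-wide mean. The function name, sensor layout, and numbers are all made up for illustration; real forecast systems also fold in altitude, land cover, and the other factors mentioned above.

```python
import math

def idw_temperature(sensors, point, power=2.0):
    """Estimate temperature at `point` as the inverse-distance-weighted
    average of sensor readings.

    sensors -- list of ((x, y), temp_c) tuples (illustrative data)
    point   -- (x, y) query location
    power   -- how sharply influence falls off with distance
    """
    num = 0.0
    den = 0.0
    for (sx, sy), temp in sensors:
        d = math.hypot(sx - point[0], sy - point[1])
        if d == 0.0:
            return temp  # a sensor sits exactly at the query point
        w = 1.0 / d ** power
        num += w * temp
        den += w
    return num / den

# Two sensors: a warm downtown reading and a cooler park reading.
sensors = [((0.0, 0.0), 30.0), ((10.0, 0.0), 24.0)]

# A point near downtown lands close to 30 degrees; the plain
# city-wide average of 27 would hide that local difference.
print(round(idw_temperature(sensors, (1.0, 0.0)), 1))
```

The same idea extends to the other factors: instead of weighting purely by distance, you weight by how similar each sensor’s surroundings (altitude, surface type, and so on) are to the forecast point.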