Adding Value

Today the CI (convective initiation) forecast team opted to forecast for northern Nebraska, much of South Dakota, southern and eastern North Dakota, and far west-central Minnesota for the 3-hr window of 21-00 UTC. The general setup was one with an anomalously deep trough ejecting northeast over the intermountain West. Low-level moisture was not particularly deep, as a strong, blocking ridge had persisted over the southern and eastern United States for much of the past week. With that said, the strength of the ascent associated with the ejecting trough, the presence of a deepening surface low, and a strengthening surface front were such that most numerical models insisted that precipitation would break out across the CI forecast domain. The $64,000 question was, “Where?”

One model in particular, the High-Resolution Rapid Refresh (HRRR), insisted that robust storm development would occur across central and northeastern South Dakota during the late afternoon hours. It just so happened that this CI episode fell outside the CI team’s forecast of a “Moderate Risk” of convective initiation. As the CI forecast team pored over more forecast information than any single individual could possibly retain, we could not make sense of how or why the HRRR was producing precipitation where it was. The environment would (should?) be characterized by decreasing low-level convergence as the low-level wind fields responded to the strengthening surface low to the west. Furthermore, the surface front (and other boundaries) was well removed from the area. Still, several runs of the HRRR insisted storms would develop there.

It’s situations like this where humans can still improve upon storm-scale numerical models. By monitoring observations, and using the most powerful computers in existence (our brains), humans can add value to numerical forecasts. Knowing when to go against a model, or knowing when it is important to worry about the nitty-gritty details of a forecast, are important traits that good forecasters have to have. Numerical forecasts are rapidly approaching the point where, on a day-to-day basis, humans are hard-pressed to beat them. And, in my opinion, forecasters should not be spending much time trying to determine if the models are wrong by 1 degree Fahrenheit for afternoon high temperatures in the middle of summer in Oklahoma. Even if the human is correct and improves the forecast, was there much value added? Contrast this with a forecaster improving the forecast by 1F when dealing with temperatures around 31-33F and a precipitation forecast. In this case the human can add a lot of value. Great forecasters know when to accept numerical guidance, and when there is an opportunity to improve upon it (and then actually improve it). Today, that’s just what the CI forecast team did. The HRRR was wrong in its depiction of thunderstorms developing in northeast South Dakota by 00 UTC (7 PM CDT), and the humans were right…

…and as I write this post at 9:30 PM CDT, a lone supercell moves slowly eastward across northeastern South Dakota. Maybe the HRRR wasn’t as wrong as I thought…
