More Data Visualization

As jimmyc touched on in his last post, one of the struggles facing the Hazardous Weather Testbed is how to visualize the incredibly large datasets being generated. With well over 60 model runs available to HWT Experimental Forecast Program participants, the ability to synthesize large volumes of data very quickly is a must. Historically we have used a meteorological visualization package known as NAWIPS, the same software the Storm Prediction Center uses for its operations. Unfortunately, NAWIPS was not designed to handle datasets as large as the ones currently being generated.

To help mitigate this, we utilized the Internet as much as possible. One webpage I put together is a highly dynamic CI forecast and observations page. It allowed users to create 3-, 4-, 6-, or 9-panel plots showing CI probabilities from any of the 28 ensemble members, the NSSL-WRF, or observations. Furthermore, users had the ability to overlay the raw CI points from any of the ensemble members, the NSSL-WRF, or observations to see how those points contributed to the underlying probabilities. We even enabled overlays of the human forecasts so they could be compared with any of the numerical guidance or the observations. This webpage turned out to be a huge hit with visitors, not only because it allowed for quick visualization of a large amount of data, but because it also allowed visitors to interrogate the ensemble from anywhere, not just in the HWT.

One of the things we could do with this website was evaluate the performance of individual members of the ensemble. We could also evaluate how varying the PBL schemes affected the probabilities of CI. Again, the website is a great way to sift through a large amount of data in a relatively short amount of time, and a sketch of the kind of panel comparison it produced follows below.
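For readers curious what that sort of multi-panel comparison looks like under the hood, here is a minimal sketch in Python with matplotlib of a CI probability panel plot with the raw CI points overlaid. This is not the actual webpage code (which was an interactive, server-driven page); the function name, member labels, and synthetic data here are placeholders for illustration only.

```python
# Minimal sketch (not the actual HWT webpage code): an N-panel plot of CI
# probability fields with raw CI points overlaid, in the spirit of the
# 3/4/6/9-panel comparisons described above. All data here are synthetic.
import numpy as np
import matplotlib.pyplot as plt

def plot_ci_panels(fields, points, ncols=2):
    """fields: {label: 2-D probability array (0-1)}
    points: {label: (x_indices, y_indices)} of raw CI points."""
    labels = list(fields)
    nrows = int(np.ceil(len(labels) / ncols))
    fig, axes = plt.subplots(nrows, ncols, figsize=(4 * ncols, 3 * nrows),
                             squeeze=False)
    for ax, label in zip(axes.ravel(), labels):
        im = ax.imshow(fields[label], origin="lower", vmin=0, vmax=1,
                       cmap="viridis")
        px, py = points.get(label, ([], []))
        ax.scatter(px, py, s=8, c="red")          # raw CI points on top
        ax.set_title(label)
        fig.colorbar(im, ax=ax, label="CI probability")
    # Hide any unused panels (e.g. 3 members on a 2x2 layout).
    for ax in axes.ravel()[len(labels):]:
        ax.set_visible(False)
    fig.tight_layout()
    return fig

# Example with synthetic data standing in for ensemble members / observations.
rng = np.random.default_rng(0)
grid = lambda: rng.random((40, 60))
fields = {"Member 01": grid(), "Member 02": grid(),
          "NSSL-WRF": grid(), "Observations": grid()}
points = {k: (rng.integers(0, 60, 15), rng.integers(0, 40, 15)) for k in fields}
plot_ci_panels(fields, points).savefig("ci_panels.png")
```

The real page did this interactively in the browser, but the idea is the same: put every member, the NSSL-WRF, and the observations side by side so differences (for example, between PBL schemes) jump out at a glance.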

Adding Value

Today the CI (convective initiation) forecast team opted to forecast for northern Nebraska, much of South Dakota, southern and eastern North Dakota, and far west-central Minnesota for the 3-hr window of 21-00 UTC. The general setup was one with an anomalously deep trough ejecting northeast over the intermountain West. Low-level moisture was not particularly deep, as a strong blocking ridge had persisted over the southern and eastern United States for much of the past week. With that said, the strength of the ascent associated with the ejecting trough, the presence of a deepening surface low, and a strengthening surface front were such that most numerical models insisted precipitation would break out across the CI forecast domain. The $64,000 question was, “Where?”

One model in particular, the High-Resolution Rapid Refresh (HRRR), insisted that robust storm development would occur across central and northeastern South Dakota during the late afternoon hours. It just so happened that this CI episode fell outside the CI team’s forecast of a “Moderate Risk” of convective initiation. As the CI forecast team pored over more forecast information than any single individual could possibly retain, we could not make sense of how or why the HRRR was producing precipitation where it was. The environment would (should?) be characterized by decreasing low-level convergence as the low-level wind fields responded to the strengthening surface low to the west. Furthermore, the surface front (and other boundaries) was well removed from the area. Still, several runs of the HRRR insisted storms would develop there.

It’s situations like this where humans can still improve upon storm-scale numerical models. By monitoring observations, and by using the most powerful computers in existence (our brains), humans can add value to numerical forecasts. Knowing when to go against a model, and knowing when it is important to worry about the nitty-gritty details of a forecast, are traits that good forecasters have to have. Numerical forecasts are rapidly approaching the point where, on a day-to-day basis, humans are hard pressed to beat them. And, in my opinion, forecasters should not be spending much time trying to determine whether the models are wrong by 1 degree Fahrenheit for afternoon high temperatures in the middle of summer in Oklahoma; even if the human is correct and improves the forecast, little value is added. Contrast this with a forecaster improving the forecast by 1°F when temperatures are around 31-33°F and precipitation is forecast. In that case the human can add a lot of value. Great forecasters know when to accept numerical guidance, and when there is an opportunity to improve upon it (and then actually improve it). Today, that’s just what the CI forecast team did. The HRRR was wrong in its depiction of thunderstorms developing in northeast South Dakota by 00 UTC (7 PM CDT), and the humans were right…

…and as I write this post at 9:30 PM CDT, a lone supercell moves slowly eastward across northeastern South Dakota. Maybe the HRRR wasn’t as wrong as I thought…
