EWP Week 1 Summary (May 4-7, 2015)

During the first week of the 2015 EWP Spring Experiment, we had forecasters from the Newport, Columbia, Roanoke, Marquette, and San Diego WFOs, as well as a broadcast meteorologist from KETV in Omaha, NE, participate in the Spring Warning Project. With most of the active weather confined to the Southern Plains throughout the week, our CWAs of operation included Topeka, Omaha, Wichita, Midland, Lubbock, Amarillo, Albuquerque, San Angelo, Norman, and Hastings. We had a good mix of marginal and very busy severe weather days, with Wednesday being the most active as severe weather broke out across central Oklahoma.

Participants were able to use all of the demonstration products this week, which included GOES-R and ENI total lightning products. There were many good blog posts written throughout the week highlighting the use of these products in various forecast/nowcast/warning situations. Below is some end-of-the-week feedback on each product from this week's participants:

PGLM

– It was useful yesterday in the Lubbock CWA.
– I have a marine responsibility in my CWA, so the lightning data would be useful for issuing sub-severe forecast products for storms moving towards the coast.
– This lightning information would be useful in my CWA on days when the fire danger is heightened.

Lightning Jump

– I was not sure if a 1- or 2-sigma jump was significant, or if I should wait for a 5- or 6-sigma jump.
– At one point when things were active, I was ignoring the 1-sigma jumps. The 4- and 5-sigma jumps really drew my attention.
– 3-sigma was when I really started paying attention to the storm.
– 1-sigma is probably not even worth a color; I made it transparent.
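
Since several of these comments turn on what a given sigma level means, some context may help: in the sigma-based lightning jump framework (after Schultz et al. 2009), the time rate of change of a storm's total flash rate is compared against the standard deviation of its recent rate changes, and a jump is flagged when that ratio exceeds a threshold (2 sigma in the published algorithm). The Python sketch below illustrates the idea only; the real algorithm's averaging windows, activity thresholds, and quality control are more involved.

```python
import numpy as np

def sigma_level(flash_rates, dt_min=2.0):
    """Sigma level of the latest change in total flash rate.

    A loose sketch of the sigma-based lightning jump idea (after
    Schultz et al. 2009), not the operational configuration.
    flash_rates: total flash rates (flashes/min) at a fixed dt_min
    time step, most recent last; needs several prior periods.
    """
    rates = np.asarray(flash_rates, dtype=float)
    # Time rate of change of the flash rate (DFRDT, flashes/min^2)
    dfrdt = np.diff(rates) / dt_min
    # Variability of the preceding rate changes, excluding the newest
    # value, which is the one being tested for a jump
    sigma = np.std(dfrdt[:-1])
    if sigma == 0.0:
        return 0.0
    return dfrdt[-1] / sigma

# A storm whose total lightning ramps up sharply in the last period:
history = [8, 10, 9, 12, 10, 13, 11, 20]  # flashes/min every 2 min
print(f"{sigma_level(history):.1f}-sigma jump")  # ~4-sigma here
```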

ProbSevere

– It did a good job with discrete cells, but when the mode became more linear, it suffered.
– It really drew my attention to the storms that I should interrogate further.
– For me, MESH in the readout does not need to read out to hundredths of an inch.
– Color/highlight values in the text display as they become more significant. Make the actual probability stand out more too.
– For the color contour, I made it neon on the higher end, and got rid of the lower end. The low end looked almost the same as the higher end.
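
Two of the readout suggestions above (tenths rather than hundredths for MESH, and emphasis that scales with significance) are easy to prototype. Below is a hypothetical text formatter illustrating both; the field names, thresholds, and emphasis markers are invented for illustration and are not part of ProbSevere itself.

```python
def format_readout(prob_pct: float, mesh_in: float, growth: str) -> str:
    """Hypothetical ProbSevere sampling readout (illustrative only)."""
    prob_txt = f"ProbSevere: {prob_pct:.0f}%"
    if prob_pct >= 80:
        prob_txt = f">>> {prob_txt} <<<"  # stand-in for bold/red text
    elif prob_pct >= 50:
        prob_txt = f"** {prob_txt} **"    # stand-in for a highlight
    # MESH rounded to tenths of an inch, per the feedback above
    return f"{prob_txt} | MESH: {mesh_in:.1f} in | growth: {growth}"

print(format_readout(82, 1.2347, "strong"))
# >>> ProbSevere: 82% <<< | MESH: 1.2 in | growth: strong
```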

CI

– Overall, it did a pretty good job throughout the week of highlighting where CI would occur.
– It really works best in a clear environment (no cirrus contamination).
– When you had a cu field developing, it did a nice job of depicting where storms would go.
– I liked using it when we had a lot of boundaries; it did a good job of depicting where along the boundaries storms would go. Especially today, when there were a lot of different boundaries in play, it gave me confidence in where convection would develop and where I should look.
– Having a higher threshold for CI would be more relevant in WFO operations, and less messy.
– With the color table, the lowest probs (deep blues) stood out the most, which is not what you want to see. The higher-end colors were more difficult to see against the light background. Reducing the appearance of the lower probs would make the higher probs stand out more.
– Perhaps you could start out with white to light gray at the low end, and transition to colors at the higher end.
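
That last suggestion is simple to mock up. The matplotlib sketch below builds such a color table, running white through light gray over the low probabilities and transitioning to saturated colors at the high end; the specific breakpoints and colors are illustrative, not an operational AWIPS color table.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Low probabilities recede into the light background; high ones pop.
ci_cmap = LinearSegmentedColormap.from_list("ci_probs", [
    (0.0, "#ffffff"),  # 0%: white, effectively invisible
    (0.4, "#d9d9d9"),  # low probs: light gray
    (0.6, "#fee08b"),  # moderate probs: yellow
    (0.8, "#f46d43"),  # high probs: orange
    (1.0, "#a50026"),  # highest probs: deep red
])

probs = np.random.rand(60, 60)  # stand-in for a CI probability grid
plt.imshow(probs, cmap=ci_cmap, vmin=0.0, vmax=1.0)
plt.colorbar(label="CI probability")
plt.title("Illustrative CI color table (low end de-emphasized)")
plt.show()
```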

GOES-R LAP

– I use the GOES PW field the most.
– I would like to know what data are from the retrievals and what are from the GFS.
– The pixelation in certain areas was an issue, with sharp, unnatural transitions present between adjacent values.

NUCAPS

– It might be a good idea to merge this product with something like LAPS to improve the lower levels.
– I want to set this up as soon as I return to my office.
– I can see myself using this a lot in the winter.
– I especially like the observation-based nature of it.
– I would like to look at this over Lake Superior.
– Anything that gives us a temperature profile will be helpful, especially in the winter.
– In San Diego, it will benefit us during the summer monsoon. Also, the San Diego RAOB is not representative of the mountains in our CWA.
– QC flags would give me more confidence in the soundings, as it is difficult to judge with just the cloud data.
– The RAOBS in my area are not representative of most of my counties, so I often use forecast soundings.

– Bill Line, Week 1 EWP Coordinator and SPC/HWT Satellite Liaison

Weekly Summary (Week 4)

Summary of Operations

Monday:
Team 1:           Pelczynski and Anderson (Norman, OK)
Team 2:           Fowle and Satterfield (Wichita, KS)

Tuesday:
Team 1:           Pelczynski and Anderson (Hastings, NE)
Team 2:           Fowle and Satterfield (North Platte, NE)

Wednesday:
Team 1:           Fowle and Anderson (Louisville, KY; Springfield, MO; Cheyenne, WY)
Team 2:           Pelczynski and Satterfield (Boulder, CO)

Thursday:
Team 1:           Fowle and Anderson (Boulder, CO)
Team 2:           Pelczynski and Satterfield (Pueblo, CO; Huntsville, AL)

Comments on Experimental Products:

vLAPS

– Forecasters really liked the CAPE analysis; it helped them locate boundaries.

– Forecasters felt that the model forecasts didn't add to their skill, but the analysis did.

– Forecasters would like to see the supercell composite and significant tornado parameters in future versions of vLAPS (as well as mesoanalysis products from SPC).

– Forecasters believe vLAPS "overconvects" less than other models. They caution other forecasters not to throw out the forecast because of this tendency.

– Forecasters like the re-locatable domain, especially on big risk days. However, the domain wasn't quite large enough to capture every event (i.e., the 200 x 200 domain is a bit small).

– Forecasters like having a theta-e forecast.

OUNWRF

– Forecasters believe the model is good in a qualitative sense. However, the first run convected too early; the later runs caught up with reality.

– Forecasters thought the placement of developing convection was good, though the first two hours of the high-res models were not as good. Simulated IR cloud brightness offers a quick check on how the model is doing.

– Forecasters would like to see a time ensemble.

– Forecasters like the model going out to 8 hours, because it allows the model to spin up.

– Forecasters suggest the use of “nudging”  to improve the initialization.

– Forecasters suggest coordination for high-res modeling.  There seems to be some redundancy in the models.

GOES-R

Simulated Satellite

– The model missed convection in one case because it missed the cirrus shield.
– Forecasters like the product to help them with the big picture (e.g., shortwaves). It increases their confidence in their forecast.
– That said, forecasters find it hard to put confidence in details. How much do you trust the models?
– Forecasters suggest displaying a combination of SimSat with reflectivity to see what features are associated with the cloud.

– Forecasters feel SimSat is very valuable.

– They would like to see SimSat for the HRRR.
– Forecasters think it is an easy way to spot errors in the model.

NearCast
– Forecasters note that precipitable water / theta-e helped to show where CI would occur (i.e., on strong gradients).  They used the visible satellite in combination with those products to see where the boundary would progress.  This worked well on at least one occasion.
– Another forecaster mentioned using the NearCast theta-e product in comparison with vLAPS CAPE.  They used it to spot boundaries / instability.
– Forecasters note that NearCast is good as a qualitative tool (i.e., where should I focus?).
– The NearCast is good to use before convection, but is not as useful after (given that storms have already fired, and so CI is already established).

– One forecaster preferred the theta-e difference product. She noted that it is better than anything at her office. She also said that it is nice to overlay a theta-e image on satellite or radar. She thinks it's helpful from a forecasting standpoint, because it shows where CI is most likely. After convection formed, she didn't look at it, but it was good for the 3 hrs before CI. She also mentioned that she prefers the NearCast product to the SPC theta-e product (because the latter is too noisy).

– NearCast picked up subtle gradients in moisture. In one instance, this corresponded to showers that went up in Colorado.
– One forecaster mentioned that this product could be useful for cold-air damming or sea breezes.

– One forecaster would like to see a change in the color scale.

– One forecaster didn't see a lot of utility in precipitable water at such high resolution; they tended to focus on theta-e and theta-e difference. Other forecasters disagreed, however.

– Some forecasters think this is a calibration issue. That is, they don't use theta-e difference very often, so they are not sure what it all means. Perhaps, instead of theta-e difference, use CAPE, deep moisture convergence, or frontogenesis. They believe that new algorithms could be helpful. (See the theta-e sketch below.)
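
For reference, here is how the underlying theta-e values can be computed. The sketch below implements the standard Bolton (1980) approximation for equivalent potential temperature and forms a layer difference in the NearCast sense (low-level minus mid-level theta-e, where a large positive difference implies convective instability); the sample pressure levels and temperatures are made up for illustration.

```python
import numpy as np

def theta_e(p_hpa, t_c, td_c):
    """Equivalent potential temperature (K) via Bolton (1980, eq. 43).
    p_hpa: pressure (hPa); t_c, td_c: temperature, dewpoint (deg C)."""
    t_k = t_c + 273.15
    # Vapor pressure at the dewpoint (hPa), Bolton eq. 10
    e = 6.112 * np.exp(17.67 * td_c / (td_c + 243.5))
    # Water vapor mixing ratio (g/kg)
    r = 1000.0 * 0.622 * e / (p_hpa - e)
    # Temperature at the lifting condensation level (K), Bolton eq. 15
    t_l = 2840.0 / (3.5 * np.log(t_k) - np.log(e) - 4.805) + 55.0
    return (t_k * (1000.0 / p_hpa) ** (0.2854 * (1.0 - 0.28e-3 * r))
            * np.exp((3.376 / t_l - 0.00254) * r * (1.0 + 0.81e-3 * r)))

# Illustrative layer difference: warm, moist air below drier mid-levels
low = theta_e(850.0, 20.0, 16.0)    # ~348 K
mid = theta_e(500.0, -10.0, -25.0)  # ~324 K
print(f"theta-e difference: {low - mid:+.1f} K")  # positive => unstable
```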

Convective Initiation
– Forecasters indicate that cloud obscuration – i.e., high cirrus – hindered the product at times.
– Forecasters prefer to look at high values of CI only (strong signals).
– Forecasters would like a quantitative value of growth available (like Cloud Top Cooling), rather than a simple probability.  It would add more value to their interrogation.  (Something to add to the cursor readout, perhaps?)
– Our broadcaster indicated that he could see great value in the CI product for TV.

– One forecaster mentioned that on one day during Week 4, it didn’t fit their conceptual model of how to use the product.
– A forecaster noted that it worked well outside of cirrus shield.  In that case, the CI product was valuable.
– One forecaster mentioned that the output is a little too cluttered – that it confused more than it helped.

ProbSevere
– Forecasters think ProbSevere is a good tool – a very good "safety net".

– They would like to see a little more calibration on some of the thresholds.  Right now, it seems to them to be a hail tool.
– This tool could be very helpful for broadcasters, who may be working alone.
– Forecasters note that the color curve in the 10-40% range is tough to discern.  It’s good for storms that are developing – but not as good for storms that have already developed.
– One forecaster notes that the colors could be problematic for color-blind folks. They suggest potentially using line thickness as a way to convey probability.

– ProbSevere is good for slowly-developing storms; good for hail; poor for wind.  Should the product be referred to as ProbHail?  It’s not as useful in rapidly growing convection (just verifies warning).  The 6 min lag associated with the product makes it harder to make judgments in the case of quickly developing storms.

– Broadcaster likes it from a broadcast standpoint: it helps a broadcaster to multi-task.

– ProbSevere is good as a confirmation tool or regional situational awareness tool, and it could be helpful for updating warnings.

– The forecasters would like to see ProbSevere separated into hail, wind, and tornado probs.

– They can envision a new 4-panel: probability of tornado, wind, hail, and total severe.

– The cursor readout was nice, but one of the forecasters didn't understand the glaciation rate.

– One forecaster didn’t like the cursor readout.

– Another forecaster liked to see the extra information; he suggests that the cursor readout is a matter of personal preference.

Overshooting Tops
– Forecasters saw overshooting tops on visible satellite before the algorithm picked them up.

– They believe that the temporal resolution is too low.

– Different people have different uses for it. WFO forecasters like it for the big picture, but won't use it for storm interrogation.

– Every broadcaster would love it – would find it very helpful.

PGLM / Lightning Jump

– Biggest winner of the Week 4 products.

– "Everyone's favorite" – Kathleen Pelczynski

– The lightning jump algorithm helped tremendously in warning operations. It was very helpful to have 1-min lightning jump updates while waiting for radar volume scans.  These frequent updates certainly impacted warning decisions.

– Forecasters related an anecdote in which the lightning data helped issue a warning early in an explosive environment.

– Broadcasters are very concerned with lightning; if it has lightning, they consider it severe, even without hail.

– One forecaster was still not sure about the calibration of the sigma jumps, and would suggest more in-depth lightning training; many meteorologists don't understand the underlying dynamics of how it works.

Tracking Tool
– Beneficial if it works, but it takes a lot of time to use.

– It would be more valuable if you could use it in one click (not enough time for it otherwise).

– VR shear / time heights tracking might be useful as well.

Overall

– Forecasters don't feel like the 1 pm EFP briefing was helpful: "fighting to stay awake." They did not consider it important for what they were doing.

– Forecasters also felt like the briefing was intended for the EFP (the briefers didn't mention the synoptic scale much).

– Forecasters felt their time would have been better spent looking at AWIPS.

– They suggest that we start earlier than 1 pm.

– Regarding training, broadcaster suggests that other broadcasters get a couple hours of AWIPS training.  Broadcaster says it’s good to mix it up with forecasters – found it really valuable.

– G. Garfield, Week 4 Coordinator


EWP2014 Week 3: Weekly Summary 19 May – 23 May 2014

Project Overview:

This was the third week of our four-week spring experiment of the 2014 NSSL-NWS Experimental Warning Program (EWP2014) in the NOAA Hazardous Weather Testbed at the National Weather Center in Norman, OK.  "The Big Experiment" or "Spring Experiment" had three components: (1) an evaluation of multiple CONUS GOES-R convective applications, including satellite and lightning products; (2) an evaluation of the model performance and forecast utility of two convection-allowing models (the variational Local Analysis and Prediction System and the Norman WRF); and (3) an evaluation of a new feature-tracking tool from NASA SPoRT.  Additionally, we coordinated daily with the Experimental Forecast Program, participating in briefings and evaluating the probabilistic severe weather outlooks produced by their forecasters as guidance for our warning operations.

Participants:

Our NWS participants were Joshua Boustead (WFO Omaha, NE), Linda Gilbert (WFO Louisville, KY), and Grant Hicks (WFO Glasgow, MT). Our visiting broadcast meteorologist for the week was Danielle Vollmar of WCVB-TV (Boston, MA). The GOES-R program office, the NOAA Global Systems Division (GSD), and the National Severe Storms Laboratory provided travel stipends for our participants from NWS forecast offices and television stations nationwide.

Visiting scientists this week included Steve Albers (GSD), John Cintineo (Univ. of Wisconsin/CIMSS), Ashley Griffin (Univ. of Maryland), Chris Jewett (Univ. of Alabama – Huntsville), James McCormick (Air Force Weather Agency), Chris Schultz (Univ. of Alabama – Huntsville), and Bret Williams (Univ. of Alabama – Huntsville).

Darrel Kingfield was the weekly coordinator.  Lance VandenBoogart (WDTB) was our "Tales from the Testbed" Webinar facilitator for his last week (we'll miss you!). Our support team included Kristin Calhoun, Gabe Garfield, Bill Line, Chris Karstens, Greg Stumpf, Karen Cooper, Vicki Farmer, Lans Rothfusz, Travis Smith, Aaron Anderson, and David Andra.

Some of our EWP Week 3 participants. Many others had to catch their flights early 🙁

Feedback on Experimental Products:

Synthetic Imagery (simulated satellite via NSSL WRF):
  • The technique is great, to be able to visualize what the satellite would look like if this model were to pan out.

  • Sometimes it got the correct position and missed the timing, sometimes it got the timing and missed the position.
  • Would be great to integrate with other model solutions.
Nearcast:
  • I liked that you could see the areas of higher instability. When I used it in conjunction with the ProbSevere or CI products, I found it helpful to see if a cumulus field was developing in a region of higher instability.

  • The wide spectrum of colors allowed for the analysis/tracking of instability gradients.
  • Lots of blank spots due to cloud cover, becomes hard to use in heavily overcast conditions. Would be great to blend with other NWP solutions.
  • I could see myself going to this product between warnings to see where the theta-e gradient is located.
  • Could be used to fill the spatial gap between sounding data sites.
GOES-R Convective Initiation probabilities:
  • Really helped focus my attention to specific regions of favorable convection and block out other regions for now.
  • Could catch a forecast off guard if he/she is unaware of the deficiencies (e.g. poor detection capability under cirrus).
  • Mixed results depending on isolated initiation versus new initiation in a complex convective setup.
  • I’m unsure how well GOES West is performing compared to GOES East.
ProbSevere:
  • I liked the verbal annotations next to the metadata (e.g. Moderate, Strong)
  • After initial hesitation, the algorithm performed really well today (5/20) and I was confident in using it as guidance.
  • Highlighted storm collapse, helping me not issue warnings on storms that were weakening.
  • I could see this catching things a human would miss but we should still use it in conjunction with base data.
Overshooting Tops:
  • The 15-30 minute updates made it difficult to use; I'd be more receptive to 1-minute updates.
  • I see more overshooting tops than what the algorithm is detecting but see its potential when there is no visible imagery (i.e. nighttime).
  • I could see this filling the gap in a data void region without radar.
Super-Rapid Scan (SRSOR), 1-min imagery:
  • 1-minute data allowed me to gain more confidence in what I was seeing. I saw convective attempts, failures, and dead anvils…something I could not see as clearly without SRSOR imagery.
  • The detail seen in convective development was phenomenal; I could stare at this all day.
  • I was able to visually identify boundaries feeding into the storm faster than what radar could provide to me.
pGLM:
  • CG information did not tell the whole picture, it was great to see the ups and downs in electrification.
  • I could definitely integrate this into warning operations.
  • Definitely helps in monitoring the health of the updraft pulse.
  • I factored the sigma jumps into my decision making process and overall it performed well.
SPoRT tracking tool:
  • I think it has potential but the bugs/freezing when loading a lot of data made it difficult to use.
  • I tracked base velocity with this tool and when multiple meteograms started popping up, I just closed it out.
  • I liked how the prior track trajectory changed by moving a single circle, but it will take some time for me to get faster at this. Unsure how to integrate this into fast-paced convective modes.
vLAPS:
  • Seemed to overproduce convection consistently but I found the instability parameters useful.
  • Wind and Theta-E fields would be an added benefit.
  • Odd features seemed to propagate near the edges of the domain, which sometimes made the small domain products difficult to use.
  • Composite reflectivity seemed way too hot to use this week.

-Darrel Kingfield, EWP Week 3 Coordinator


Week 2 Summary

This week, the EWP had forecasters from the Louisville, Buffalo, and Norman WFOs, as well as a broadcast meteorologist from WUSA (the DC CBS affiliate), participate in the Big Spring Experiment. Operations on Monday began in the Davenport and St. Louis CWAs. Throughout the week, operations slowly shifted eastward as we evaluated the products on severe weather developing along an eastbound cold front. These operations included the Detroit, Cleveland, Wilmington, Charleston WV, Pittsburgh, and Sterling CWAs. One group on Thursday operated in the Shreveport CWA, where marginal severe weather occurred as an upper-level disturbance moved through a region characterized by weak low-level moisture but steep lapse rates and only marginal instability. This unique environment posed some interesting forecast challenges, so it was neat to see how the various satellite products and the OUN WRF performed.

Participants were able to use all of the demonstration products this week, which included GOES-R and lightning products, LAPS fields, and, finally on Thursday, the OUN WRF model. There were many good blog posts written throughout the week highlighting the use of these products in various situations across various regions of the US. Below is some end-of-the-week feedback on each product from this week's participants:

GOES-R

Simulated Satellite Imagery:

  • This gave me a heads up on where clouds would move. There isn't great guidance for sky grids, so I would look at this to see where stratus is moving, etc., if it was verifying well.
  • I think it is especially effective on the large scale because it picks up on large scale features well.

NearCast System:

  • I liked and used it because it is observed thermodynamic data, of which there is very little.
  • This added value to my forecast process. For example, in West Virginia no boundary was evident at the surface, but there was a boundary in NearCast, and that is where convection fired. That sold me.
  • I do not like to rely on NWP data, so this was nice.
  • I really liked seeing the gradients, most of the storms developed in theta-e difference minima or moisture maxima or along gradients.
  • There were a few cases where you saw decreasing moisture moving in, which was not picked up in the models, and it did have a big effect on storm development.
  • In Wilmington, dry air moved in and storms decreased, but they actually did increase later, so it was kind of inconclusive in this case.

GOES-R Convective Initiation

  • Sometimes it was giving lead times of 30-45 minutes; other times it provided no lead time.
  • It was more useful in rapid scan mode.
  • I was very impressed with its performance, but sometimes the lead time just wasn't worth it.
  • I thought this product was really great during the daytime, but I do not see it being at all useful at night as it was very inaccurate.
  • It was sometimes hard to get a sense of what the probs meant. If I used it, I would get rid of everything under 50%. I just don’t like that much clutter.
  • It was particularly erratic around the Appalachian mountains.

Prob Severe Model

  • It works awesome in hail situations. I am a fan of it for hail detection and determining which storms will produce hail.
  • It does have issues with linear storm modes.
  • The best part for me was the mouseover sampling and being able to look at the predictors. It really enhances your situational awareness.
  • It would be nice to color code the growth rates in the readout
  • I noticed a lot of satellite growth rates that were older than an hour; that made me lose confidence in the signal.
  • I think it did increase my confidence in hail events, because I saw a clear progression in probabilities.
  • When I saw over 80%, I had great confidence that that storm would become severe
  • I do think it could give additional lead time to warnings
  • I am fine with including the lower probs because the display is not obtrusive, and I like seeing the progression to higher probs.
  • The survey questions were good
  • With what you have now, for hail, I would use this product today.
  • It gives you a good idea of which storm(s) you should be interrogating
  • All participants agreed they would use this in their local WFO.
  • Broadcaster: I would use this on the air. If there were a lot of cells, I would point to this storm [with the higher probs] and say that that is the cell to watch. Would not necessarily show probs, but could show colors, etc.

Overshooting Top Detection

  • This was not useful for me.
  • I see this being most useful when incorporated into another product. This would be a great benefit
  • We were unable to use it at night, when it is harder to see OTs visually and when many more OTs are often detected as storms have matured.

PGLM

  • I really like the total lightning data
  • I've never used total lightning, but I do like it.

Lightning Jump Algorithm

  • I think I could use this in a warning environment.
  • I don’t mind the sigma values as indicators.
  • An outline (like ProbSevere) might be better than the blob.
  • It might be good to incorporate the LJ product in the prob severe tool
  • I don’t see the zero sigma being necessary
  • I told AWIPS-II to blink sigma values that were greater than 2.

Tracking Tool

  • There are too many circles on the screen, too much clutter.
  • I would prefer to have one circle that you just put on the cell, and it gives you the meteogram.
  • Entering the cell id # to track the storm might be a good idea
  • I don’t really mind the circles, but I just can’t see myself using this in a warning situation.
  • I can see this being used after the fact, looking at a storm, but not in real-time. It is too labor intensive.
  • I like the graph itself, but the actual functionality is bad.

  • It is difficult to move the circle-track to align with the track of the storm, especially when many images are loaded. Also, sometimes it does not track at first, so you have to move it around to get it to track. Finally, changing the size of the circles is frustrating, as making some circles bigger makes other smaller.

GOES-14 SRSOR (1-minute imagery)

  • It’s great
  • I saw subtle boundaries that I wouldn’t otherwise see
  • We want quicker satellite updates; it's a no-brainer.
  • No worry about information overload with this
  • I would prefer to view the raw data, but I do see it being useful as input into other products as well.

LAPS

  • It seemed to do pretty well with storm mode.
  • Timing of convection was poor.
  • I used reflectivity and CAPE. I would like to see max wind speed (10 m).
  • I would like to use this in lake effect snow situations.

OUN WRF

  • Initially, it produced a little too much convection, but throughout the day, it caught up.
  • The model picked up gravity waves, which was neat to see.
  • It was interesting to see the progression from single cells to clusters/line segments.
  • Storm mode was good, exact location was just a bit off.
  • I am content with the products that are available.
  • 10 m max wind speed was interesting to look at. In one case, it worked out quite well.

Other:

  • I thought the training was good.
  • The week was very well organized and well done, and I liked that we stuck to the schedules; it made things very easy.
  • I liked the relaxed environment
  • Less structure was good, it gave us freedom to see what works well for us.

– Bill Line, SPC/HWT Satellite Liaison and Week 2 EWP Coordinator


EWP2013 Week 3 Summary: 20 – 24 May 2013

EWP2013 PROJECT OVERVIEW:

The National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) in Norman, Oklahoma, is a joint project of the National Weather Service (NWS) and the National Severe Storms Laboratory (NSSL).  The HWT provides a conceptual framework and a physical space to foster collaboration between research and operations to test and evaluate emerging technologies and science for NWS operations.  The Experimental Warning Program (EWP) at the HWT is hosting the 2013 Spring Program (EWP2013).  This is the sixth year for EWP activities in the testbed.  EWP2013 takes place across three weeks (Monday – Friday), from 6 May through 24 May.

EWP2013 is designed to test and evaluate new applications, techniques, and products to support Weather Forecast Office (WFO) severe convective weather warning operations.  There will be three primary projects geared toward WFO applications this spring: 1) evaluation of multiple CONUS GOES-R convective applications, including pseudo-geostationary lightning mapper products when operations are expected within the Lightning Mapping Array domains (OK/west-TX, AL, DC, FL), 2) evaluation of model performance and forecast utility of the OUN WRF when operations are expected in the Southern Plains, and 3) evaluation of model performance and forecast utility of the 1-km and 3-km WRF initialized with LAPS.

PARTICIPANTS:

Our participants included Eric Martello (WFO Fort Worth, TX), Ashlie Sears (WFO New York, NY), Jeremy Wesely (WFO Hastings, NE), Kris White (WFO Huntsville, AL), and Andrew Zimmerman (WFO Wakefield, VA).  The GOES-R program office, the NOAA Global Systems Division (GSD), and NWS WFO Huntsville's Applications Integration Meteorologist (AIM) Program have generously provided travel stipends for our participants from NWS forecast offices nationwide.

Other visitors included Jason Otkin (Univ. Wisconsin), Steve Albers (NOAA/GSD), Chad Gravelle (NWS Training Center GOES-R Liaison), James McCormick (Air Force Weather Agency, Omaha, NE), Lori Schultz (University Alabama – Huntsville), and Jim Gurka (NOAA/NESDIS/GOES-R).

Greg Stumpf was the weekly coordinator.  Clark Payne (WDTB) was the “Tales from the Testbed” Webinar facilitator. Our support team also included Darrel Kingfield, Gabe Garfield, Travis Smith, Chris Karstens, Kristin Calhoun, Kiel Ortega, Karen Cooper, Aaron Anderson, and David Andra.

REAL-TIME EVENT OVERVIEW:

20 May:  Norman (OUN), Fort Worth (FWD):  Post-Moore Tornado supercell storms in southern OK and northern TX.  NOTE: The Moore tornado occurred too early in the shift and network outages cut off our live data feed.  However, the historical track products (e.g., Rotation Tracks, Tornado Debris Signature Tracks) were viewed after the event.

21 May:  Shreveport (SHV), Fort Worth (FWD), Albany (ALY), Binghamton (BGM):  Severe storms with hail in TX and LA, followed by marginally severe storms in upstate NY.

22 May:  Buffalo (BUF), Binghamton (BGM):  Supercells produced large hail in upstate NY.

23 May:  Amarillo (AMA), Lubbock (LUB), Midland (MAF):  Supercells with large hail and a few weak tornadoes in the TX Panhandle.

FEEDBACK ON EXPERIMENTAL PRODUCTS:

HSDA:

  • A logical progression of the HCA product, and more useful than the current version.
  • Works well with MESH products, although seems to underestimate the hail size versus MESH.

MRMS:

  • Likes instantaneous MESH / height above -20 C / -10 C (lightning).
  • Tracks difficult to use for slow-moving cells; best for fast-moving storms.
  • A lot of the MRMS products make legacy derived products obsolete.
  • Track orientations give a lot of utility.
  • MRMS was considered by some forecasters to be the most useful new product for warning ops.
  • One forecaster thought it was outstanding and will be heavily used in the field.
  • Rotation Track product for the Moore tornado could be used to help determine operational warning thresholds.
  • The 2-minute temporal resolution is advantageous, potentially buying lead time. It could keep forecasters from switching back and forth from radar to radar, and save time by reducing how much they look at all-tilts.
  • Helps with the overall SA and the efficiency of the warning process.  With multiple deep cells in the area, products like the MESH, and -20 C Reflectivity can be very helpful for determining the cells on which to focus.
  • Was useful during a temporary radar outage, as data from adjacent radars was used to fill in the storm without having to analyze separate radar feeds.
  • Offered a quicker diagnostic overview of the storm than having to look at multiple height scans as we commonly do with the Donovan method.
  • Storms near or over a radar within a cone-of-silence had data which was effectively filled in with other radars.
  • Verified that MRMS products matched very closely the values one would get using manual data interpretation using all-tilts and sampling.
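
Given how much of this feedback centers on MESH, a brief sketch of where that number comes from may help. MESH derives from the Witt et al. (1998) Severe Hail Index (SHI): reflectivity above 40 dBZ is converted to a hail kinetic energy flux, weighted toward the layer between the melting level and the -20 C level, and vertically integrated. The single-column Python sketch below follows that published formulation; the operational MRMS implementation differs in its details.

```python
import numpy as np

def mesh_mm(refl_dbz, heights_m, h_melt_m, h_minus20_m):
    """Simplified single-column MESH (mm), after Witt et al. (1998).
    refl_dbz, heights_m: reflectivity (dBZ) and bin heights (m), with
    uniform spacing; h_melt_m, h_minus20_m: 0 C and -20 C levels (m)."""
    z = np.asarray(refl_dbz, dtype=float)
    h = np.asarray(heights_m, dtype=float)
    # Reflectivity weight: 0 below 40 dBZ, ramping to 1 at 50 dBZ
    w_z = np.clip((z - 40.0) / 10.0, 0.0, 1.0)
    # Hail kinetic energy flux (J m^-2 s^-1)
    e_dot = 5e-6 * 10.0 ** (0.084 * z) * w_z
    # Temperature weight: 0 below the melting level, 1 above -20 C
    w_t = np.clip((h - h_melt_m) / (h_minus20_m - h_melt_m), 0.0, 1.0)
    # Severe Hail Index: weighted vertical integral of the energy flux
    shi = 0.1 * np.sum(w_t * e_dot) * (h[1] - h[0])
    return 2.54 * np.sqrt(max(shi, 0.0))

# A made-up deep core: 58 dBZ between 4 and 9 km AGL
h = np.arange(1000.0, 12001.0, 500.0)
z = np.where((h > 4000.0) & (h < 9000.0), 58.0, 35.0)
print(f"MESH ~ {mesh_mm(z, h, 3500.0, 6500.0):.0f} mm")  # ~30 mm
```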

OUN WRF:

  • 0-1 km SRH fields received a lot of praise.
  • Recommend adding an updraft helicity track.

Variational LAPS:

  • The 15-minute temporal resolution of the product can be very useful for diagnosing locations of continued convection especially in rapidly developing convective situations.
  • Overdid the storms and created too much outflow.

GOES-R Simulated Satellite:

  • Helps to better visualize how the model is creating and evolving convection.
  • Recommend a product or procedure that will facilitate a comparison to real data.

GOES-R RGB Airmass:

  • Training on the RGB airmass product needs to be more thorough.
  • Best when used with other environmental products to see how conducive the airmass was for convective development and maintenance.

GOES-R Nearcast:

  • Theta-E fields pretty good for diagnosing where convection was most likely.

GOES-R UAH SatCast/UW Cloud-Top Cooling:

  • UAH CI started out well on one event, but then began to struggle near the boundaries.
  • Still unsure of the utility of the CTC product for issuing advance warnings.  However, it could be used for significant weather advisories and pre-warning products.
  • Training/best practices need to be developed that match the use of the CTC and CI products to near-storm environment, as performance varies with varying NSE.
  • There is utility in CI product for situational awareness and cell maintenance with MCSs.

GOES-R PLGM and Lightning Trend Tool:

  • Recommend a “drag me to storm” type interface for the trend tool.
  • The trend tool is still not user-friendly and is time-consuming to use.
  • Works well for DSS if the NWS issued a lightning product/forecast, but not necessarily for hail or tornado warnings yet.

OVERALL COMMENTS:

  • The Articulate training modules received a lot of praise.

CONTRIBUTORS:

Greg Stumpf, EWP2013 Week #3 Weekly Coordinator; EWP2013 Operations Coordinator

Travis Smith, EWP2013 Week #3 Backup Coordinator

EWP2013 Week 2 Summary: 13 – 17 May 2013

EWP2013 PROJECT OVERVIEW:

The National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) in Norman, Oklahoma, is a joint project of the National Weather Service (NWS) and the National Severe Storms Laboratory (NSSL).  The HWT provides a conceptual framework and a physical space to foster collaboration between research and operations to test and evaluate emerging technologies and science for NWS operations.  The Experimental Warning Program (EWP) at the HWT is hosting the 2013 Spring Program (EWP2013).  This is the sixth year for EWP activities in the testbed.  EWP2013 takes place across three weeks (Monday – Friday), from 6 May through 24 May.

EWP2013 is designed to test and evaluate new applications, techniques, and products to support Weather Forecast Office (WFO) severe convective weather warning operations.  There will be three primary projects geared toward WFO applications this spring: 1) evaluation of multiple CONUS GOES-R convective applications, including pseudo-geostationary lightning mapper products when operations are expected within the Lightning Mapping Array domains (OK/west-TX, AL, DC, FL), 2) evaluation of model performance and forecast utility of the OUN WRF when operations are expected in the Southern Plains, and 3) evaluation of model performance and forecast utility of the 1-km and 3-km WRF initialized with LAPS.

PARTICIPANTS:

Our participants included Michael Scotten (WFO, Norman, OK), Joey Picca (WFO, New York, NY), Ernie Ostuno (WFO, Grand Rapids, MI), Becca Mazur (WFO, Cheyenne, WY), and Chris Leonardi (WFO, Charleston, WV).  The GOES-R program office, the NOAA Global Systems Division (GSD), and NWS WFO Huntsville's Applications Integration Meteorologist (AIM) Program have generously provided travel stipends for our participants from NWS forecast offices nationwide.

Other visitors included Jordan Gerth (Univ. Wisconsin), Wayne Feltz (Univ. Wisconsin), Hongli Jiang (NOAA/GSD), Amanda Terborg (NWS Aviation Weather Center GOES-R Liaison), and Helge Tuschy (Deutscher Wetterdienst (DWD), Leipzig, Germany).

Kristin Calhoun was the weekly coordinator.  Clark Payne (WDTB) was the “Tales from the Testbed” Webinar facilitator. Our support team also included Darrel Kingfield, Gabe Garfield, Travis Smith, Chris Karstens, Greg Stumpf, Kiel Ortega, Karen Cooper, Lans Rothfusz, Aaron Anderson, and David Andra.


The Experimental Warning Program week #2 group photo: 1) Kiel Ortega (CIMMS/NSSL), 2) Amanda Terborg (UW/CIMSS), 3) Becca Mazur (NWS Cheyenne, WY), 4) Jordan Gerth (UW/CIMSS), 5) Chris Leonardi (NWS Charleston, WV), 6) Joey Picca (NWS New York, NY), 7) Helge Tuschy (DWD, Leipzig, Germany), 8) Ernie Ostuno (NWS Grand Rapids, MI), 9) Gabe Garfield (CIMMS/NWS Norman, OK), 10) Michael Scotten (NWS Norman, OK), 11) Chris Karstens (CIMMS/NSSL), 12) Jim LaDue (NWS/WDTB), 13) Hongli Jiang (NOAA/GSD), and 14) Kristin Calhoun (CIMMS/NSSL).

REAL-TIME EVENT OVERVIEW:

13 May:  Missoula, MT (MSO) and Great Falls, MT (TFX):  marginally severe storms with downbursts and hail as the primary risk.

14 May:  San Angelo, TX (SJT) and Midland, TX (MAF): Pulse and multi-cellular strong to severe storms in the area of a dryline circulation.  // Des Moines, IA (DMX): mesoscale evaluation surrounding a frontal boundary across the western Great Lakes – primary threat of high winds.

15 May:  Norman, OK (OUN) and Dallas-Ft. Worth (FWD):  This was the primary severe weather event of the week.  Embedded severe convection in OUN, isolated tornadic supercells in FWD.

16 May:  Boulder, CO (BOU), Goodland, KS (GLD) and North Platte, NE (LBF): Marginally severe short-lived storms.

FEEDBACK ON EXPERIMENTAL PRODUCTS:

HSDA:

  • Algorithm performed the best during the supercell event in the FWD domain where there were large swaths of giant hail associated with storms in TX.
  • More confidence and better verification when region of hail is greater than 1-2 pixels.  Forecasters suggest using an area or volume approach to algorithm evaluation.
  • HSDA tended to over-predict Giant Hail throughout the week.
  • Trend products would be useful.
  • HSDA is not level II resolution.  It’s also “cleaned up” in AWIPS2.

MRMS:

  • Rotation tracks were commonly utilized by the warning forecasters.  Particularly useful in determining polygon orientation (size / shape) — this reduced FAR area.
  • Rotation tracks and AzShear products were time savers, particularly for situational awareness (calling attention to storms /locations).
  • The tornado debris algorithm did not have a region of low CC/ZDR co-located with the SRM couplet and tornado on 15 May (possible error in algorithm).
  • MESH and HSDA provided a useful combination in hail size estimation and confidence.
  • A product that shows how many radars are being used to estimate the data in each grid point would be useful.
  • MRMS hail indicators could be variable in the mountains – terrain may affect calculations (AGL versus MSL?), or beam blockage may cause some radars not to go into the estimates all the time.
  • The MRMS “thickness” products saved much time versus trying to estimate these using all-tilts and sampling.
  • Normally forecasters get the height from a sounding and apply it to the entire domain.  MRMS takes a lot of the guessing and legwork out with an instant answer.  The NSE fields update hourly versus 12-hourly.
  • Still good where there is very little radar coverage at the edges of the network (e.g., Big Bend), since the reflectivity is aloft anyway.
  • Really useful when you get rapid development and descent of elevated cores in pulse storms.
  • MRMS MESH better on low end days, HSDA better on high end days.

OUN WRF:

  • Model reflectivity time ensemble (“d prog dt”) would be helpful.
  • Improved from last year, especially with convection timing.
  • Note that this year, the OUN WRF was changed from an hourly run to two hourly runs.  It now cycles radar data, and the microphysics and PBL schemes were changed.

Variational LAPS:

  • The instability field was quite useful when combined with a radar mosaic.  Particularly early on in the FWD domain, this field provided clues to when storms would decrease or increase in intensity with time.
  • The 22 UTC 1-km forecast was incredibly accurate 2 hrs out (on 15 May).  The updraft helicity product was useful in visualizing the location/strength of the activity.
  • A 0-6 km shear product is recommended.
  • Would be nice to produce fields similar to the OUN WRF's, to compare side by side.
  • Recommend adding a CIN product.
  • Recommend adding more 1km floaters outside the OUN WRF domain.
  • Useful for return moisture flow for subsequent convection episodes.
  • The storm scale and temporal scale of variational LAPS are far superior to what's available at the WFOs right now.

GOES-R Simulated Satellite:

  • Comparisons of simulated IR with actual IR provided a quick understanding of where and how model (NSSL-WRF) may be handling convection:  e.g., low-cloud cover in simulated IR > enhanced heating > convection initiated too early / too widespread.
  • The models were still slightly faster than the synthetic imagery.
  • Doesn’t handle anvils very well, but that could be a good thing.  Can see where storms are developing without the anvils covering it up.
  • There are plans to add a sky cover grid, to apply to other models.

GOES-R UAH SatCast/UW Cloud-Top Cooling:

  • Both products need to be combined with environmental knowledge.  Storms / regions cycled through multiple signals in CTC.
  • CI had poor verification in mountainous terrain (60-70% + FAR) due to snow contamination.
  • Cirrus contamination continues to be a problem.
  • False CI detections on leading edges of anvils for fast moving storms are a problem at times.
  • A threshold cutoff or display filter for the UAH CI is recommended.  Anything above 70-80% seems to work best.

GOES-R NEARCAST:

  • Interesting to compare CAPE field with LAPS and OUN-WRF modeled field for examining different scenarios and likely outcomes.  Overlap on 15 May with LAPS was good and helped inspire confidence.
  • Low precipitable water values were useful in determining that storms were unlikely to grow in strength or coverage.
  • ThetaE difference helped pull focus onto anticipated area of initiation.
  • The Nearcast will be running at the European Severe Storms Laboratory convective weather testbed as well.
  • Prefer that the color table be "flipped" to use warm colors for high moisture.
  • The GOES-E and GOES-W images were not seamless.

GOES-R PLGM and Lightning Trend Tool:

  • Peak flash extent densities were useful in picking out which storms deserved more attention and providing additional lead time for severe weather at the ground.
  • MESH and lightning flash rates had comparable trends in the OK region, with the flash rates typically preceding MESH by ~1-2 min and reports on the ground by ~15 min.
  • Prefer a "less harsh" color table (grays for lower flash rates are better than bright colors).  The choice of color (dark green) for the key 30-flashes-per-minute threshold doesn't stand out well.  In addition, the yellow-to-green transition is not easy for threshold determination.
  • Data dropouts from LMA station failures were difficult to recognize and separate from actual decreases in storm flash rate.  Recommend adding a “number of sources” product to identify data dropouts.
  • Lightning trend tool (“moving trace”) is useful for pinpointing developing cores quickly using lightning rate increases.
  • All forecasters found that manually adjusting each frame in the trend tool tended to be tedious and needed an “apply to all” frames type option for size and spacing.
  • Forecasters found the trend tool too difficult and time-consuming to use in real time, but a good tool for post-event evaluation.
  • Would like to see similar graphical display from other data or algorithms (e.g., MESH, VIL) on same trend graph.
  • Recommend that the default output format for the trend tool be uninterpolated.
  • Need an “Apply to all” feature in the trend tool.

OVERALL COMMENTS:

  • Forecasters suggested that researchers may want to limit experimental product evaluation to one or two products for each team at a time.
  • Perhaps assign different teams to different products.
  • Comparisons between model (LAPS and OUN-WRF) reflectivity fields and observed reflectivity were often used to quickly gather which solution is trending in a better direction.

CONTRIBUTORS:

Kristin Calhoun, EWP2013 Week #2 Weekly Coordinator

Greg Stumpf, EWP2013 Operations Coordinator

EWP2013 Week 1 Summary: 6 – 10 May 2013

EWP2013 PROJECT OVERVIEW:

The National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) in Norman, Oklahoma, is a joint project of the National Weather Service (NWS) and the National Severe Storms Laboratory (NSSL).  The HWT provides a conceptual framework and a physical space to foster collaboration between research and operations to test and evaluate emerging technologies and science for NWS operations.  The Experimental Warning Program (EWP) at the HWT is hosting the 2013 Spring Program (EWP2013).  This is the sixth year for EWP activities in the testbed.  EWP2013 takes place across three weeks (Monday – Friday), from 6 May through 24 May.

EWP2013 is designed to test and evaluate new applications, techniques, and products to support Weather Forecast Office (WFO) severe convective weather warning operations.  There will be three primary projects geared toward WFO applications this spring: 1) evaluation of multiple CONUS GOES-R convective applications, including pseudo-geostationary lightning mapper products when operations are expected within the Lightning Mapping Array domains (OK/west-TX, AL, DC, FL), 2) evaluation of model performance and forecast utility of the OUN WRF when operations are expected in the Southern Plains, and 3) evaluation of model performance and forecast utility of the 1-km and 3-km WRF initialized with LAPS.

PARTICIPANTS:

We had six visiting NWS forecasters this week: Marc Austin (WFO, Norman, OK), Hayden Frank (WFO, Boston, MA), Jonathan Guseman (WFO, Lubbock, TX), Nick Hampshire (WFO, Fort Worth, TX), Andrew Hatzos (WFO, Wilmington, OH), and Jonathan Kurtz (WFO, Norman, OK).  The GOES-R program office, the NOAA Global Systems Division (GSD), and NWS WFO Huntsville's Applications Integration Meteorologist (AIM) Program have generously provided travel stipends for our participants from NWS forecast offices nationwide.

Visiting scientists this week included Lee Cronce (Univ. Wisconsin), Geoffrey Stano (NASA-SPoRT), Isidora Jankov (NOAA/GSD), and Amanda Terborg (NWS Aviation Weather Center GOES-R Liaison).

Gabe Garfield was the weekly coordinator.  Clark Payne (WDTB) was the “Tales from the Testbed” Webinar facilitator. Our support team also included Darrel Kingfield, Kristin Calhoun, Travis Smith, Chris Karstens, Greg Stumpf, Kiel Ortega, Karen Cooper, Lans Rothfusz, Aaron Anderson, and David Andra.


Experimental Warning Program Week #1 group photo. 1) Isidora Jankov (NOAA/ESRL/GSD), 2) Kristin Calhoun (CIMMS/NSSL), 3) Gabe Garfield (CIMMS/NWS/WFO/OUN), 4) Darrel Kingfield (CIMMS/NSSL), 5) Geoffrey Stano (NASA/SPoRT), 6) Nick Hampshire (NWS/WFO/Fort Worth, TX), 7) Lee Cronce (U. Wisc/CIMSS), 8) Hayden Frank (NWS/WFO/Boston, MA), 9) Greg Stumpf (CIMMS/NWS-MDL), 10) Marc Austin (NWS/WFO/Norman, OK), 11) Amanda Terborg (U. Wisc./CIMSS), 12) Jonathan Guseman (NWS/WFO/Lubbock, TX), 13) Kiel Ortega (CIMMS/NSSL), 14) Jonathan Kurtz (NWS/WFO/Norman, OK), and 15) Andy Hatzos (NWS/WFO/Wilmington, OH).

REAL-TIME EVENT OVERVIEW:

6 May: Blacksburg (RNK), Raleigh (RAH), and Lubbock (LUB):  Marginally severe storms with hail and wind.

7 May: Goodland (GLD), Dodge City (DDC), and Lubbock (LUB):  Severe storms in KS with large to very large hail; marginally severe storms in the Texas Panhandle.

8 May: Norman (OUN), Lubbock (LUB), and Dodge City (DDC):  Widespread severe weather outbreak with hail and wind.

9 May: Fort Worth (FWD), San Angelo (SJT), and Lubbock (LUB):  Supercells with large to very large hail and severe winds.

FEEDBACK ON EXPERIMENTAL PRODUCTS:

HSDA:

  • Giant hail detections are too generous.  Seems to be indicating giant hail when many reports are below 2”.
  • The algorithm is good at detecting hail, but the area is too large.
  • When using FSI to cut vertical cross-sections, the change to all red (any size hail) above melting layer was particularly noted.

MRMS:

  • MESH underestimated hail size in the cold-core low event on Monday; the algorithm did much better on Thursday.
  • MESH and other hail diagnosis indicators underestimate hail size for left splits.
  • Recommend adding a radial shear product, to help detect mid-altitude radial convergence (MARC) signatures.
  • Has the greatest value in identifying hail threats; however, tornadoes will still always require examination of base data.

OUN WRF:

  • Recommend a new convective initiation product that displays the probability of convection.
  • Recommend a surface convergence product to monitor CI (there is a Severe Local Storms conference preprint by a Fort Worth WFO author).

Variational LAPS:

  • Some WFOs deliver an enhanced short-term forecast every 3 hours, and variational LAPS would be useful for this.

GOES-R UAH SatCast/UW Cloud-Top Cooling:

  • The UAH CI product was found to be "too noisy".
  • Would like to change color curve to filter out UAH CI Strength of Signal values below 50% (or some other threshold).

GOES-R PLGM and Lightning Trend Tool:

  • Recommend adding capability to use trending tool in four panels.  Right now, it only works in a single panel.

OVERALL COMMENTS:

  • Suggest that the PIs sit with forecasters to write procedures on the first part of the first day of operations.
  • All the products are useful and valuable; each forecaster will find their own ways to implement.
  • Need to work with WDTB to determine an optimal way to attract forecasters to the new products so that they do not immediately default to their “comfort zones” of base products.  Perhaps having an assistant forecaster or HMT monitor the new products side-by-side with the warning forecaster.
  • Suggest starting the week by incorporating experimental products and concluding the week with just experimental products.  However, many weeks won't have quality storms each day.
  • New product evaluation would probably not be helpful in a “canned case” – need real-time evaluation.
  • Time allotted for surveys was adequate.
  • Training was sufficient to prepare for week’s activities.

CONTRIBUTORS:

Gabe Garfield, EWP 2013 Week #1 Weekly Coordinator

Greg Stumpf, EWP 2013 Week #1 Back-up Coordinator


EWP2012 Week 5 Summary: 11-15 June 2012

EWP2012 PROJECT OVERVIEW:

The National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) in Norman, Oklahoma, is a joint project of the National Weather Service (NWS) and the National Severe Storms Laboratory (NSSL).  The HWT provides a conceptual framework and a physical space to foster collaboration between research and operations to test and evaluate emerging technologies and science for NWS operations.  The Experimental Warning Program (EWP) at the HWT is hosting the 2012 Spring Program (EWP2012).  This is the fifth year for EWP activities in the testbed.  EWP2012 takes place across five weeks (Monday – Friday), from 7 May through 15 June.  There are no operations during Memorial Day week (28 May – 1 June).

EWP2012 is designed to test and evaluate new applications, techniques, and products to support Weather Forecast Office (WFO) severe convective weather warning operations.  There will be three primary projects geared toward WFO applications this spring: 1) evaluation of 3DVAR multi-radar real-time data assimilation fields being developed for the Warn-On-Forecast initiative, 2) evaluation of multiple CONUS GOES-R convective applications, including pseudo-geostationary lightning mapper products when operations are expected within the Lightning Mapping Array domains (OK/west-TX, AL, DC, FL), and 3) evaluation of model performance and forecast utility of the OUN WRF when operations are expected in the Southern Plains.

WEEK 5 SUMMARY:

We had six visiting NWS forecasters this week: Tim Tinsley (WFO, Brownsville, TX), Michael Dutter (WFO, Marquette, MI), Ty Judd (WFO, Norman, OK), Steve Nelson (WFO, Peachtree City, GA), Randy Skov (CWSU, Atlanta, GA), and Jeff Garmon (WFO, Mobile, AL).  Once again, we were all over the map with our severe weather events.  But continuing the trend of a quiet spring, there were no large severe weather outbreaks.

Photo:  1) Chris Siewert (CIMMS/SPC/GOES-R), 2) Travis Smith (CIMMS/NSSL), 3) Randy Skov (CWSU, Atlanta, GA), 4) Ty Judd (WFO, Norman, OK), 5) Steve Nelson (WFO, Peachtree City, GA), 6) Michael Dutter (WFO, Marquette, MI), 7) Tim Tinsley (WFO, Brownsville, TX), 8) Gabe Garfield (CIMMS/WFO Norman, OK), 9) Jeff Garmon (WFO, Mobile, AL), and 10) Jordan Gerth (UW-CIMSS). Photograph by Greg Stumpf (CIMMS/NWS-MDL).


REAL-TIME EVENT OVERVIEW:

11 June: Memphis (MEG), Huntsville (HUN), Tulsa (TSA), San Angelo (SJT), Jackson (JAN), Birmingham (BMX)

12 June: Albuquerque (ABQ), Midland (MAF), Lubbock (LBB), Amarillo (AMA)

13 June: Albuquerque (ABQ), Midland (MAF), Sioux Falls (FSD), Fort Worth (FWD)

14 June: Minneapolis (MPX), Sioux Falls (FSD), Hastings (GID), Omaha (OAX), Amarillo (AMA)

FEEDBACK ON EXPERIMENTAL PRODUCTS:

3DVAR:

  • for ease of comparison, color tables should match other experimental products
  • storm top divergence was helpful to determine if storm would strengthen
  • four panel of divergence and updraft intensity is helpful
  • MESH seemed to peak 5-10 minutes after maximum in updraft intensity/divergence
  • suggest visualizing divergence/convergence/vorticity in an “all-tilts” format
  • use “AGL” instead of “MSL”
  • need a storm-based four-dimensional storm investigator for 3DVAR
  • integrate CAPPI

OUN WRF:

  • generally good forecast
  • cold pool event on Monday night was forecast well
  • forecasters will need to be trained in how to use it; we don't want them to dismiss the model because of timing and placement issues.
  • model trends were helpful
  • on Wednesday, forecasters felt convection was going to happen; the OUN WRF developed nothing, and that verified
  • hourly column hail product worked well on at least two days
  • updraft helicity was a little noisy
  • did well on Thursday in the Dallas area
  • it would be helpful if the color tables matched the 3DVAR
  • suggest using high-resolution models to write aviation forecasts
  • model might also be useful for lake-effect snow events, to determine future position of important boundaries
  • suggest that more high-resolution models be set up; one for each region

GOES-R NearCast:

  • useful up to 3-6 hours into the future — then, not so good
  • on Thursday, showed unstable airmass northwest of Hastings after the front had clearly moved through
  • to its credit, kept instability in central area when OUN WRF had moved it to the south
  • theta-e difference product was considered to be the most accessible product
  • proposed a merger of GOES-East and GOES-West to cover gap in coverage
  • difficulty in using pressure changes owing to the changes in elevation from the Plains to the Rockies
  • NearCast CAPE closely agrees/associates well with the model CAPE
  • color scales for products could use contrast enhancement

GOES-R UAH SatCast/UW Cloud-Top Cooling:

  • SatCast is a useful product, but it might be good to filter out the lower signal values
  • some forecasters thought they would still appreciate seeing the low values in the CI product
  • products complement each other
  • products do not seem as useful over higher terrain
  • some issues with CI false alarm rate, even when signal approached 70%
  • CI false alarms decreased dramatically for signal greater than 80 or 90%
  • nevertheless, CI helped focus attention on areas that needed to be watched
  • many detections in CI product can be overwhelming
  • a few times, a lead time of 50 minutes was observed from convective initiation to hail observation
  • categorical probabilities for CI are not preferred
  • would like to see trends for each CI probability maximum; perhaps a trend graph
  • the timing of the full disk scan was frustrating; it occurs at the wrong time (during peak initiation time)
  • might be helpful to see verification climatology

GOES-R PGLM:

  • warned sooner on Monday in Huntsville, because of lightning jump

There are more GOES-R feedback details on the GOES-R HWT Blog Weekly Summary.

OVERALL COMMENTS:

  • orientation was much better/organized this year
  • pleased with the ability to change domains on the fly
  • WES case was good, but it took a while to install
  • another WES case might have been helpful
  • it might be helpful to have a shorter (2-3 hour) WES case displayed in real time at the experiment
  • some difficulty setting up AWIPS procedures
  • suggest having a “mentor” guide visiting forecasters through training on Mondays

CONTRIBUTORS:

Gabe Garfield, EWP2012 Week #5 Weekly Coordinator


EWP2012 Week 4 Summary: 4-8 June 2012

EWP2012 PROJECT OVERVIEW:

The National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) in Norman, Oklahoma, is a joint project of the National Weather Service (NWS) and the National Severe Storms Laboratory (NSSL).  The HWT provides a conceptual framework and a physical space to foster collaboration between research and operations to test and evaluate emerging technologies and science for NWS operations.  The Experimental Warning Program (EWP) at the HWT is hosting the 2012 Spring Program (EWP2012).  This is the fifth year for EWP activities in the testbed.  EWP2012 takes place across five weeks (Monday – Friday), from 7 May through 15 June.  There are no operations during Memorial Day week (28 May – 1 June).

EWP2012 is designed to test and evaluate new applications, techniques, and products to support Weather Forecast Office (WFO) severe convective weather warning operations.  There will be three primary projects geared toward WFO applications this spring: 1) evaluation of 3DVAR multi-radar real-time data assimilation fields being developed for the Warn-On-Forecast initiative, 2) evaluation of multiple CONUS GOES-R convective applications, including pseudo-geostationary lightning mapper products when operations are expected within the Lightning Mapping Array domains (OK/west-TX, AL, DC, FL), and 3) evaluation of model performance and forecast utility of the OUN WRF when operations are expected in the Southern Plains.

WEEK 4 SUMMARY:

Week #4 travel support for visiting forecasters was provided by NSSL, the GOES-R program office, and the NWS Pilot Program.  We had six visiting NWS forecasters this week:  Marc Austin (WFO, Norman, OK), Rich Grumm (WFO, State College, PA), Chris Leonardi (WFO, Charleston, WV), Jennifer Palucki (WFO, Albuquerque, NM), Kristen Schuler (CWSU, Kansas City, MO), and Gary Skwira (WFO, Lubbock, TX).  Other visiting participants this week included Kathrin Wapler (Deutscher Wetterdienst, Germany) and Chris Karstens (Iowa State University).  The weather this week was once again characterized by severe weather events that were regionally diverse; however, there were no notably exceptional severe weather events.

Photo:  1) Chris Siewert (CIMMS/SPC/GOES-R), 2)  Rich Grumm (WFO, State College, PA), 3)  Kristen Schuler (CWSU, Kansas City, MO), 4)  Jennifer Palucki (WFO, Albuquerque, NM), 5)  Gary Skwira (WFO, Lubbock, TX), 6)  Marc Austin (WFO, Norman, OK), 7)  Chris Leonardi (WFO, Charleston, WV), 8) Gabe Garfield (CIMMS/WFO Norman, OK), 9) Kathrin Wapler (Deutscher Wetterdienst, Germany), 10) Travis Smith (CIMMS/NSSL), 11) Greg Stumpf (CIMMS/NWS-MDL), 12) Chris Karstens (Iowa State University), and 13) Steve Martinaitis (NWS/WDTB). Photograph by Jim LaDue (NWS/WDTB).


REAL-TIME EVENT OVERVIEW:

4 June: Lubbock (LBB), Little Rock (LZK), Amarillo (AMA)

5 June: Norman (OUN), Jacksonville (JAX), Melbourne (MLB), Tallahassee (TAE), Great Falls (TFX)

6 June: Cheyenne (CYS), Boulder (BOU), Fort Worth (FWD)

7 June: Sterling (LWX), Boulder (BOU), Rapid City (UNR), Cheyenne (CYS)

FEEDBACK ON EXPERIMENTAL PRODUCTS:

3DVAR:

  • would like to see downdrafts depicted – for wet/dry microbursts
  • aviation uses:  downdrafts, turbulence (not necessarily related to convective weather)
  • does a pretty good job with updraft strength; tornadoes also good.  But outflow winds are very radar dependent and range dependent.  Assimilation methods may help with some of this.
  • would like to see TDWR data included.
  • would like to have a map depicting radar coverage (this is on the web site, but not in AWIPS2)

OUN-WRF:

This was the first week in which the EWP had at least some forecasters working within the OUN WRF domain on all four days.

  • it seems to be most useful for estimating storm mode and initiation time (though initiation sometimes has an early bias).
  • too many cold-pool interactions mess things up once convection is ongoing
  • used mostly pre-convection – then transitioned to radar-based products once convection was ongoing
  • was very useful for distinguishing between supercells during the day and a squall line at night on April 14
  • every run is quite sensitive to the initial conditions, but it still does well on changes of mode.  The HRRR generated convection too far south on Thursday, while the OUN WRF was a little too far north
  • suggested improvement: add surface temperature, dewpoint, wind flags, etc., to the special products to provide a sanity check against surface obs (a minimal sketch of such a check follows this list).  QPF would also be useful.
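
A minimal sketch of the sanity check suggested in the last bullet, comparing model surface forecasts against METAR observations at matching sites. The station IDs and values are hypothetical, made up purely for illustration.

    # Hypothetical OUN WRF surface forecasts vs. METAR obs (deg F) at two sites
    forecast = {"KOUN": {"t": 88, "td": 66}, "KLBB": {"t": 94, "td": 58}}
    metars = {"KOUN": {"t": 90, "td": 68}, "KLBB": {"t": 92, "td": 55}}

    for site, fcst in forecast.items():
        obs = metars[site]
        print(f"{site}: T error {fcst['t'] - obs['t']:+d} F, "
              f"Td error {fcst['td'] - obs['td']:+d} F")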

GOES-R Satellite products:

  • Simulated satellite was really useful.  Thought the NSSL WRF was spot-on
  • NearCast products agreed well with the OUN WRF
  • CI: the “confetti” in the UH product was interesting (lots of colors); liked the “Ultimate CI” AWIPS2 procedure
  • Returning forecaster really liked the probabilistic data field compared to the yes/no deterministic one shown last year

GOES-R PGLM:

  • was actually used this week – convection in the LMAs helps!
  • need 5- and 15-min products (see the aggregation sketch after this list).  Also, with the current color curves, the grey shades are hard to see
  • one “lightning jump” observed
  • forecasters would really like to see the 3D LMA data in FSI
  • to be deployed in 2016 (west) and 2017 (east)
  • useful for fire weather; charge sources in dust storms can create CG lightning
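
A minimal sketch of the requested 5- and 15-minute PGLM products, built by summing 1-minute flash extent density (FED) grids over a trailing window. The grid size and values are hypothetical stand-ins, not LMA data.

    import numpy as np

    def trailing_fed(one_min_grids, minutes):
        """Sum the most recent `minutes` one-minute FED grids."""
        return np.sum(one_min_grids[-minutes:], axis=0)

    rng = np.random.default_rng(1)
    # Twenty 1-min FED grids (flashes per grid box) on a small hypothetical LMA domain
    grids = [rng.poisson(0.2, size=(40, 40)) for _ in range(20)]

    fed_5 = trailing_fed(grids, 5)
    fed_15 = trailing_fed(grids, 15)
    print("5-min max FED:", fed_5.max(), "| 15-min max FED:", fed_15.max())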

There are more GOES-R feedback details on the GOES-R HWT Blog Weekly Summary.

OVERALL COMMENTS:

  • training before arriving is better than training all day Monday
  • would like more PI interaction on day #1 to show forecasters where to find the products, though
  • AWIPS2: better than expected.  Needs 64-bit version to fix multiple memory issues.
  • Liked discussion with developers.  Would like to see uncertainty in the products (when they work best / when they don’t) quantified in some way

CONTRIBUTORS:

Travis Smith, EWP2012 Week #4 Weekly Coordinator


EWP2012 Week 3 Summary: 21-25 May 2012

EWP2012 PROJECT OVERVIEW:

The National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) in Norman, Oklahoma, is a joint project of the National Weather Service (NWS) and the National Severe Storms Laboratory (NSSL).  The HWT provides a conceptual framework and a physical space to foster collaboration between research and operations to test and evaluate emerging technologies and science for NWS operations.  The Experimental Warning Program (EWP) at the HWT is hosting the 2012 Spring Program (EWP2012).  This is the fifth year for EWP activities in the testbed.  EWP2012 takes place across five weeks (Monday – Friday), from 7 May through 15 June.  There are no operations during Memorial Day week (28 May – 1 June).

EWP2012 is designed to test and evaluate new applications, techniques, and products to support Weather Forecast Office (WFO) severe convective weather warning operations.  There will be three primary projects geared toward WFO applications this spring: 1) evaluation of 3DVAR multi-radar real-time data assimilation fields being developed for the Warn-On-Forecast initiative, 2) evaluation of multiple CONUS GOES-R convective applications, including pseudo-geostationary lightning mapper products when operations are expected within the Lightning Mapping Array domains (OK/west-TX, AL, DC, FL), and 3) evaluation of model performance and forecast utility of the OUN WRF when operations are expected in the Southern Plains.

WEEK 3 SUMMARY:

Week #3 of EWP2012 occurred during 21-25 May.  NSSL and the GOES-R program provided travel funds for four visiting NWS forecasters this week:  Matt Hirsch (WFO, Phoenix, AZ), Andy Kleinsasser (WFO, Wichita, KS), Chris McKinney (WFO, Houston, TX), and Gordon Strassberg (CWSU, New York, NY).  Other visiting participants this week included James McCormick (AFWA, Offutt AFB, Omaha, NE), Helge Tuschy (Deutscher Wetterdienst, Germany), Lee Cronce (CIMSS/UW-Madison), Chris Jewett (UAH), and Dan Lindsey (CIRA/CSU).  The weather this week contained a variety of events throughout the country, although none of them were exceptional.  We did get a good regional diversity of cases, however, with multiple locations even being worked during the same shift.

Group Photo, EWP Week 3

Photo:  1)  Gordon Strassberg (CWSU, New York, NY), 2)  Matt Hirsch (WFO, Phoenix, AZ), 3)  Helge Tuschy (Deutscher Wetterdienst, Germany), 4)  Andy Kleinsasser (WFO, Wichita, KS), 5)  Mark Sessing (NWS/WDTB), 6)  Chris McKinney (WFO, Houston, TX), 7)  Gabe Garfield (CIMMS/WFO Norman, OK), 8)  Chris Siewert (CIMMS/SPC/GOES-R), 9) Travis Smith (CIMMS/NSSL), 10) Chris Jewett (UAH), and 11) Kristin Calhoun (CIMMS/NSSL). Photograph by Greg Stumpf (CIMMS/NWS-MDL).


REAL-TIME EVENT OVERVIEW:

21 May: Amarillo (AMA), Lubbock (LBB) and Albuquerque (ABQ)

22 May: Bismarck (BIS), Grand Forks (FGF), and Aberdeen (ABR)

23 May: (Early) Raleigh (RAH) and Wilmington (ILM); (Late) Omaha (OAX) and Hastings (GID)

24 May: (Briefly, early) Sterling (LWX) and Melbourne (MLB); Des Moines (DMX), La Crosse (ARX), and Kansas City (EAX).

FEEDBACK ON EXPERIMENTAL PRODUCTS:

3DVAR:

  • Track products can provide an understanding of what storms are capable of now, by considering past reports alongside the associated 3DVAR product values.
  • Even when radar picture was somewhat messy and contained marginal severe signatures, the 4-panel displays of 3DVAR products (updraft and vorticity, including history tracks) made it easier to assess storm trends.
  • Found the wind fields very useful, particularly in the lowest couple of kilometers, as they provide a quick look at shear ahead of storms.  However, it could get difficult to pan through the data when interrogating multiple levels; an all-tilts display relative to radar heights (on the radar plane, or with the radar on the 3D grid) could be useful here.
  • Could be quite useful in marginal cases, especially low-level spin-ups, depending on area radar coverage. Suggest incorporating TDWRs if possible.
  • Data latency of 5-6 min was somewhat of an issue; the products were sometimes used for post-warning peace of mind.  Additional processing speed would likely allow for more use in warning operations.
  • Really looking forward to short-term forecasts (15/30/45 min) of similar products.

OUN-WRF:

Forecasters were able to incorporate OUN-WRF data into their forecasts on both Monday and Thursday this week:

  • Found it useful in assessing storm mode and providing short-term situational awareness improvement. Used it less once convection was actually ongoing.
  • Still looking for some type of dprog/dt product or display, and for sounding display capabilities.  (Soundings may be available using BUFKIT and NSHARP, though this was not obvious in AWIPS2.)

GOES-R SimuSat:

  • Recommend expanding this product to other models besides the NSSL-WRF.  Could be a huge help for creating sky grids in GFE.
  • There is a systematic bias in the calculation that doesn’t depict high cirrus coverage (e.g., anvils) as well as it does low clouds; however, some forecasters actually found this problem to be a useful detail.  If changes are made to the display, recommend a dedicated low-cloud product.
  • For aviation, may be incredibly useful for IFR/VFR status updates, including timing and density forecasts which can be quite important for large airports such as SFO or the NY region.
  • Dan Lindsey displayed a loop that switched from real satellite to forecast simulated satellite (once we get the link, we will add it here).  Provides a way to evaluate how far the forecast model (NSSL-WRF) cloud coverage deviates from current obs at the point of the switch.

GOES-R Nearcast:

  • Found it captures the atmospheric motion very well (“better than any product”).
  • Theta-E products indicated potential instability and moisture gradients well.
  • Some color tables appeared flipped; we believe warmer colors should typically be used to depict higher instability.
  • Suggest on-the-fly differencing so forecasters can choose their own levels for differencing, based on the thermodynamic profile (a minimal sketch follows this list).
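
A minimal sketch of that on-the-fly differencing suggestion: let the forecaster pick any two retrieval layers and difference the layer-mean theta-e, rather than relying on a fixed layer pair. The layer labels and grids here are hypothetical stand-ins.

    import numpy as np

    rng = np.random.default_rng(2)
    # Layer-mean theta-e (K) on a small grid, keyed by hypothetical layer labels
    theta_e = {
        "sfc-850mb": 340.0 + rng.normal(0.0, 3.0, (30, 30)),
        "850-500mb": 330.0 + rng.normal(0.0, 3.0, (30, 30)),
        "500-300mb": 325.0 + rng.normal(0.0, 3.0, (30, 30)),
    }

    def theta_e_difference(lower, upper):
        """Lower-layer minus upper-layer theta-e; positive values imply
        potential instability."""
        return theta_e[lower] - theta_e[upper]

    # The forecaster picks the layers based on the day's thermodynamic profile
    diff = theta_e_difference("sfc-850mb", "850-500mb")
    print(f"max potential-instability signal: {diff.max():.1f} K")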

GOES-R UAH-Convective Initiation/SatCast:

  • Used on both severe and non-severe weather days; able to denote initiation along sea-breeze boundaries and in other convective modes.  For aviation use, it doesn’t matter whether a storm is “severe” or not.
  • Decline in usability during thick cirrus coverage.
  • Still useful at night, but cloud mask not as robust.  Saw many random objects and cirrus identified. Recommend perhaps removing the lower probabilities overnight.

GOES-R UW-Cloud-top-cooling (CTC):

  • Found running this side by side with the UAH CI algorithm to be the most useful configuration.  Saw the CI product flag 60-80% probabilities, followed by a CTC signal, then lightning.
  • Every strong CTC signal observed in Missouri on Thursday developed to severe.  Would like to see some type of climatology study relating the product to severe weather (e.g., hail via the MESH algorithm; a minimal sketch follows this list).  Is the CTC signal regionally dependent?  Convective-regime dependent?
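
A minimal sketch of the climatology study suggested above, matching strong CTC signals to subsequent severe-hail development and lead time. All detection records and thresholds below are hypothetical, chosen only to illustrate the bookkeeping.

    # Each record: (CTC rate in K per 15 min, minutes until MESH first met a
    # severe-hail proxy, or None if it never did) -- hypothetical values
    detections = [(-16, 32), (-22, 21), (-12, None), (-30, 18), (-14, 45)]

    strong = [d for d in detections if d[0] <= -15]        # "strong" cooling signals
    leads = [lead for _, lead in strong if lead is not None]
    print(f"strong CTC signals: {len(strong)}, verified severe: {len(leads)}")
    if leads:
        print(f"median lead time: {sorted(leads)[len(leads) // 2]} min")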

There are more GOES-R feedback details on the GOES-R HWT Blog Weekly Summary.

NOTES FROM THE EWP2012 STAFF:

Forecasters this week have been very interested in getting real-time availability of many of the experimental products for their offices right now.  We believe this shows the marked improvement in the displays and use of many of the products between year 1 (2011) and year 2 (2012) in the testbed, and the importance of testing in consecutive years, with time in between for product development.  The comments above still show some areas needing improvement before operational implementation, but they highlight the type of development that can occur through testbed work.

CONTRIBUTORS:

Kristin Calhoun, EWP2012 Week #3 Weekly Coordinator

