Purpose and Style

This tutorial is an application guide for Stochastic Storm Transposition (SST) that explains how the method can be implemented in HEC-HMS.

Stochastic Storm Transposition Overview

Stochastic Storm Transposition (SST) is a procedure for expanding available information about the frequency of rainfall extremes by leveraging a large catalog of historical storm events. The catalog samples severe precipitation events from a large, meteorologically homogeneous region around the study watershed called the transposition domain. It is assumed that any storm that occurs within this region can be freely moved via transposition to any other location within the region with equal probability. The region is much larger than the study watershed, which allows storms that did not occur over the study watershed to be transposed there to see their effects. Determining a transposition domain requires expert judgment, although statistically-based procedures that can be used to define a domain in an objective way are under development.

The storm catalog contains all events that exceed the "severity criteria" and occurred in the transposition domain, and typically contains many distinct events per year. The severity criteria determine whether a storm is severe enough to be included in the storm catalog; an example is "all storms exceeding 4 inches of rainfall accumulation over 72 hours in an area the size and shape of the study watershed." The criteria are typically chosen so that the catalog contains, on average, 5-15 storm events per year, and a 72-hour storm duration is typical. Ideally, the precipitation data being used will have a long period of record such that the catalog contains hundreds of events. The rainfall accumulation is typically measured by maximizing the rainfall over the study watershed through optimized placement of the storm. The rainfall the maximized storm would bring to the study watershed is called the "potential precipitation," and storms are ranked by this quantity. A threshold of potential precipitation is then selected to achieve a target rate of events per year; for example, selecting a threshold that results in 400 storms in a 40-year record gives an average of 10 storms per year.
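The threshold selection described above amounts to ranking storms by potential precipitation and keeping the top N. A minimal sketch (the function name and inputs are hypothetical, not part of any SST tool):

```python
# Sketch: choose a potential-precipitation threshold that yields a target
# average number of storms per year from a ranked catalog.
def select_threshold(potential_precip, record_years, target_per_year=10):
    """Return the threshold value that admits ~target_per_year storms per year.

    potential_precip: potential precipitation (e.g., inches) for every
                      candidate storm in the record.
    record_years:     length of the period of record in years.
    """
    ranked = sorted(potential_precip, reverse=True)
    n_keep = min(int(round(target_per_year * record_years)), len(ranked))
    # The threshold is the smallest value that stays in the catalog
    return ranked[n_keep - 1]

# A 40-year record with a target of 10 storms/year keeps the top 400 events.
```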

Key Differences from the Traditional "Design Storm" Approach

Many flow frequency studies have been developed using point precipitation-frequency derived design storms based on products like NOAA Atlas 14. The major assumption being made in those analyses is that the frequency of the storm is equal to the frequency of the flow. In other words, if I develop a design storm with a depth equal to the 100-year rainfall for my location, the flow that results from applying it to my basin model is also the 100-year flow. It is also assumed that an area-averaged precipitation frequency relationship can be created by using the point frequency product and applying an area reduction factor (ARF) appropriate for the watershed area. Neglecting to use an ARF when applying point precipitation-frequency products is one of the most common pitfalls in their application. Traditional design storms also require the modeler to choose appropriate spatial and temporal patterns to turn a total storm depth (or several incremental depths for a range of durations of accumulation) into something that can be used as a boundary condition for a hydrologic model.

SST makes no such assumptions. It attempts to replicate the "natural" processes of precipitation generation in a region and avoids making "engineering choices" or assumptions to take design criteria and turn them into something that looks like a real storm - because it is a real storm. The two fundamental assumptions are 1) that the transposition domain is homogeneous, and 2) that the selected severity criteria will generate the desired frequency response (whether precipitation or flow/stage). Both assumptions can be tested using sensitivity analysis.

SST Simulation Structure

SST is based on Monte Carlo simulation, a statistical procedure that uses large samples to approximate the behavior of a probability distribution. Each storm and its re-positioning is a sample of the population of possible storm occurrences with an unknown (and unknowable) probability distribution. The sampling procedure is based on a technique called the block bootstrap that re-samples storms from the catalog in their entirety, preserving their space-time structures.

From the bottom up, SST simulations are structured like so:

  • Each event is an equal-chance selection of storm from the catalog with an equal-chance placement of the storm inside the transposition domain
    • The storm's precipitation interaction with the study watershed can be captured
    • Optionally, hydrologic processes can be simulated to capture flood responses
  • Each synthetic year is a number of events that depends on the size of the catalog
    • The number of events is either random (drawn from a distribution) or fixed at the average rate of storms in the catalog
    • Annual maximum statistics are computed as the maximum of all events in the year, whatever the statistic
  • Each realization is a number of synthetic years that determines the quality of estimates for rare frequencies
    • The number of synthetic years should be much larger than the rarest average recurrence interval (ARI) of interest
  • A full uncertainty simulation will have multiple realizations
    • For SST, this process captures Monte Carlo error
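The structure above can be sketched as nested Monte Carlo loops. This is an illustrative skeleton only: the catalog, domain bounds, and the simulate_event response function are stand-ins for the real storm transposition and (optional) hydrologic computations.

```python
import random

# Sketch of the SST Monte Carlo structure: events within years within
# realizations. `simulate_event` is a placeholder for transposing a gridded
# storm and computing a response (precipitation depth, flow, or stage).
def sst_simulation(catalog, domain_xy, simulate_event,
                   events_per_year, n_years, n_realizations, seed=42):
    rng = random.Random(seed)  # fixed seed keeps the simulation repeatable
    realizations = []
    for _ in range(n_realizations):
        annual_maxima = []
        for _ in range(n_years):
            event_results = []
            for _ in range(events_per_year):
                storm = rng.choice(catalog)        # equal-chance storm selection
                x = rng.uniform(*domain_xy[0])     # equal-chance placement in
                y = rng.uniform(*domain_xy[1])     # the transposition domain
                event_results.append(simulate_event(storm, x, y))
            # annual maximum statistic: the largest event in the year
            annual_maxima.append(max(event_results))
        realizations.append(sorted(annual_maxima))
    # Spread across realizations reflects Monte Carlo error
    return realizations
```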

HEC-HMS Implementation

HEC-HMS uses a simulation type called the Uncertainty Analysis (UA) to allow for systematic variation of model parameters. In HMS, the Meteorologic Model controls how atmospheric boundary conditions are served up to the basin model. Depending on the processes the user selects for their met model, there are a number of parameters they can control (or that can be controlled by the UA). The Gridded Precipitation method for the Precipitation process has a number of controls that allow the user to select which precipitation data to apply, shift it in time, and shift it in space. These features were added by the HMS team to make hypothetical modeling with observed gridded precipitation events easy. The parameters of the Gridded Precipitation method that the UA controls in an SST simulation are the data source (storm event) and its transposition location. The UA has several ways it can vary parameters in all parts of the model, including sampling from probability distributions, reading through a list of values, and so on. Its random sampling scheme allows the user to set an initial seed that ensures the simulation is repeatable as long as the seed and all other settings remain the same.

Met model parameters are not the only things the UA can control. More frequently it is used to explore hydrologic uncertainty by varying basin model parameters. The section Hydrologic Uncertainty will discuss an approach to using the UA to capture parameter and initial condition uncertainty in conjunction with an SST simulation.

The UA does not have any notion of years or realizations. It has one setting for simulation size which is the total number of samples. This means that the results will need to be post-processed in some manner to make sense of them (see Post Processing the Results).

As of HEC-HMS version 4.13, the UA has been parallelized to take advantage of the independent nature of each sample. This improves computational performance, making simulations with a large number of samples more tractable.

Data Needs for SST

Storm observations come from high-resolution precipitation data with a typical spatial resolution of 1-4 km and temporal resolution of 1 hour. A long period of record (POR) is always helpful, but thanks to the space-for-time substitution principle, large samples of storm events can be generated even though radar-based precipitation has a much shorter POR than gage observations. Since SST relies on real observations of rainfall distributions in space, the data must do a good job of representing the time-varying nature of storms in space. Using gage data that has simply been interpolated is not recommended; radar-based data are best for this purpose.

Currently, we recommend using the NOAA Analysis of Record for Calibration (AORC) dataset for storm catalog development. The most recent version of the data has 1 km spatial resolution, 1 hour temporal resolution, and a period of record for CONUS of 1979-near present. It is available on AWS.

AORC is a synthesis of a number of sources of precipitation data that allow it to have a long period of record and high resolution. One drawback is that not all sources of the data are available for the whole POR, so there are some temporal inconsistencies. Overall, reviews of AORC's performance are generally favorable, and the FFRD team is treating it as the best available data until something better comes along. We are in a golden age for meteorological data that makes SST feasible.

Transposition Domain

Developing a transposition domain for a watershed is part science and part art. It requires expert judgment and several meteorological datasets to develop. Considerations for what makes up this region include (but are not limited to):

  • Normal annual total precipitation
  • Seasonal variability in precipitation
  • Dewpoint and temperature for convective meteorology
  • Frequency of tropical storm occurrence and proximity to a coastline
  • Elevation, orographically-enhanced rainfall, and blocking

The area should be large enough that a good sample of significant events can be collected, but the larger a domain is, the more likely you are stretching the homogeneity assumption.

The map below shows the transposition domain that was developed for the Duwamish (Green) River project in Washington State. The watershed polygon is shown below in red. PRISM normal annual total precipitation is shown as the base layer and gives a sense of the complexity of rainfall in this region - it is highly influenced by orographics. The watershed's headwaters are at the ridge of the Cascade Range and the river flows down to Elliott Bay in Puget Sound.

This transposition domain had to carefully consider the role of atmospheric rivers and their interactions with the mountains in the region. One challenge in identifying a homogeneous region is that the target watershed may be large enough to contain several different precipitation mechanisms, and the domain must encompass all these processes in equal measure. The Duwamish watershed is at lower elevation than neighboring drainages and receives less rainfall at its headwaters. It descends into a valley in the Seattle/Tacoma area that experiences rain shadow effects from the Olympic Mountains to the west. The resulting domain was drawn to exclude storms that occurred in the Coast Range or the Olympic Mountains to the west, and anything on the leeward side of the Cascades. It avoids the regions further south in Oregon where the normal rainfall in the Cascades drops off due to the Klamath Mountains robbing the moisture before it can make it too far inland. The northern boundary passes through the Vancouver, BC metro area and stops just before the rise of the North Shore Mountains. It also reflects the northernmost boundary of the precipitation dataset.

Work is underway by the Hydroclimate Extremes Research Group at University of Wisconsin to develop a repeatable, rapid data-driven statistical approach to defining the transposition domain.

Computing Storm Potential Precipitation

Storm potential precipitation is computed using an optimization routine that performs geographic operations on a polygon and then a zonal statistic computation. This procedure is used by the RainyDay software developed by the University of Wisconsin, and the StormHub software created by Dewberry. For each independent storm event in the record, the optimization routine transposes the watershed polygon over the storm's accumulated precipitation field and computes the average of that field over the polygon at every step. It is searching for the global maximum of the average precipitation over the polygon. The global maximum is the storm's potential precipitation for the watershed and is used to determine if it will be included in the catalog.
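A brute-force illustration of this search is sketched below. Real tools like RainyDay and StormHub use a numerical optimizer over continuous offsets and handle projections and domain clipping; here the accumulated precipitation grid and the watershed "footprint" (a set of cell offsets) are toy stand-ins.

```python
# Illustrative brute-force potential-precipitation search: slide a watershed
# footprint over the storm-total grid and track the maximum footprint-averaged
# depth. The global maximum is the storm's potential precipitation.
def potential_precip(accum_grid, footprint):
    """accum_grid: 2D list of storm-total depths; footprint: (row, col) offsets."""
    n_rows, n_cols = len(accum_grid), len(accum_grid[0])
    max_r = max(r for r, _ in footprint)
    max_c = max(c for _, c in footprint)
    best = float("-inf")
    # Try every placement that keeps the footprint inside the grid (domain)
    for r0 in range(n_rows - max_r):
        for c0 in range(n_cols - max_c):
            avg = sum(accum_grid[r0 + r][c0 + c] for r, c in footprint) / len(footprint)
            best = max(best, avg)
    return best
```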

Creating Storm Catalogs

The major barrier to applying the SST method is having tools that handle the data processing part of the problem. Work is underway at USACE to deploy a version of StormHub to CWBI to allow USACE modelers to generate their own storm catalogs.

The image below shows a snapshot of this search process.

  • Background raster is the 72-hour accumulation for an event beginning on 2015-12-07.
  • Precipitation data are clipped to the extent of the transposition domain.
  • Light grey polygon is the real location of the Duwamish watershed.
  • Black polygon is the position of the watershed polygon when maximized for area-averaged precipitation.
    • The polygon was moved around the transposition domain by a numerical optimization routine.
    • The polygon is not allowed to "leave" the transposition domain.
  • Resulting potential precipitation is 15.39 inches.

Data Format

The preferred data format for the storms data is HEC-DSS. Tools such as the hec-dss-python project can be used for manipulating HEC-DSS files. Each storm should be in its own DSS file with a logical and consistent name. A typical convention is $data-source $event-date $rank , e.g. AORC 2015-12-07 T001 . Other metadata may be added to the filename, such as the type of storm (see https://agu.confex.com/agu/agu24/meetingapp.cgi/Paper/1594675).
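As a small sketch, the naming convention above can be applied programmatically when generating a catalog (the helper names here are hypothetical, not part of hec-dss-python):

```python
# Hypothetical helpers applying the "$data-source $event-date $rank"
# convention for storm names and derived DSS file names.
def storm_name(source, event_date, rank):
    """e.g. storm_name('AORC', '2015-12-07', 1) -> 'AORC 2015-12-07 T001'"""
    return f"{source} {event_date} T{rank:03d}"

def dss_filename(source, event_date, rank):
    # Spaces and dashes are awkward in file paths, so swap in underscores
    return f"{source}_{event_date}_T{rank:03d}.dss".replace("-", "_")
```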

The image below shows one storm event in HEC-DSSVue. Each entry in the catalog shown below is an hourly grid.

  • The A part indicates that the resolution and projection of these grids are 4 km using the SHG grid.
    • Note that this does not actually control the grid's resolution and projection, it is included for bookkeeping.
  • The B part is used to identify which project, watershed, or catalog the storm belongs to.
  • The C part should always be PRECIPITATION
  • The D part is the start of the precipitation recording period
  • The E part is the end of the precipitation recording period
  • The F part can be free-form but is typically used to indicate the source of the precipitation data for bookkeeping.
  • The file's name indicates the precipitation data source, the date when the storm begins, and that it is the largest event (rank 001) above a selected threshold (T) 

Below are three views available in HEC-DSSVue of one of the hourly grids from a storm.


Other Data Formats

HEC-HMS is also capable of using CF-compliant NetCDF format files for boundary conditions, but they have not been used in any SST studies as of yet.

Setting SST Up in HEC-HMS

Basin Model

As SST is based on gridded precipitation data, subbasins in your basin model need a Discretization method. The Discretization process breaks a subbasin into smaller units for, among other things, intersecting it with rainfall data. Use the "Structured" method with the SHG projection. A cell size similar to the cell size of the precipitation data is typically recommended. The most common "default" value for HMS is the SHG2k grid. 

Basin Model Best Practices

  • We recommend using the following methods for your processes in the basin model (any discussion of hydrologic parameters that follow will assume you are using the same methods):
    • Subbasin Loss: Deficit and Constant
    • Subbasin Transform: ModClark
    • Subbasin Baseflow: Linear Reservoir
    • Subbasin Canopy: Simple Canopy (only if ET is necessary)
    • Reach Routing: Muskingum-Cunge
    • Reservoir method: Outflow Curves, Outflow Structures, or Rule-Based
      • If you wish to simulate naturalized/unregulated conditions, you can switch a reservoir to run-of-river by choosing the "--None--" method.
  • If you are basing the simulations on an event calibration, create a copy of that basin model for the SST simulations because you will want to adjust a few things that you might need to keep in your calibrated event simulation basin model.
  • Unlink all observed data (flow, stage, SWE, etc.) from all elements by selecting "--None--" for each observed time-series on the element's Options tab.
    • This will prevent HMS from spending time writing all the observed data and residual time-series to the output DSS file.
    • The observed data are meaningless for the stochastic simulations anyway.
  • If your model contains reservoir elements, use a reservoir modeling method that does not rely on observed time-series data (i.e. NOT the Specified Release method). The Outflow Curve method is simple and can model a wide variety of reservoir types and simple operations.
  • Avoid using Source elements that are dependent on observed time-series data that will not apply during the stochastic simulations.

Goal: Flow or Stage Frequency

Hydrologic processes in the basin model are necessary for estimating flow/stage frequency relationships at an element. The basin model you use will be similar to what is used in typical design storm applications. The result can be very sensitive to initial conditions, especially for the Loss process in subbasins. It is difficult to choose a single initial soil storage quantity that makes sense across the full frequency range and should be treated with uncertainty. See section Hydrologic Uncertainty below.

Make sure you select any elements where you desire results as Analysis Points in the Uncertainty Analysis. Select the Outflow time-series to produce a summary of flow at that location.

Flow/Stage Frequency Best Practices

  • SST will apply a range of precipitation magnitudes to the basin model. Consider using the Variable ModClark method for your Transform process to better capture the response variability due to changing excess precipitation.
  • Finding a set of initial conditions for basin model processes that will work across a range of precipitation events can be challenging. Consider treating the initial conditions as uncertain.
  • Calibrate and validate your model to a number of observed events to get a sense of the variability in calibration parameters. Use this exercise to inform the parameters you use in the stochastic simulations.
  • Simplify processes in your model where you can. If you can turn off a process in your model and still get a good result, consider doing it. The model will be run thousands of times and every second counts.
    • Consider whether you really need processes like Surface, Canopy and Evapotranspiration, etc.
    • If you are running simplified methods, ensure you do not have any extraneous meteorological processes turned on.
      • For example, if you are using the Gridded Precipitation and Hamon Evapotranspiration methods, you only need precipitation and temperature in the met model.
      • Most of the time, you do not need other met processes such as shortwave and longwave radiation.

Goal: Watershed Averaged Precipitation Frequency

Often we are interested in looking at the area-averaged precipitation frequency for a watershed. For these kinds of analyses, we do not need any hydrologic processes turned on - all we need is a Discretization method. This has the benefit of substantially speeding up the simulation. Note: HMS will still produce results for runoff from subbasins even with hydrologic processes turned off, but it is assuming all precipitation becomes direct runoff and is instantly routed to the outlet (in other words, they are bogus results).

If you want to look at the precipitation-frequency response averaged over the entire study watershed, you can use a single subbasin to represent it. If you are starting with a model you delineated in HMS that appears like the basin model in the image on the left below, you can use tools in the GIS menu to merge the subbasins together into one large subbasin representing the entire watershed.

Make sure you have the Precipitation time-series selected for the Subbasin element in the Results setting for the Uncertainty Analysis.

Watershed-Averaged Precipitation Frequency Best Practices

  • Make a copy of an existing basin model that will be modified for running the precipitation-only simulations so you don't lose the information in the original model.
  • If your goal is to compute the watershed-averaged precipitation for the entire watershed, consider re-delineating your project as a single subbasin to the outlet point instead of merging together your subbasins.
    • Alternatively, you can import and georeference an external source delineation polygon (such as the USGS WBD).
  • Turn off all unnecessary processes in your basin model. HMS is perfectly happy to use a basin model with one subbasin that only has a Discretization method specified for the purposes of intersecting rainfall with a polygon.

Meteorologic Model

The Meteorologic Model needs to use the Gridded Precipitation method for the Precipitation process. Most other processes are not needed for these event simulations. For now we are considering the case where we are only working with precipitation.

  • First, ensure that your meteorologic model is set up so that it has the same unit system as the coordinates of your basin model. For SHG with coordinates in meters, set the Unit System to Metric. HMS will handle all the unit conversions for your data in the background - you can still use precipitation data in USCS (inches).
    • This is a peculiarity of the way the storm transposition coordinates are defined. They are treated as a parameter with length units defined by the unit system. They are unaware of the basin model's defined coordinate system.
    • Coordinate data must be in a projected coordinate system. SHG is based on NAD83 CONUS Albers (EPSG:5070) which has planar coordinates in meters.
  • Your Precipitation method should be Gridded Precipitation.
  • At the bottom, use the setting "Set to Default" for the Replace Missing option. This will ensure the model does not abort if precipitation does not intersect your watershed (which happens a lot during an SST simulation). Instead, it replaces missing data with zeroes which is appropriate for this usage.

Two settings in the Gridded Precipitation method are crucial for the SST Meteorologic Model: Time Shift, and Transpose. Choose any grid in your project as the Grid Name to use in this editor - the Uncertainty Analysis will override this (see below).

  • The Time Shift Method "snaps" the storm's start time to the start of the simulation and ensures that the storm occurs during the simulation time window set by the Uncertainty Analysis (see below).
    • Without this, unless the simulation time window aligns with the storm data, no precipitation will be applied.
  • Setting "Transpose" to "Yes" allows the storm to be moved from its observed location to a new location. After turning that option on, the X Coordinate and Y Coordinate options will appear. I typically use a coordinate near the middle of the study watershed for convenience, but the Uncertainty Analysis will override this (see below). If you hover your cursor over the Basin Model Map, a tooltip with the coordinates in the basin model's coordinate system will appear.
    • Example tooltip:
  • Only use the Bias Grid option if you are doing Normalized Transposition.

Normalized Transposition

The Bias Grid setting in the Gridded Precipitation method allows you to use a procedure called Normalized Transposition which is an area of active research.

Shared Data Components

Gridded Precipitation

Each event in the catalog is a Precipitation Gridset in HEC-HMS.

Some best practices for these storm datasets:

  • Each event should be the same duration (e.g. 72 hours)
  • Each event should be in its own DSS data file and follow a naming convention
  • Store the DSS files in the project's data folder
  • The Storm Center coordinates should be in the same coordinate system as the basin model

You must either import these gridsets into the HMS project or use a scripting solution to edit the HMS ".grid" file directly. The image below shows a project that has a storm catalog with 440 events. Manually importing each storm event into a project of this size would be incredibly cumbersome. The current best practice is to use scripting (for example, Python) to generate each entry in the .grid file so that each event is loaded upon opening the HMS project.

Editing HMS Files

Always, ALWAYS make a backup of any HMS file you plan to edit. Corrupted HMS input files can destroy a project.

Best practice is to make an archive version of your entire project (using a zip file or other compressed format locally) before manually editing input files so you have a restore point in case anything goes wrong. This is generally good modeling practice even if you aren't manually editing files.

Precipitation Gridset Format in .grid File

Each grid entry in the HMS .grid file follows a standard plaintext format. Below is an example of the entry that corresponds to the information in the Grid Data Component Editor shown above. The indents are five spaces. Each event/gridset is separated by a blank line.

Grid: AORC 2021-11-26 T154
     Grid Type: Precipitation
     Storm Center X: -1911284.3532963141
     Storm Center Y: 3118223.49692294
     Data Source Type: External DSS
     Filename: data\AORC_2021_11_26_T154.dss
     Pathname: /SHG4K/DUWAMISH/PRECIPITATION/26NOV2021:0000/26NOV2021:0100/AORC/
End:
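A minimal sketch of scripting these entries follows (and remember: back up any HMS file before editing it). The helper names are hypothetical; the field names mirror the entry format documented in this section, but verify the output against a .grid file written by your version of HMS.

```python
# Sketch: generate HMS .grid file entries for a storm catalog.
def grid_entry(name, x, y, dss_file, pathname):
    """Return one plaintext .grid entry (five-space indents, per the format)."""
    return "\n".join([
        f"Grid: {name}",
        "     Grid Type: Precipitation",
        f"     Storm Center X: {x}",
        f"     Storm Center Y: {y}",
        "     Data Source Type: External DSS",
        f"     Filename: {dss_file}",
        f"     Pathname: {pathname}",
        "End:",
    ])

def write_grid_file(path, storms):
    """storms: iterable of (name, x, y, dss_file, pathname) tuples.
    Entries are separated by a blank line, per the format."""
    with open(path, "w") as f:
        f.write("\n\n".join(grid_entry(*s) for s in storms) + "\n")
```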

Storm Names

Storm names are used by the Uncertainty Analysis to sample storm events from the catalog. They are represented by a Parameter Value Sample.

To view existing and create new Parameter Value Samples, use the Paired Data Manager in the Components menu. Create new ones using the "New..." button. Create one for Storm Names.

Then, in the Watershed Explorer, navigate to the Paired Data > Parameter Value Samples folder, and select the Storm Names paired data.

  • Set the Data Source to Manual Entry
  • Set the Category to Precipitation
  • Set the Method to Gridded Precipitation
  • Set the Parameter to Grid Name

Next, switch to the Table tab in the Paired Data Component Editor. Here, you can paste a list of the names of the storms you wish to sample from. They must correspond to the names of the precipitation gridsets in your project. One way to get this list is to use a script to comb through the grids in the HMS .grid file, pull their names, and write them out to a text file.

The SSTUtilities repo on HEC GitHub has a script called ExtractFromGridFile.py with functions that make this process easy. The function extract_grid_names() produces a plaintext list of the names of all the grids in an HMS .grid file. Used in conjunction with reduce_to_sst_storms(), which produces a version of a project's .grid file that only has SST storms in it, you can get the storm sample list in a way that guarantees it matches the names of the storms in the project.
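If the SSTUtilities scripts are unavailable, the name-extraction step can be approximated with a few lines of standard-library Python. This sketch assumes each entry begins with an unindented "Grid: <name>" line, as in the .grid format shown earlier; it is not the SSTUtilities implementation.

```python
import re

# Sketch: pull grid names out of the text of an HMS .grid file.
# Indented lines like "     Grid Type: ..." are not matched because the
# pattern is anchored to the start of the line.
def list_grid_names(grid_file_text):
    return re.findall(r"^Grid: (.+)$", grid_file_text, flags=re.MULTILINE)

# Usage: names = list_grid_names(open("my_project.grid").read())
```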

Storm Center Coordinates

Storm coordinates are used by the Uncertainty Analysis to sample the position of the transposed storms within the transposition domain. The X and Y coordinates each need their own Parameter Value Samples. Create one in the same manner you did for Storm Names.

Generating Storm Center Coordinates

The current best practice is to generate a large sample of points within the transposition domain using an external tool such as a GIS. See Generating Coordinates using QGIS for a brief guide.
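As an illustration of what the GIS step produces, the sketch below draws uniform random points inside a polygon using rejection sampling from its bounding box. The unit-square polygon is a toy stand-in; a real domain polygon would come from your GIS, in the basin model's projected coordinate system.

```python
import random

def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def sample_domain(polygon, n, seed=1):
    """Uniform random points inside the polygon via rejection sampling."""
    rng = random.Random(seed)
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    points = []
    while len(points) < n:
        x = rng.uniform(min(xs), max(xs))  # draw from the bounding box,
        y = rng.uniform(min(ys), max(ys))  # keep only points in the polygon
        if point_in_polygon(x, y, polygon):
            points.append((x, y))
    # Write the X and Y columns out as the two Parameter Value Sample lists
    return points
```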

Uncertainty Analysis

Create a new Uncertainty Analysis through the Compute menu by going to Create Compute > Uncertainty Analysis... (or by using the Uncertainty Analysis Manager).

The naming convention I use for the analysis is SST + $iterations + $normalization + $realization . For example, the image below shows the name is SST 10k Unnorm 1 . This is a 10,000 sample simulation that is not using normalization. It is the first realization of the simulation.

The image below shows an Uncertainty Analysis Component Editor once its setup is complete.

Simulation Settings

Analysis Points

  • Select the gear icon on the right side of the Uncertainty Analysis component editor. A "Results" table will appear.
  • If you are using this project to compute watershed-averaged precipitation frequency, select the Precipitation time-series for any subbasins where you want this result using the checkbox on the left side.
  • If you are producing estimates of flow frequency, select the Outflow time-series for any elements (subbasins, junctions, reservoirs, etc.) where you want this result.
    • If you have elements that produce stage estimates (e.g. reservoirs) you can select stage time-series as well.
  • The output interval should be equal to the model timestep (usually 1 hour for these applications).
  • Note: your project may have specific needs that differ from the above recommendations.
  • After selecting, select Save then Close.

Basin Model

Meteorologic Model

Time Window and Time Interval

  • Ensure the time window is at least as long as the duration of the storm events. If you are doing flow/stage frequency analysis and the watershed is large and has considerable lag time, buffer the duration as appropriate.
    • Even if the storm has concluded and no more rainfall is being applied, the additional buffer allows hydrographs to recede.
  • I use a "fake" date for the simulation as a reminder, whenever looking at the data, that they are synthetically generated and not representations of "real" events.
    • The example above uses 01Jan2100 as the event start date.
    • The "Normalize Start" setting in the met model will automatically move each event's start to match the start time of the simulation. Without this setting on, the met model will look into the gridset for whatever data is available for 01Jan2100.
  • Use a time interval appropriate for the problem you are trying to solve. Most of the time, setting this to the same timestep as your precipitation data's temporal resolution is the right approach.
    • Using too short or too long of a timestep induces different kinds of issues - it's best to link it to the precipitation's temporal resolution.


Checking Time Window Length for Flow Results

If you run an SST simulation in the UA with a smaller number of events (100 may be enough) and output flow at the most downstream junction or sink in the basin model, you can check if the time window is long enough. You should be able to see the summary hydrograph recede. If the summary hydrographs are cut off, you should lengthen the time window.

Here is an example of a result that interrupts the summary hydrograph and needs a longer time window:

This is the same model with a lengthened time window:

You can see that despite the 72-hour rainfall duration, the hydrographs need at least 4 days to guarantee the peak is captured.

Number of Simulations

  • The minimum number of simulations to run is dependent on the rarest average recurrence interval (ARI) you wish to estimate and the number of events per year in your catalog.
    • At an absolute minimum, to estimate the 100-year flow/stage/precipitation, you need 100 times the average annual rate of events in your catalog.
    • If you average 10 events per year in your catalog, the absolute floor for simulations estimating the 100-year ARI is 1,000.
    • Using this minimum guideline will result in significant Monte Carlo error in your estimate of the rarest ARI events.
    • Various guidelines exist for the minimum number of simulations in this case, but increasing the minimum value by a factor of 5-10 times is a good start.
  • Depending on how you want to post-process the results (see the section below), you may choose a fixed storm count for each year, or allow the count to vary for each year in the simulation.
    • The simplest approach is to assume that each synthetic year of the simulation has N events, where N is the average rate of storms in the catalog.
    • Simulation experiments have shown that using the constant average rate of events produces the same result as using a varying rate of storms (in the same manner they occur in the catalog).
    • Using the constant value also avoids the small possibility of getting 0 events in a year which can occur when the count is allowed to vary
      • The probability of getting a count of 0 with a mean rate of 10 is 4.54e-5
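The minimum-count arithmetic and the zero-event probability above can be checked with a short calculation (the catalog rate of 10 events per year and the 100-year ARI are the examples from the text):

```python
import math

# Example catalog statistics from the text above
events_per_year = 10   # average rate of storms in the catalog
target_ari = 100       # rarest average recurrence interval (years) to estimate

# Absolute floor on the number of simulated events, plus the suggested
# 5-10x safety factor to reduce Monte Carlo error
min_events = target_ari * events_per_year          # 1,000
recommended_range = (5 * min_events, 10 * min_events)

# Poisson probability of drawing 0 events in a year when the count varies:
# P(N = 0) = exp(-lambda) with lambda = 10
p_zero = math.exp(-events_per_year)

print(min_events)          # 1000
print(recommended_range)   # (5000, 10000)
print(f"{p_zero:.3g}")     # 4.54e-05
```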

Seed Value

  • The seed value allows randomized simulations to be repeatable. HMS provides a default seed value that you probably don't need to change.
    • Two uncertainty analyses with all the same settings and the same seed will produce the same results.
    • If you want to generate your own seed the same way HMS does, you can use a tool like https://currentmillis.com/
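If you do want to generate a seed that way, the linked site simply reports the current Unix epoch time in milliseconds. A minimal sketch, assuming a millisecond-epoch value is an acceptable seed:

```python
import time

# Current Unix epoch time in milliseconds, the same value reported by
# currentmillis.com; usable as a seed for a repeatable analysis
seed = int(time.time() * 1000)
print(seed)  # varies with the clock, e.g. a 13-digit integer
```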

Setting Up the Parameters

To add a parameter to the Uncertainty Analysis, right-click on the Uncertainty Analysis on the Compute tab of the Watershed Explorer and select Add Parameter. For the meteorological part of SST you will need at least three parameters. Any simulation of hydrologic process uncertainty will require more.

Storm Names

The settings for Storm Names are shown below. Note the sampler type is "Specified Values - Random (Independently Random)". This means that it will randomly pull a storm name from the list (with replacement) and do so without trying to keep it in the same order as any other variable. The selected Parameter Value is the Parameter Value Samples you created earlier.

Storm Coordinates

The settings for the X Coordinate are shown below. The settings should be the same for the Y Coordinate (swapping in Y Coordinate for the Parameter, and the appropriate Parameter Value Samples for the Parameter Value). Note the sampler type is "Specified Values - Sequential Loop". Because the points you generated were in a random order, the sampler can go through these lists in order. This achieves two things: 1) a random order determined by the point generation, and 2) ensuring that each X and Y coordinate stays together producing a valid coordinate.
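The difference between the two sampler types can be illustrated with a small sketch (the storm names and coordinates below are made-up values, not from a real catalog):

```python
import random

random.seed(12345)  # fixed seed so the draws are repeatable

storm_names = ["Storm_A", "Storm_B", "Storm_C"]
# Pre-shuffled coordinate lists; index i of xs and ys together form one
# valid generated point
xs = [512000.0, 498500.0, 530250.0]
ys = [5271000.0, 5260400.0, 5282100.0]

events = []
for i in range(5):
    name = random.choice(storm_names)  # "Independently Random": with replacement
    x = xs[i % len(xs)]                # "Sequential Loop": walk the list in
    y = ys[i % len(ys)]                # order, so X and Y stay at the same
    events.append((name, x, y))        # index and form a valid pair

# Every (x, y) drawn this way is one of the three valid generated points
```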

Post-Processing the Results

Most of the time, we are interested in AEP-based quantiles for our results. The simulation structure used in SST doesn't make any assumptions about event frequency or annual maxima, and post-processing the results is required to get these kinds of estimates. One major upside to this approach is that you are not forced to assume one event is the annual maximum event for all locations in a watershed at the same time.

For all the target variables of interest, make sure you have selected the appropriate time-series as an analysis point. The HMS Uncertainty Analysis only saves the data that you specify in this setting, because running the model thousands of times generates a very large volume of data. You can specify multiple elements/locations of interest and multiple time-series. For each time-series you can create a frequency curve.

To compute a frequency curve for a location of interest, you will need to extract the annual maximum from the Uncertainty Analysis results by creating "blocks" and computing the maxima. HEC-HMS outputs are stored in a DSS file named after the Uncertainty Analysis you created.

The DSS path for the simulation results for your analysis location is //Basin Name/Realization-Variable///MCA:Uncertainty Analysis Name/, for example //Duwamish/Realization-Precipitation Total//MCA:SST 10k Norm 1/. This is a paired data object, and it records the realization number and the value of the variable pulled from the simulation.
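The pathname pattern can be assembled from its parts; a sketch using the example names from the text:

```python
# Building the by-realization DSS pathname from the pattern above
# (basin, variable, and analysis names are the examples from the text)
basin = "Duwamish"
variable = "Precipitation Total"
ua_name = "SST 10k Norm 1"

dss_path = f"//{basin}/Realization-{variable}///MCA:{ua_name}/"
print(dss_path)  # //Duwamish/Realization-Precipitation Total///MCA:SST 10k Norm 1/
```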

To extract a frequency curve from one of these entries:

  • Divide up the results into non-overlapping blocks of size N, where N is the average rate of events per year.
    • If you used a variable count of storms per year, follow the counts you used as the block size. (Note: the rest of these instructions assume you use the constant rate.)
  • For each block of size N, keep the largest result for that block. Treat this as the annual maximum for the synthetic year.
  • Plot/analyze the annual maxima as you normally would
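The block-maxima steps above can be sketched as follows (the peak values are hypothetical; the real values come from the DSS paired data):

```python
# Sketch of the block-maxima reduction described above
def annual_maxima(values, events_per_year):
    """Split simulated event results into synthetic years of fixed size N
    and keep the largest value in each block as that year's annual maximum."""
    n_years = len(values) // events_per_year
    return [
        max(values[i * events_per_year:(i + 1) * events_per_year])
        for i in range(n_years)
    ]

# Example: 10 events/year over 3 synthetic years of hypothetical peak flows
peaks = [120, 95, 310, 80, 60, 205, 150, 90, 75, 110,   # year 1 -> 310
         400, 130, 85, 70, 220, 95, 180, 60, 140, 100,  # year 2 -> 400
         90, 75, 260, 115, 55, 330, 145, 80, 95, 70]    # year 3 -> 330
print(annual_maxima(peaks, 10))  # [310, 400, 330]
```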

This extraction can be done independently for every element; there is no assumption that one event creates the annual maximum everywhere. For example, in synthetic year 1, the 2nd event might cause the annual maximum flow at Location A, and the 8th event might cause it at Location B. This is one of the strengths of SST.

This post-processing script contains functions for reducing an HMS UA SST run's results down to an AMS for precipitation data for a specific element in the model (a subbasin). It can then be used to write that AMS out to a csv file to be used in other applications.

  • The extract_precipitation_results() function gets the correct by-realization precipitation paired data containing the simulation results from an HMS project, given a UA name, the project's path, and the element (DSS B-part) where the result is desired.
  • The extract_annual_maximum_precipitation() function can use the results of the prior function to extract an AMS. It also needs the mean rate of storms per year from the catalog and the number of years. The paired data should have as many entries as the events per year times the number of years.
  • Finally, the write_ams_to_csv() function can take that AMS dataframe and write it out to a csv in the HMS project's data folder.

For flow at elements like junctions and subbasins, use the extract_flow_results() and extract_annual_maximum_flow() functions in the post-processing script.

These functions can be used in a loop to iterate over all the elements where you want to extract an annual maximum series (for example, a list of junctions.)
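Such a loop might look like the sketch below. The function names come from the post-processing script described above, but their signatures are assumptions, and the stub bodies only stand in so the loop pattern runs on its own; substitute the real imports from the script.

```python
# Stand-ins for the script's functions so this sketch is self-contained.
def extract_flow_results(ua_name, project_path, element):
    # The real function reads the by-realization flow paired data for the
    # element (DSS B-part) from the Uncertainty Analysis DSS file.
    return [120.0, 310.0, 95.0, 400.0, 260.0, 150.0]  # placeholder event peaks

def extract_annual_maximum_flow(results, events_per_year, n_years):
    # The real function reduces the event results to an annual maximum series.
    return [max(results[i * events_per_year:(i + 1) * events_per_year])
            for i in range(n_years)]

project_path = "C:/HMS/Duwamish"   # hypothetical project location
ua_name = "SST 10k Norm 1"
events_per_year, n_years = 3, 2    # small values for the sketch

ams_by_element = {}
for element in ["Junction A", "Junction B"]:   # hypothetical analysis points
    results = extract_flow_results(ua_name, project_path, element)
    ams_by_element[element] = extract_annual_maximum_flow(
        results, events_per_year, n_years)

print(ams_by_element)
```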

Uncertainty and Multiple Realizations

The current SST process only quantifies natural variability; we are not explicitly treating any variables as knowledge uncertainty. This assumes the sample and the population are the same. However, because each realization draws a finite sample from the large catalog and the effectively limitless number of placements for each storm, any one realization has significant sample (Monte Carlo) error. By running multiple realizations of the sampling process we can quantify this error and generate an estimate of the posterior variance for quantities of interest. Each realization has the exact same model setup, but the random seed is changed so that the sequence of random numbers is different. A collection of realizations then reflects sample error rather than knowledge uncertainty in this context.

Hydrologic Uncertainty

There are two key sources of hydrologic uncertainty in these types of hypothetical simulations: parameter uncertainty and initial condition uncertainty. Parameter uncertainty arises due to natural variability in parameters across events or error induced by the way the model process simplifies reality. Initial condition uncertainty occurs because the watershed conditions are not always the same prior to a heavy rainfall event. This guide: Hydrologic Uncertainty with SST discusses a method for handling both types of uncertainty in an SST simulation for flow or stage frequency.

Selected References