Basic Concepts
Uncertainty Analysis
Uncertainty Analysis is the process of determining the total error in the simulated watershed response, for example, the flow at the outlet. The simulated flow at the outlet depends on many individual components, each with its own error. There is error in the meteorologic data and observed flow data because it is generally impossible to perfectly measure precipitation and discharge. There is error in the models of the hydrologic processes because it is generally impossible to include every possible process at the scale it occurs, for example, animal burrows or plant transpiration. There is error in the model parameter values because the equations are solved at a scale ranging from meters up to whole subbasins, so area-averaged values must be used. The error in the whole watershed response includes all of these individual errors and the complex way in which they interact.
The watershed model is very complex and it may not be possible to attribute the total error to individual components. An error in the precipitation data may be compensated by a corresponding error in the infiltration parameter values. An error in the mathematical formulation of the infiltration or transpiration process may be compensated by a corresponding error in the parameter values. An assumption about process scale may lead to effective area-averaged parameter values that do not match values that would be measured in the field. It is usually not possible to determine the exact error in each component or how the errors interact and accumulate throughout the watershed model.
The error in an individual model parameter may be described with a probability distribution. In one case, the selected process model is well-suited to the watershed, the input to the model is accurate, and the parameters can be estimated from field observations; there is very little uncertainty in the parameter value. In another case, the model misses important subtleties of the physical process, the input is poor, and parameter estimation is difficult; there is a high degree of uncertainty in the parameter value. In both cases, a probability distribution can be parameterized to reflect the uncertainty in the parameter, whether small or large. Different model parameters require differently formulated probability distributions, and the distributions may change from one watershed to another or from one historical time period to another. The Uncertainty Analysis in HEC-HMS can be used to describe the error in the basin model parameters and the input meteorology.
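The idea of encoding parameter uncertainty as a probability distribution can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not part of HEC-HMS; the curve-number parameter, the choice of a normal distribution, and all numeric values are assumptions chosen only for the example. It draws samples for the two cases described above: a narrow distribution for a well-constrained parameter and a wide distribution for a poorly constrained one.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Case 1: the process model suits the watershed, the input is accurate,
# and the parameter can be estimated from field observations, so the
# distribution is narrow (low uncertainty). Parameter name and values
# are illustrative only.
cn_low_uncertainty = rng.normal(loc=75.0, scale=1.5, size=10_000)

# Case 2: the model misses process detail, the input is poor, and
# estimation is difficult, so the distribution is wide (high uncertainty).
cn_high_uncertainty = rng.normal(loc=75.0, scale=12.0, size=10_000)

for label, sample in (("low uncertainty", cn_low_uncertainty),
                      ("high uncertainty", cn_high_uncertainty)):
    print(f"{label}: mean={sample.mean():6.2f}, std={sample.std():5.2f}")
```

Both distributions are centered on the same best estimate; only the spread differs, which is exactly what the parameterized distribution is meant to express.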
There are multiple uncertainty techniques available, such as Monte Carlo, multi-model approaches, Bayesian statistics, and multi-objective analysis (Moges et al.). The approach used in HEC-HMS is the Monte Carlo method.
Monte Carlo
The Monte Carlo method is one approach to estimating the uncertainty in the simulated watershed response given the uncertainty in each of the model parameters. Monte Carlo sampling is a statistical technique used in hydrology (and many other fields) to model and analyze the uncertainty and variability of complex systems. Within HEC-HMS, the Monte Carlo method works through an automated sampling procedure. Each sample is created by drawing the model parameters according to their individual probability distributions, and each sample is simulated to obtain a watershed response corresponding to the sampled parameter values. The responses from all of the samples can then be analyzed statistically to evaluate the uncertainty in the simulated watershed response.
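The structure of this procedure can be sketched in a few lines of Python. This is a minimal sketch, not the HEC-HMS implementation: the two parameters, their distributions, and the run_watershed_model function are all hypothetical stand-ins (a real study would run the HEC-HMS basin model at that step).

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def run_watershed_model(curve_number, lag_time_hr):
    """Stand-in for a basin simulation that returns peak outlet flow.

    A real uncertainty analysis would run the HEC-HMS basin model here;
    this toy relationship only exists to keep the sketch self-contained.
    """
    return 1000.0 * (curve_number / 75.0) ** 2 / max(lag_time_hr, 0.1)

n_samples = 5_000
peak_flows = np.empty(n_samples)

for i in range(n_samples):
    # Create a sample by drawing each parameter from its own distribution.
    curve_number = rng.normal(loc=75.0, scale=5.0)
    lag_time_hr = rng.lognormal(mean=np.log(3.0), sigma=0.3)
    # Simulate the sample to obtain the corresponding watershed response.
    peak_flows[i] = run_watershed_model(curve_number, lag_time_hr)

# Analyze all of the responses statistically.
print(f"mean peak flow: {peak_flows.mean():.0f}")
print(f"5th/95th percentiles: {np.percentile(peak_flows, [5, 95])}")
```

The final statistics (mean, percentiles, or a full distribution of responses) are what characterize the uncertainty in the simulated watershed response.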
In HEC-HMS, sample size is a required input. Sample size refers to the number of random samples generated from the probability distributions to perform the Monte Carlo simulations. The larger the sample size, the more accurately the aggregate of samples represents the parameterized distribution. As the sample size increases, the reliability of the Monte Carlo simulation improves, yielding more stable and precise statistics, such as the mean; these sample statistics progressively converge to the true population parameters. However, this comes at the cost of increased computational resources. Small sample sizes, on the other hand, may not fully capture the characteristics of the distribution, which can lead to less accurate or misleading simulation results.
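The convergence behavior can be demonstrated directly. In this sketch (the distribution and its parameters are illustrative, not taken from any HEC-HMS model), the sample mean approaches the true population mean as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

true_mean, true_std = 75.0, 12.0  # population parameters (illustrative)

# As the number of samples grows, the sample mean stabilizes and
# converges toward the true population mean.
for n in (10, 100, 1_000, 10_000, 100_000):
    sample = rng.normal(loc=true_mean, scale=true_std, size=n)
    print(f"n={n:>6}: sample mean = {sample.mean():7.3f}, "
          f"abs error = {abs(sample.mean() - true_mean):.3f}")
```

Running this shows the small-sample estimates wandering away from the true mean while the large-sample estimates settle close to it, which is the trade-off between accuracy and computational cost described above.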