Version 2.0
Introduction
HEC-FDA Version 2.0 is a modern overhaul of the HEC-FDA software and the first major release of the program in more than a decade. HEC-FDA Version 2.0 is an open-source software program built with Microsoft's .NET software development kit and is largely written in the C# programming language.
The software program was developed with several important features that enhance its stability over the long term. Some examples include:
- Model-View-ViewModel Framework. This software architecture completely separates the computational engines from the user interface code making software maintenance easier and safer. Read more about the framework in Microsoft's documentation.
- Open Source. Anyone with internet access can inspect the codebase, evaluate what takes place under the hood, and even submit requests for code changes. The public repository can be accessed on GitHub.
- DevOps Procedures. The team relies on the git flow branching strategy to ensure that the software is in a constantly deliverable state, automated testing upon every code change to prevent unintended consequences, and version control for secure development. Our contributing guide documents these and other software development standards that the HEC-FDA team follows. Read more about HEC's adoption of DevOps here.
- Team Collaboration. Multiple people can effectively operate in all classes in this codebase and more people are being onboarded. HEC-FDA will not be exposed to the single point of failure (key person risk) that takes place through reliance on a single person to maintain and develop a codebase or specific parts of a codebase. Review contributor activity on GitHub.
There are two computational engines in the HEC-FDA software program: a risk engine and a consequences engine. The risk engine in HEC-FDA Version 2.0 matches the risk engine in HEC-FDA Version 1.4.3 very closely, while the consequences engine in HEC-FDA Version 2.0 has been modernized. The consequences engine produces aggregated stage-damage functions with uncertainty. The calculation of these functions now relies on georeferenced hydraulic modeling - text files with stages referenced to river stations will no longer be supported. HEC-FDA uses the RasMapper library to read georeferenced hydraulic modeling, including dynamic results stored in HDF. The way that HEC-FDA Version 2.0 computes consequences has been improved from HEC-FDA Version 1.4.3 in several ways. Together, these changes have been shown to have a moderate effect on mean EAD estimates; in most cases, the difference in mean EAD estimates attributable to the improvements to the consequences engine is less than 10%. Those improvements are described below. Different studies will be impacted differently. Examples of the resulting impact to results can be examined under Case Studies.
This new version of the HEC-FDA software comes with a completely new user interface that provides modern Microsoft software functionality. That's right folks, you can now copy and paste. The image below of the create frequency function editor displays a typical table-with-plot editor, which has complete copy-and-paste functionality and the ability to inspect plotted data. In the image below with results for Muncie and a blue histogram, system performance statistics are displayed for a given scenario compute. The user interface includes a study tree on the left-hand side and editor and results windows on the right-hand side. To learn more, navigate to HEC-FDA User Interface in our user documentation.
User Resources
HEC-FDA Version 2.0 is being released alongside a wealth of user documentation and other resources:
- HEC-FDA Quick Start Guides with video tutorials to get you up and running quickly.
- Version 1.4.3 to Version 2.0 Study Conversion tutorial for easy conversion reference.
- HEC-FDA User Manual for thorough descriptions of software functionality.
- HEC-FDA Technical Reference for detailed information about how the computational engine works.
- PROSPECT #209 Course Materials Using HEC-FDA Version 2.0 Software. We recorded lectures and hands-on workshops.
- Discourse FDA user group for issue reporting, troubleshooting questions, and opportunities to share lessons learned.
New Computational Functionality
Geospatial Processing
HEC-FDA Version 2.0 uses HEC-RAS Mapper Software Libraries for processing geospatial data, including terrain, hydraulics modeling, impact area sets, and structure inventories.
Uniform Uncertainty Distribution
Users can now parametrize uncertainty using a uniform distribution, which is specified using a minimum value and a maximum value. For a random number (non-exceedance probability) p, the inverse CDF of a uniform distribution is calculated as \text{Min} + p \times \left(\text{Max} - \text{Min}\right). Uniform uncertainty can be applied for summary relationship uncertainty, such as the uncertainty in stage for a given discharge or the uncertainty in regulated flow for a given unregulated flow. Uniform uncertainty can also be applied to economic uncertainty, such as the uncertainty in percent damage for a given depth above the first floor elevation, or the uncertainty in the content-to-structure value ratio. The use of this distribution pushes more uncertainty into the tails, and is appropriate when there are obvious bounds within which any value is equally likely.
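The inverse CDF above is simple enough to sketch directly. The following is a hypothetical illustration (not HEC-FDA's API) that maps a non-exceedance probability onto a uniform stage uncertainty band:

```python
def uniform_inverse_cdf(p, minimum, maximum):
    """Inverse CDF of a uniform distribution: map a non-exceedance
    probability p in [0, 1] onto the interval [minimum, maximum]."""
    return minimum + p * (maximum - minimum)

# Example: stage uncertainty bounded between 10 and 12 feet.
print(uniform_inverse_cdf(0.0, 10.0, 12.0))  # 10.0 (the minimum)
print(uniform_inverse_cdf(0.5, 10.0, 12.0))  # 11.0 (the midpoint)
print(uniform_inverse_cdf(1.0, 10.0, 12.0))  # 12.0 (the maximum)
```

Because every value between the bounds is equally likely, quantiles are spaced linearly in probability, which is what pushes more mass toward the tails relative to a triangular or normal distribution.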
Uncertainty About Content, Other, and Vehicle Value
In HEC-FDA Version 1.4.3, uncertainty about content (or other) value could only be specified if a content (or other)-to-structure value ratio was used. The user could not specify uncertainty about content value, other value, or vehicle value if the values were included in the structure inventory. In HEC-FDA Version 2.0, the user can optionally specify the uncertainty about content, other, and vehicle value in the same way as they enter uncertainty about structure value - as a measure of variation relative to the value in the inventory. See Occupancy Types in the HEC-FDA User Manual for more information. This is a source of uncertainty that was missed in HEC-FDA Version 1.4.3; the introduction of this uncertainty will appropriately add to the uncertainty in consequences.
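As a sketch of how such relative uncertainty might be sampled, consider the following hypothetical illustration, which assumes the variation is expressed as a percent of the inventoried value with a normal error (this is not HEC-FDA code):

```python
import random

def sample_value(inventory_value, percent_std_dev, rng):
    """Hypothetical sketch: sample a content, other, or vehicle value
    given uncertainty expressed as a percent of the inventoried value,
    here assumed to be a normal standard deviation."""
    std_dev = inventory_value * percent_std_dev / 100.0
    return max(0.0, rng.gauss(inventory_value, std_dev))

rng = random.Random(42)
samples = [sample_value(100_000.0, 10.0, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
# The sampled values are centered on the inventoried value.
assert abs(mean - 100_000.0) < 1_000.0
```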
Compute Without Aggregated Stage-Damage Functions
Users can now compute and obtain system performance statistics without specifying aggregated stage-damage functions. This functionality is appropriate when system performance rather than consequences is the focus, and the only results of interest are annual exceedance probability, long-term exceedance probability, and/or assurance.
Total Risk
When calculating flood risk behind a levee, the software now has the capability to apply total probability theory to failure scenario damages and non-failure scenario damages to calculate total risk, where total risk is the sum of failure risk and non-failure risk. The probabilistic combination is defined by the system response curve. This functionality is appropriate when significant flood damage is expected in a leveed floodplain despite the levee performing successfully. One example of this scenario is consequences that result from a levee that is flanked before it is overtopped; another is consequences that result from overtopping without levee failure. Separate hydraulics data sets are required to complete the total risk modeling: failure scenario hydraulics and non-failure scenario hydraulics. Hydrologic and hydraulic summary relationships for the leveed impact area should reflect the non-failure scenario because the non-failure scenario hazard curve reflects the trigger condition. Each stage in the river has an annual exceedance probability and a probability of levee failure, and consequences can occur whether or not the levee fails, so the two scenarios are tied to the same non-failure stage-frequency relationship.
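The total probability combination at a single stage can be sketched as follows (a minimal illustration of the theorem, not HEC-FDA's implementation):

```python
def total_damage(p_failure, failure_damage, nonfailure_damage):
    """Total probability theorem at a single stage: weight the
    failure-scenario damage by the probability of failure taken from
    the system response curve, and the non-failure-scenario damage
    by its complement."""
    return p_failure * failure_damage + (1.0 - p_failure) * nonfailure_damage

# At a stage where the system response curve gives a 30% chance of
# failure: 0.3 * 500 + 0.7 * 50 = 185 in damage units.
total = total_damage(0.3, 500.0, 50.0)
```

Summing these weighted damages over the stage-frequency relationship yields the total risk described above.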
System Response Function
System response functions can now be defined with a probability of failure of less than 1 above the top of levee elevation so that users can model risk where safe overtopping is part of the design.
Methodological Changes
Changes to the Risk Engine
There is only one change to the risk engine: the method for quantifying uncertainty about analytical discharge frequency functions has been changed for consistency with USGS Bulletin 17C. Case studies suggest that the effect of the change on mean expected annual damage estimates is marginal. See the examples in the section under Mean EAD of Converted Studies.
Uncertainty About Analytical Discharge-Frequency Functions
Analytical discharge-frequency functions represent the relationship between annual maximum streamflow values and the values' probability of being exceeded. In other words, a discharge-frequency function is an inverse cumulative distribution function of annual maximum streamflow values. This relationship is often derived from the statistical properties (mean, standard deviation, and skew) of an annual maximum streamflow record. A streamflow record represents an incomplete sample and therefore a fuzzy picture of the range of possible flow values. The length of this record (a.k.a. period of record or sample size) is used to parameterize the uncertainty about the discharge-frequency function. Why? The smaller the sample, the fuzzier the picture, and the more uncertainty there is about the true range of possible flow values and therefore the true inverse cumulative distribution function.
The methods used to compute uncertainty about analytical discharge-frequency functions in HEC-FDA Version 1.4.3 are based on the guidelines for determining flood flow frequency described in USGS Bulletin 17B. The USGS Bulletin 17B method incorporates uncertainty about the mean and standard deviation of the distribution, but not skew. The guidelines for determining flood flow frequency have been since updated, and are documented in USGS Bulletin 17C. The USGS Bulletin 17C approach allows for uncertainty in skew in addition to the mean and standard deviation.
HEC-FDA Version 2.0 relies on a new method for calculating the uncertainty about an analytical discharge-frequency function that is consistent with the USGS Bulletin 17C approach. In HEC-FDA Version 2.0, a user specifies an analytical discharge-frequency function (the median function) as a Log Pearson Type III (LP3) distribution based on the mean, standard deviation, and skew of the original period of record data, as well as the record length. The implementation for calculating the uncertainty about that function then involves fitting a new discharge-frequency function to a bootstrapped sample of flow values drawn from the user-entered LP3 distribution. This procedure occurs once within each iteration (realization) of a compute, and has the effect of allowing the mean, standard deviation, and skew of the discharge-frequency distribution to shift from one iteration to the next. Because the shapes of the stage- and damage-frequency functions are driven in part by the shape of the discharge-frequency function, the uncertainty about stage- and damage-frequency functions is expected to increase when an analytical flow-frequency function is used, as is the uncertainty in the expected annual damage (EAD) and annual exceedance probability (AEP) results derived from those functions. Moreover, because damaging events tend to occur only along the right tail of the discharge-frequency function (during high flow conditions), the shape of the damage-frequency function is especially sensitive to the skew of the distribution, adding to the uncertainty about EAD. The added uncertainty from the application of the new approach has had a marginal effect on mean EAD in case studies. See the examples in the section under Mean EAD of Converted Studies.
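The bootstrap idea can be sketched conceptually. The following simplification assumes zero skew (so the log flows are normal) purely for brevity; HEC-FDA's actual implementation fits an LP3 distribution with uncertain skew:

```python
import random
import statistics

def bootstrap_frequency_moments(mean_log, std_log, record_length, rng):
    """One realization of discharge-frequency uncertainty: draw a
    bootstrap sample of record_length log-flows from the user-entered
    distribution, then refit the moments. Conceptual sketch with zero
    skew (log-normal), not the LP3 code in HEC-FDA."""
    sample = [rng.gauss(mean_log, std_log) for _ in range(record_length)]
    return statistics.mean(sample), statistics.stdev(sample)

rng = random.Random(1)
# A short record (N=20) produces widely scattered refit means, while a
# long record (N=200) produces tightly clustered ones - less record
# length means more uncertainty about the frequency curve.
short_means = [bootstrap_frequency_moments(3.0, 0.25, 20, rng)[0] for _ in range(500)]
long_means = [bootstrap_frequency_moments(3.0, 0.25, 200, rng)[0] for _ in range(500)]
spread_short = statistics.stdev(short_means)
spread_long = statistics.stdev(long_means)
assert spread_short > spread_long
```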
An example of the difference in flow uncertainty between versions is provided in the illustration below. Annual maximum flow is on the y axis and annual exceedance probability is on the x axis. The solid black line is the input LP3 function. The 5th, 25th, 75th, and 95th percentiles are plotted for Version 1.4.3 in orange and Version 2.0 in blue. Observe that the blue lines for Version 2.0 lie mostly outside the orange lines for Version 1.4.3, as expected.
Convergence will be impacted by the new implementation. Convergence depends on the distribution of EAD and on the distribution of stages collected for the calculation of assurance of the target stage. Because these distributions are expected to be more widely spread, HEC-FDA Version 2.0 will typically reach convergence more slowly than HEC-FDA Version 1.4.3, all else being equal.
Changes to the Consequences Engine
The way that HEC-FDA Version 2.0 computes consequences has been improved from HEC-FDA Version 1.4.3 in several ways. Together, these changes have been shown to have a moderate effect on mean EAD estimates; in most cases, the difference in mean EAD estimates attributable to the improvements to the consequences engine is less than 10%. Those improvements are described below. Different studies will be impacted differently. Examples of the resulting impact to results can be examined under Case Studies.
Empirical Damage Uncertainty
A stage-damage function consists of an array of stages and an array of damage distributions. In HEC-FDA Version 1.4.3, the resulting damage distributions were normal distributions: a sample of damages was collected for each stage, the mean and standard deviation of the sample were calculated, and a normal distribution was fit to the sample. See the HEC-FDA User's Manual Appendix E for more information on the stage-damage compute algorithm. In HEC-FDA Version 2.0, no distributional assumption is forced upon the resulting sample. Instead, the empirical distribution (a histogram) of damage is collected. The resulting sample is often skewed and wider/flatter than the normal distribution that Version 1.4.3 would force. A stage-damage function with uncertainty is then an array of stages and an array of empirical distributions - one empirical distribution of damage for each stage. During the EAD compute, the empirical distribution is sampled directly, resulting in a better answer that could be quite different from the answer provided through the assumption of normality. An example internally computed aggregated stage-damage function from the London Orleans Case Study data is included below. Observe the strong asymmetry displayed in the computed empirical functions - a result that was not possible to capture in HEC-FDA Version 1.4.3. This change could produce a marginal increase or marginal decrease in mean EAD, depending on the nature of the resulting asymmetry.
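The difference between sampling an empirical distribution and a fitted normal can be sketched as follows (an illustration with made-up skewed damage data, not HEC-FDA code):

```python
import random
import statistics

rng = random.Random(7)
# A made-up right-skewed sample of damages for a single stage.
damages = sorted(rng.lognormvariate(10.0, 0.8) for _ in range(10_000))

def empirical_inverse_cdf(sorted_sample, p):
    """Sample the stored empirical distribution directly, rather than
    fitting a normal distribution to the sample's mean and standard
    deviation as Version 1.4.3 did."""
    index = min(int(p * len(sorted_sample)), len(sorted_sample) - 1)
    return sorted_sample[index]

mean = statistics.mean(damages)
median = empirical_inverse_cdf(damages, 0.5)
# A right-skewed sample has mean greater than median; a fitted normal
# distribution forces mean == median and cannot represent the asymmetry.
assert mean > median
```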
Assumption Regarding Correlation of Economic Error
In HEC-FDA Version 1.4.3, the stage-damage algorithm assumed that the economic error for a given structure is independent of the economic error for all surrounding structures - an unrealistic assumption. While the truth lies somewhere between perfectly independent and perfectly correlated, an assumption of one or the other is required given the information currently available. We chose to implement the stage-damage Monte Carlo in a way that assumes perfect correlation of economic error for a given occupancy type in a given impact area. This means that if a given single-story residential structure without a basement experiences a relatively high percent damage for a given depth, so will all of the surrounding single-story residential structures without basements. Case studies suggest that this assumption has little impact on results.
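Perfect correlation can be sketched as drawing a single random number per impact area and occupancy type and reusing it for every structure in that group (a hypothetical illustration; the field names are invented):

```python
import random

def sample_percent_damage(structures, rng):
    """Sketch of perfectly correlated economic error: draw ONE
    non-exceedance probability per (impact area, occupancy type) group
    and reuse it for every structure in that group. Hypothetical
    illustration, not HEC-FDA code."""
    p_by_group = {}
    sampled = {}
    for s in structures:
        group = (s["impact_area"], s["occtype"])
        p = p_by_group.setdefault(group, rng.random())
        low, high = s["pct_damage_bounds"]
        sampled[s["fid"]] = low + p * (high - low)
    return sampled

structures = [
    {"fid": "1", "impact_area": 1, "occtype": "RES1", "pct_damage_bounds": (10.0, 30.0)},
    {"fid": "2", "impact_area": 1, "occtype": "RES1", "pct_damage_bounds": (10.0, 30.0)},
]
result = sample_percent_damage(structures, random.Random(0))
# Both structures share a group, so they draw the same probability and
# land at the same point within their uncertainty range.
assert result["1"] == result["2"]
```

Under the independence assumption that Version 1.4.3 made, each structure would instead draw its own probability, and the group's errors would partially cancel in aggregate.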
Aggregation Stages for Internally-Computed Stage-Damage Functions
In HEC-FDA Version 1.4.3, the aggregation stages were identified using stages drawn from the water surface profiles for the index location river station, in combination with the hydrologic and hydraulic summary relationships with uncertainty. HEC-FDA Version 1.4.3 then laid out "nice" stages that reflected the appropriate range of stages, typically at half-foot intervals, and calculated damage at the structures for those nice stages in the river. In HEC-FDA Version 2.0, the stage-damage algorithm uses the hydrologic and hydraulic summary relationships alone to define the aggregation stages, and damage is calculated at the structures for the stages in the stage-frequency function that results from the combination of the input hydrologic and hydraulic summary relationships. Removing water surface profiles from the equation does not impact results; it simply removes a degree of freedom, because the summary relationships and water surface profiles have always needed to reflect the exact same hydrologic and hydraulic conditions and assumptions. However, not using nice stages and instead using stages that match the resulting stage-frequency function is an improvement that can impact results marginally. The Glendive case study is a good example of a situation where the improvement matters for results.
Assumed Water Surface Elevation at Structures
HEC-FDA Version 2.0 does not calculate assumed water surface elevation at structures. Instead, HEC-FDA Version 2.0 uses RasMapper geospatial processing software libraries to read the modeled water surface elevation at structure locations within hydraulic modeling data sets. HEC-FDA Version 1.4.3 used interpolation of water surface elevations at river stations based upon river miles and that functionality is no longer supported.
Stage-Damage Convergence Criteria
The minimum and maximum numbers of iterations of a stage-damage compute have been increased from 100 and 500 to 500 and 5,000, respectively.
Barely Dry Adjustment
RasMapper returns a water surface elevation of -9999 at a location where a structure is dry. To help the stage-damage algorithm better interpolate stages at structures between the frequencies of the modeled events, when a structure is dry in one profile but the next profile (in decreasing exceedance probability) has a non-null water surface elevation, the dry structure's water surface elevation is set to 2 feet below the ground elevation. This is called the "barely dry" adjustment. Using this adjustment, HEC-FDA Version 2.0 can interpolate stages at structures between wet and dry events more accurately. This improvement to the consequences engine has been observed to impact results marginally. The Glendive case study is a good example of a situation where the improvement matters for results.
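The adjustment can be sketched as follows (a hypothetical illustration; the function and constant names are invented):

```python
DRY_FLAG = -9999.0
BARELY_DRY_OFFSET_FT = 2.0

def adjust_barely_dry(wse_by_profile, ground_elevation):
    """Sketch of the 'barely dry' adjustment: when a structure is dry
    (-9999) in one profile but wet in the next less-frequent profile,
    replace the dry flag with ground elevation minus 2 feet so that
    stage can be interpolated between events. Hypothetical
    illustration, not HEC-FDA code."""
    adjusted = list(wse_by_profile)
    for i, wse in enumerate(adjusted):
        is_dry = wse == DRY_FLAG
        next_is_wet = i + 1 < len(adjusted) and adjusted[i + 1] != DRY_FLAG
        if is_dry and next_is_wet:
            adjusted[i] = ground_elevation - BARELY_DRY_OFFSET_FT
    return adjusted

# Profiles ordered by decreasing exceedance probability; ground at 100 ft.
print(adjust_barely_dry([-9999.0, -9999.0, 101.5, 103.0], 100.0))
# -> [-9999.0, 98.0, 101.5, 103.0]
```

Only the profile immediately preceding the first wet event is adjusted; fully dry profiles farther out keep the dry flag.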
Validation for Illogical Depth Calculation
When modeling assumptions do not match between hydraulics and the structure inventory, it is possible to calculate a depth of flooding that is below the ground elevation at a structure - a negative depth relative to the ground elevation. While a negative depth relative to the first floor elevation can be a logical calculation, a negative depth relative to the ground elevation is not. The damage model assumes the depth of water provided is surface water with a positive depth above ground. Ground water is not a modeled concept, so depths below the ground elevation should not be used in the damage model, and HEC-FDA Version 2.0 rejects such depths, returning zero damage. HEC-FDA Version 1.4.3 did not reject such depths and included positive damage for structures with such depths in the aggregated stage-damage function. This added validation will have a marginal impact on results, and could be an important improvement for some situations with very influential structures where the modeling assumptions do not match between hydraulics and economics.
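The validation can be sketched as follows (a hypothetical illustration, not HEC-FDA's code):

```python
def depth_above_ground(wse, ground_elevation):
    """Version 2.0 validation sketch: a computed depth below the
    ground elevation is an illogical surface-water input, so it is
    rejected (None signals zero damage downstream)."""
    depth = wse - ground_elevation
    return depth if depth >= 0 else None

def structure_damage(wse, ground_elevation, value, pct_damage_fn):
    """Return damage in dollars, rejecting illogical negative depths
    relative to ground. pct_damage_fn maps depth to percent damage."""
    depth = depth_above_ground(wse, ground_elevation)
    if depth is None:
        return 0.0  # rejected: water surface below the ground elevation
    return value * pct_damage_fn(depth) / 100.0

# Flat 50% depth-percent damage curve, purely for illustration.
print(structure_damage(99.0, 100.0, 200_000, lambda d: 50))   # 0.0
print(structure_damage(101.0, 100.0, 200_000, lambda d: 50))  # 100000.0
```

Version 1.4.3 would have returned positive damage in the first case, which is the behavior this validation removes.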
Identification of Structure's Damage Category
HEC-FDA Version 2.0 identifies the damage category of a given structure based upon the damage category of the occupancy type assigned to the structure when the user imports the structure inventory. HEC-FDA Version 2.0 does not have an attribute for damage category in the structure inventory. This change was made so that damage category has a single source of truth: the occupancy type.
Enhanced Fatal Error Handling
In some situations, HEC-FDA Version 1.4.3 would accept user input with fatal errors and compute. For example, if the user input illogical occupancy type data in which the maximum is less than the minimum, the software would guess what the user meant, change the input, and compute. HEC-FDA Version 2.0 refuses to compute with data that contains these fatal errors.
Automated Testing
Significant effort was placed in confirming that HEC-FDA behaves as expected and will continue to behave the same way with every change to the codebase. At the time these release notes were written, there were 463 automated tests run against the codebase every time we make a change. In this section, we walk through three example automated tests. For the complete suite of automated computational tests run against the codebase, navigate to the following locations on GitHub.
Empirical Distribution Replicates Analytical Distributions
The HEC-FDA development team has done extensive testing to verify that an empirical distribution, which in this case is a histogram, works like it should. The full suite of automated tests that are run upon any change to the codebase can be found here. Below, we have included the text of an automated test of interest. In the test below, analytical distributions are created and used to fill histograms with a large sample. We then confirm that the quantiles and moments of the histograms match the quantiles and moments of the analytical distributions within a tolerance of 1%, using the same sample size as that which is produced in an HEC-FDA Version 2.0 aggregated stage-damage compute.
Empirical Distribution Replicates Analytical Distributions
[Fact]
public void RecreateDistributionsWithEnoughSamples()
{
    int sampleSize = 5000;
    double binWidth = .01;

    // Triangular distribution
    double min = 90;
    double max = 110;
    double mode = 100;
    Triangular triangularDistribution = new Triangular(min, mode, max);
    List<double> samples = new List<double>();
    Random random = new Random();
    for (int i = 0; i < sampleSize; i++)
    {
        double sample = triangularDistribution.InverseCDF(random.NextDouble());
        samples.Add(sample);
    }
    DynamicHistogram Trianglehistogram = new DynamicHistogram(min, binWidth, new ConvergenceCriteria());
    Trianglehistogram.AddObservationsToHistogram(samples.ToArray());

    // Normal distribution
    Normal normalDistribution = new Normal(500, 4);
    List<double> normalSamples = new List<double>();
    Random normalRandom = new Random();
    for (int i = 0; i < sampleSize; i++)
    {
        double sample = normalDistribution.InverseCDF(normalRandom.NextDouble());
        normalSamples.Add(sample);
    }
    DynamicHistogram normalHistogram = new DynamicHistogram(min, binWidth, new ConvergenceCriteria());
    normalHistogram.AddObservationsToHistogram(normalSamples.ToArray());

    // Uniform distribution
    Uniform uniformDistribution = new Uniform(100, 130);
    List<double> uniformSamples = new List<double>();
    Random uniformRandom = new Random();
    for (int i = 0; i < sampleSize; i++)
    {
        double sample = uniformDistribution.InverseCDF(uniformRandom.NextDouble());
        uniformSamples.Add(sample);
    }
    DynamicHistogram uniformHistogram = new DynamicHistogram(min, binWidth, new ConvergenceCriteria());
    uniformHistogram.AddObservationsToHistogram(uniformSamples.ToArray());

    // Test
    double[] probabilities = new double[] { .025, 0.25, 0.5, 0.75, .975 };
    double tolerance = 0.01; // Define a tolerance for comparison
    foreach (double probability in probabilities)
    {
        double triangleoriginalValue = triangularDistribution.InverseCDF(probability);
        double trianglehistogramValue = Trianglehistogram.InverseCDF(probability);
        double error = Math.Abs((triangleoriginalValue - trianglehistogramValue) / triangleoriginalValue);
        Assert.True(error < tolerance);

        double uniformoriginalValue = uniformDistribution.InverseCDF(probability);
        double uniformhistogramValue = uniformHistogram.InverseCDF(probability);
        error = Math.Abs((uniformoriginalValue - uniformhistogramValue) / uniformoriginalValue);
        Assert.True(error < tolerance);

        double normaloriginalValue = normalDistribution.InverseCDF(probability);
        double normalhistogramValue = normalHistogram.InverseCDF(probability);
        error = Math.Abs((normaloriginalValue - normalhistogramValue) / normaloriginalValue);
        Assert.True(error < tolerance);
    }

    // Triangle moments - here we assume median == mode because this is a symmetric distribution.
    double errorCentral = Math.Abs(triangularDistribution.MostLikely - Trianglehistogram.InverseCDF(0.5)) / triangularDistribution.MostLikely;
    Assert.True(errorCentral < tolerance);

    // Normal moments
    errorCentral = Math.Abs(normalDistribution.Mean - normalHistogram.Mean) / normalDistribution.Mean;
    double errorStd = Math.Abs(normalDistribution.StandardDeviation - normalHistogram.StandardDeviation) / normalDistribution.StandardDeviation;
    Assert.True(errorCentral < tolerance);
    Assert.True(errorStd < tolerance);
}
Damage Calculated Correctly at Structures
The HEC-FDA development team has also done extensive testing to confirm that damage is being calculated correctly. In the test below, several structures are created and matched with occupancy type data created within the test class but outside the test block. This test confirms that the sum of structure and content damage for each structure is calculated correctly based on the water surface elevations provided.
Damage Calculated Correctly at Structures
[Fact]
public void SELA_StructureDamage_Should()
{
    // 1STY-PIER occupancy type
    double[] structureDepths = new double[] { -1.1, -1, -0.5, 0, 0.5, 1, 1.5, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
    double[] structurePercentDamageMin = new double[] { 0, 1.5, 1.5, 7.5, 18.8, 41.5, 41.6, 44.7, 44.8, 44.9, 46.3, 46.4, 46.5, 46.6, 68.3, 68.4, 77.6, 77.7, 77.8, 77.9, 78 };
    double[] structurePercentDamageMostLikely = new double[] { 0, 4, 5.4, 20.5, 40.5, 41.5, 45.1, 52.3, 53.1, 57.1, 66.7, 66.8, 66.9, 67, 74.3, 74.4, 84.4, 84.5, 84.6, 84.7, 84.8 };
    double[] structurePercentDamageMax = new double[] { 0, 9.5, 9.5, 33.5, 63.3, 64.8, 65, 69.9, 70, 71.2, 80.5, 80.6, 80.7, 80.8, 81.1, 99.5, 99.6, 99.7, 99.8, 99.9, 100 };
    UncertainPairedData structureDepthPercentDamage = CreateTriangularUncertainPairedData(structureDepths, structurePercentDamageMin, structurePercentDamageMostLikely, structurePercentDamageMax);

    double[] contentDepths = new double[] { 0, 0.5, 1, 1.5, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
    double[] contentPercentDamageMin = new double[] { 0, 18.7, 30.1, 37.4, 45.6, 59.1, 70.5, 76.4, 77.7, 77.8, 80.4, 80.5, 80.6, 80.7, 80.8, 80.9, 81, 81.1 };
    double[] contentPercentDamageMostLikely = new double[] { 0, 28.1, 41.8, 49.3, 62.9, 82.1, 84.6, 91.2, 91.3, 91.4, 91.5, 91.6, 91.7, 91.8, 91.9, 92, 92.1, 92.2 };
    double[] contentPercentDamageMax = new double[] { 0, 28.1, 41.8, 49.3, 62.9, 82.1, 84.6, 91.2, 91.3, 91.4, 91.5, 91.6, 91.7, 91.8, 91.9, 92, 92.1, 92.2 };
    UncertainPairedData contentDepthPercentDamage = CreateTriangularUncertainPairedData(contentDepths, contentPercentDamageMin, contentPercentDamageMostLikely, contentPercentDamageMax);

    FirstFloorElevationUncertainty firstFloorElevationUncertainty = new FirstFloorElevationUncertainty(IDistributionEnum.Normal, 0.59);
    ValueUncertainty structureValueUncertainty = new ValueUncertainty(IDistributionEnum.Triangular, 69, 116);
    ValueRatioWithUncertainty csvr = new ValueRatioWithUncertainty(IDistributionEnum.Normal, 25.53, 69);
    OccupancyType oneStryPier = OccupancyType.Builder()
        .WithName(occupancyTypeName)
        .WithDamageCategory(damageCategory)
        .WithStructureDepthPercentDamage(structureDepthPercentDamage)
        .WithContentDepthPercentDamage(contentDepthPercentDamage)
        .WithFirstFloorElevationUncertainty(firstFloorElevationUncertainty)
        .WithStructureValueUncertainty(structureValueUncertainty)
        .WithContentToStructureValueRatio(csvr)
        .Build();

    // Structures 232549 and 233375
    Structure structure232549 = new Structure(fid: "232549", point: pointM, firstFloorElevation: -2.35625, val_struct: 74.946944, st_damcat: damageCategory, occtype: occupancyTypeName, impactAreaID: impactAreaID);
    Structure structure233375 = new Structure(fid: "233375", point: pointM, firstFloorElevation: -3.3375, val_struct: 88.204817, st_damcat: damageCategory, occtype: occupancyTypeName, impactAreaID: impactAreaID);

    // 0.002 AEP stages
    float wse233375 = -4.17f;
    float wse232549 = -1.27f;
    (double, double, double, double) consequenceResult233375 = structure233375.ComputeDamage(wse233375, oneStryPier.Sample(new MedianRandomProvider(), true));
    (double, double, double, double) consequenceResult232549 = structure232549.ComputeDamage(wse232549, oneStryPier.Sample(new MedianRandomProvider(), true));

    // Percent damage externally interpolated from the depth-percent damage function
    double expectedStructureDamage232549 = 0.4213 * structure232549.InventoriedStructureValue;
    double expectedStructureDamage233375 = 0.04476 * structure233375.InventoriedStructureValue;
    Assert.Equal(expectedStructureDamage232549, consequenceResult232549.Item1, .01);
    Assert.Equal(expectedStructureDamage233375, consequenceResult233375.Item1, .01);
}
Integrate Calculates Correctly for Various Cases
Integration of a graphical damage-frequency function is a critical functionality of the HEC-FDA software. Our automated test runs several cases and confirms that the software produces a result that is within 3% of that which we've calculated externally. This particular unit test references a publicly-accessible spreadsheet in which our test case expected values are calculated. Many of our unit tests are backed up by such publicly accessible spreadsheets.
Integrate Calculates Correctly for Various Cases
[Theory]
[InlineData(new double[] { 0, .5, 1 }, new double[] { 0, 1000, 11000 }, 3250)]
[InlineData(new double[] { 0, .5 }, new double[] { 0, 1000 }, 750)]
// See the spreadsheet at the following location for the replication of the two test cases below:
// https://www.hec.usace.army.mil/confluence/download/attachments/35030931/integration.xlsx?api=v2
[InlineData(new double[] { .01, .05, .5, .95, .99 }, new double[] { 0, 0, 192.04, 391.38, 544.37 }, 193.19)]
[InlineData(new double[] { .01, .05, .5, .95, .99 }, new double[] { 68, 323.75, 586.93, 676.6, 676.6 }, 524.40)]
public void Integrate(double[] probs, double[] vals, double expected)
{
    // Integrate should extrapolate the last value out to probability = 1 if the probability space is not defined to 1.
    PairedData paired = new PairedData(probs, vals);
    double actual = paired.integrate();
    double relativeError = Math.Abs(actual - expected) / expected;
    double relativeTolerance = 0.03;
    Assert.True(relativeError < relativeTolerance);
}
Version Comparison Case Studies
Case Studies on Risk Engine Consistency
In the table below, we have organized mean expected annual damage for all alternatives in the Bear Creek Mix HEC-FDA test study as calculated in HEC-FDA Version 1.4.3 and as calculated when converted to HEC-FDA Version 2.0. For this suite of case studies, the consequences were calculated in HEC-FDA Version 1.4.3 and imported into HEC-FDA Version 2.0 as part of the study conversion process. These case studies demonstrate that the HEC-FDA Version 2.0 risk engine works nearly identically to the HEC-FDA Version 1.4.3 risk engine. No difference in expected annual damage is greater than 6%, and two of the three cases that differ by the maximum of 6% involve the use of the new analytical flow-frequency uncertainty procedures in HEC-FDA Version 2.0 that are consistent with the guidelines in USGS Bulletin 17C. The Bear Creek Mix tests as a whole demonstrate that the following variables are being incorporated in the risk compute correctly:
- Analytical flow-frequency
- Graphical flow-frequency
- Graphical stage-frequency
- Flow regulation with normal uncertainty
- Flow regulation with triangular uncertainty
- Stage-discharge with normal uncertainty
- Stage-discharge with triangular uncertainty
- Levees
- Levees with exterior-interior functions
- Levees with system response curve
- Levees with system response curve and exterior-interior function
Plan | Frequency | Flow Frequency Type | Stage Discharge | Stage Discharge Type | Levee | 1.4.3 EAD | 2.0 EAD | Percent Change
---|---|---|---|---|---|---|---|---
Without | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Yr | Normal Stages | | 415.73 | 405.42 | -2%
1 | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Yr | Normal Stages | New Levee | 197.61 | 190.93 | -3%
2 | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Yr | Normal Stages | Levee Geo | 237.97 | 224.56 | -6%
3 | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Yr | Normal Stages | Levee Int/Ext | 90.83 | 87.072 | -4%
4 | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Yr | Normal Stages | Levee GeoIntExt | 129.54 | 127.26 | -2%
5 | SF9 WO B Tf1G | Graphical Flow + Normal Regulation | SF-9 WO Base Yr | Normal Stages | | 546.26 | 543.73 | 0%
6 | SF9 WO Base TfA | Analytical + Triangular Regulation | SF-9 WO Base Yr | Normal Stages | | 523.26 | 498.19 | -5%
7 | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Yr | Normal Stages | Levee GeoA | 237.94 | 224.00 | -6%
8 | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Yr | Normal Stages | Levee GeoAIntExt | 129.54 | 122.31 | -6%
9 | SF9 WO Base A | Analytical | SF-9 WO Base Yr | Normal Stages | Levee GeoAIntExt | 131.06 | 128.11 | -2%
10 | SF9 WO Stage | Graphical Stage | | | | 623.76 | 618.56 | -1%
11 | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Tr | Triangular Stages | New Levee | 218.68 | 210.21 | -4%
12 | 12As | Analytical | SF-9 WO Base Yr | Normal Stages | | 553.88 | 541.44 | -2%
13 | 13Am | Analytical | SF-9 WO Base Yr | Normal Stages | | 547.56 | 561.30 | 3%
14 | SF-9 14 QF | Graphical Flow | SF-9 WO Base Yr | Normal Stages | | 600.28 | 580.09 | -3%
15 | SF-9 15SF | Graphical Stage | | | | 622.9 | 618.56 | -1%
16 | SF-9 16 QF | Graphical Flow | SF-9 WO Base Yr | Normal Stages | | 588.12 | 580.09 | -1%
17 | SF-9 17 SF | Graphical Stage | | | | 594.45 | 608.16 | 2%
18 | 13Am | Analytical | SF-9 WO Base Yr | Normal Stages | New Levee | 196.52 | 207.19 | 5%
19 | 13Am | Analytical | SF-9 WO Base Yr | Normal Stages | Levee Geo | 237.26 | 249.27 | 5%
20 | SF9 WO Base Year No Uncertainty | Graphical Flow | SF-9 WO Base Yr | Normal Stages | | 564.42 | 580.09 | 3%
23 | SF9 WO Base Year | Graphical Flow | Wo Sqnorm | Normal Stages | | 439.31 | 429.32 | -2%
24 | SF9 WO Base Year | Graphical Flow | Wo SQtriOrd | Triangular Stages | | 460.59 | 449.72 | -2%
25 | SF9 WO Base Year | Graphical Flow | SF-9 WO Base Tri G | Triangular Stages | | 460.57 | 449.71 | -2%
The data used in the above table can be downloaded and inspected here: Bear Creek Mix EAD Comparison.7z.
Case Study on Correlation of Economic Error
The stage-damage algorithm in HEC-FDA Version 2.0 assumes perfect correlation of economic error for a given occupancy type in a given impact area. The HEC-FDA development team has studied this change in assumption and confirmed that mean risk estimates are insensitive to it. Among the New Orleans test study data is an HEC-FDA Version 2.0 study with 1 impact area and another HEC-FDA Version 2.0 study with 76 impact areas. The single impact area reflects the extreme assumption that economic error is perfectly correlated across the entire study area; the 76 impact areas reflect the opposite extreme of 76 independent impact areas. The results of this experiment are organized in the below table. Mean EAD is little changed between the two extremes.
Quantity of Impact Areas | Mean EAD |
---|---|
1 | $41,742.95 |
76 | $41,776.65 |
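The insensitivity of mean EAD to the correlation assumption follows from the linearity of expectation: the mean of a sum of per-impact-area damages equals the sum of the per-area means regardless of how the errors are correlated; only the spread of the total changes. A minimal Monte Carlo sketch (hypothetical uniform damage errors, not HEC-FDA's actual sampling) illustrates this:

```python
import random

def mean_total_damage(n_areas, correlated, n_iter=20000, seed=0):
    """Mean of total damage across n_areas impact areas, each with
    damage drawn from Uniform(50, 150) as a stand-in for economic error.
    correlated=True reuses one draw per iteration (perfect correlation);
    correlated=False draws independently for every area."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_iter):
        if correlated:
            u = rng.random()
            total += n_areas * (50 + 100 * u)
        else:
            total += sum(50 + 100 * rng.random() for _ in range(n_areas))
    return total / n_iter

# Both assumptions converge to the same mean (~100 per area);
# only the variance of the total differs between them.
print(round(mean_total_damage(10, correlated=True)))
print(round(mean_total_damage(10, correlated=False)))
```

Both calls converge on roughly 1,000, mirroring the near-identical mean EAD values in the table above even as the variance of the total differs sharply between the two correlation assumptions.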
Case Studies on Enhanced Consequences Methodology
A series of models was built to compare mean expected annual damage for the without-project condition of several existing study data sets using Version 1.4.3 and Version 2.0, to assess the effect of the modernized consequences methodology on mean risk estimates. The below data sets have been modified and simplified from the original versions of the study data sets for the purpose of this experiment. The modeling behind the below version comparisons is available to USACE; please reach out to the HEC-FDA team for the location of the data.
The results organized in the below table suggest that the overall impact of the set of improvements to the consequences methodology is marginal. For 7 of the 9 case studies on the difference in the consequences methodology, the mean study-level risk estimate produced by HEC-FDA Version 2.0 is within 10% of the mean study-level risk estimate produced by HEC-FDA Version 1.4.3. Several of the case studies include damage category differences between versions of 10% to 20% that often run in opposite directions and cancel each other out. In only one example at the damage category level is the percent difference greater than 20%: Glendive residential damage, which is largely explained by the improved identification of aggregation stages, supported by the barely dry adjustment.
Study Name | Damage Category | Version 1.4.3 Mean EAD | Version 2.0 Mean EAD | Percent Difference of 2.0 Relative to 1.4.3
---|---|---|---|---
Glendive | RES | $ 172.31 | $ 233.28 | 35%
 | COM | $ 862.65 | $ 1,084.76 | 26%
 | Total | $ 1,034.96 | $ 1,318.05 | 27%
Greenbrook | COM | $ 16,548.30 | $ 17,405.12 | 5%
 | IND | $ 12,019.71 | $ 12,591.54 | 5%
 | UTL | $ 11.71 | $ 13.63 | 16%
 | MUN | $ 4,062.37 | $ 4,351.50 | 7%
 | APT | $ 28,587.21 | $ 28,302.71 | -1%
 | RES | $ 191,188.95 | $ 189,751.39 | -1%
 | TOTAL | $ 252,418.25 | $ 252,415.90 | 0%
Muncie | COM | $ 87.66 | $ 83.70 | -5%
 | IND | $ 50.77 | $ 46.95 | -8%
 | PUB | $ 717.59 | $ 661.27 | -8%
 | RES | $ 362.25 | $ 381.87 | 5%
 | TOTAL | $ 1,218.27 | $ 1,173.80 | -4%
London Orleans | COM | $ 9,329.13 | $ 9,746.14 | 4%
 | IND | $ 2,312.76 | $ 2,394.89 | 4%
 | PUBL | $ 1,273.24 | $ 1,303.40 | 2%
 | RES | $ 34,649.39 | $ 35,041.10 | 1%
 | TOTAL | $ 47,564.52 | $ 48,485.52 | 2%
North DeSoto | COM | $ 2,430.96 | $ 2,884.51 | 19%
 | Ag | $ 1.29 | $ 1.44 | 12%
 | IND | $ 126.05 | $ 155.50 | 23%
 | PUB | $ 54.85 | $ 65.30 | 19%
 | RES | $ 1,102.17 | $ 1,287.41 | 17%
 | TOTAL | $ 3,715.32 | $ 4,394.16 | 18%
River Des Peres | COM | $ 2,734.46 | $ 2,593.88 | -5%
 | IND | $ 182.12 | $ 181.51 | 0%
 | PUB | $ 402.31 | $ 438.97 | 9%
 | RES | $ 3,877.67 | $ 4,013.10 | 3%
 | TOTAL | $ 7,196.56 | $ 7,227.46 | 0%
Tafuna | COM | $ 756.43 | $ 724.68 | -4%
 | IND | $ 248.78 | $ 259.79 | 4%
 | PUB | $ 1,540.51 | $ 1,409.22 | -9%
 | RES | $ 10,157.57 | $ 9,935.31 | -2%
 | TOTAL | $ 12,332.24 | $ 12,329.01 | 0%
Watertown | COM | $ 1,951.82 | $ 2,263.48 | 16%
 | IND | $ 741.55 | $ 874.71 | 18%
 | PUB | $ 94.65 | $ 108.61 | 15%
 | RES | $ 2,123.80 | $ 2,479.82 | 17%
 | TOTAL | $ 4,911.82 | $ 5,726.62 | 17%
West Sacramento | COM | $ 1,223.10 | $ 1,119.41 | -8%
 | IND | $ 516.56 | $ 475.72 | -8%
 | PUB | $ 27.42 | $ 26.60 | -3%
 | RES | $ 326.30 | $ 282.02 | -14%
 | TOTAL | $ 2,093.40 | $ 1,903.75 | -9%
Stage-Damage Function Comparison
When the enhancements to the stage-damage algorithm matter for expected annual damage, the result is typically visible in the resulting stage-damage functions. Below are the aggregated stage-damage functions with uncertainty produced by the two software versions, 1.4.3 and 2.0, for three different studies: Glendive, Greenbrook, and Tafuna. Stage of flooding is on the x-axis and aggregated residential structure damage in dollars is on the y-axis. The 5th percentile, mean, and 95th percentile of the damage distributions produced by each version are plotted, with tails represented by crosses and means represented by points. For Tafuna, the median has been plotted using triangle symbols to illustrate the resulting asymmetry in Version 2.0. Version 2.0 results are plotted in blue and Version 1.4.3 results in yellow.
Glendive Residential Structure Damage
Uncertainty in Glendive residential structure damage is greater in HEC-FDA Version 2.0, but is relatively symmetric. The means of the resulting distributions typically match, except at several compute points for stages between 14.5 feet and 16.5 feet, where the difference in damage between versions is significant. For these stages, HEC-FDA Version 2.0 calculates damage while HEC-FDA Version 1.4.3 does not. The difference is the result of the Version 2.0 improvement to the identification of aggregation stages, supported by the barely dry adjustment. This feature of the Glendive 2.0 function, in which there is, appropriately, positive damage for several frequent stages in 2.0 but not in 1.4.3, explains why expected annual damage is ultimately 35% higher in 2.0.
Greenbrook Residential Structure Damage
Uncertainty in Greenbrook residential structure damage is greater in HEC-FDA Version 2.0, but is relatively symmetric. The means of the resulting distributions typically match, except at several compute points for stages between 103 feet and 104 feet, where the difference in damage between versions is very small. The symmetry of the uncertainty and the very close match between the means explain why Greenbrook residential expected annual damage is only 1% lower in 2.0.
Tafuna Public Structure Damage
The mean Tafuna public structure damage function rests lower in 2.0 than in 1.4.3, and uncertainty in damage is both greater and asymmetric in HEC-FDA Version 2.0. The lower central tendency and right-skewed uncertainty together explain the 8% lower EAD in 2.0 for public structure damage in this impact area, a result consistent with the 9% lower EAD in 2.0 for public structure damage across the study area.
The difference in the central tendency is a result of the added validation for illogical depth calculations. This impact area contains public structures with basement depth-percent damage functions, no beginning damage depth specified, and low foundation heights (largely between 0 and 2 feet). As a result, the stages interpolated between frequencies for the aggregated stage-damage compute produce damage at these structures for some illogical depths in 1.4.3, while 2.0 does not compute damage for these depths. There is enough variation in the frequency at which public structures begin to be damaged in this impact area for this difference to matter across a large range of stages.
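A minimal sketch of the kind of validation described above, assuming (as an illustration only, not the actual HEC-FDA rule) that depths shallower than the lowest depth defined in the depth-percent damage function are treated as illogical and return zero damage rather than extrapolated damage:

```python
def percent_damage(depth, curve_depths, curve_percents):
    """Linearly interpolate percent damage from a depth-percent damage
    function defined over [curve_depths[0], curve_depths[-1]].
    Depths below the defined curve are treated as illogical: zero damage
    is returned instead of an extrapolated value."""
    if depth < curve_depths[0]:
        return 0.0  # 2.0-style validation: no damage at illogical depths
    if depth >= curve_depths[-1]:
        return curve_percents[-1]
    for i in range(len(curve_depths) - 1):
        if curve_depths[i] <= depth <= curve_depths[i + 1]:
            frac = (depth - curve_depths[i]) / (curve_depths[i + 1] - curve_depths[i])
            return curve_percents[i] + frac * (curve_percents[i + 1] - curve_percents[i])

# Hypothetical basement function: damage defined from 8 ft below the
# first floor up to 4 ft above it.
curve_depths = [-8.0, 0.0, 4.0]
curve_percents = [5.0, 20.0, 60.0]

print(percent_damage(-9.0, curve_depths, curve_percents))  # → 0.0 (illogical depth)
print(percent_damage(2.0, curve_depths, curve_percents))   # → 40.0
```

Under this sketch, a 1.4.3-style compute that extrapolates below the curve would report positive damage at the -9 ft depth, while the validated compute reports none, which is the direction of the difference described above.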
The effect of the skew can be understood by observing the location of the median damage function in 2.0 relative to the mean damage function. The median damage function has been added to the below plot with triangle symbols. The median is less than the mean because this function is skewed right, and the mean is more sensitive to the right tail than the median. When a Normal distribution is forced, probability mass shifts upward so that 50% of the mass lies below the mean and 50% above it, biasing results upward.
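The median-versus-mean relationship described above can be reproduced with any right-skewed sample. The sketch below uses lognormal draws as a stand-in for the 2.0 damage distribution at one stage (an illustration, not HEC-FDA's actual sampling): the sample median falls below the sample mean, while a symmetric Normal forced onto that mean would place its median, and half its probability mass, at the mean, shifting mass upward:

```python
import random
import statistics

rng = random.Random(1)
# Right-skewed stand-in for the 2.0 damage distribution at one stage.
samples = [rng.lognormvariate(0, 1) for _ in range(50000)]

mean = statistics.fmean(samples)
median = statistics.median(samples)

# Right skew: the median sits below the mean.
print(median < mean)  # → True

# A forced Normal is symmetric about its mean, so exactly half of its
# mass lies above the mean; the skewed sample has well under half of
# its mass above that same value.
share_above_mean = sum(s > mean for s in samples) / len(samples)
print(share_above_mean < 0.5)  # → True
```

For this lognormal stand-in roughly 31% of the skewed mass lies above the mean, so replacing it with a Normal centered on the same mean pushes mass upward, which is the upward bias described above.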
Known Bugs and Bug Reporting
There are known bugs and inconveniences in the HEC-FDA Version 2.0 software. For example, the price index under study properties does not work; users must update structure and content values manually in the inventory before uploading to HEC-FDA. Also, the study projection button in the hydraulics editor window is not functional, so the projection must be accessed from the study properties editor under the file menu. An example inconvenience is the extended time required for loading data: the software takes longer than desirable to load stage-damage functions after a compute or to open the editor for a given scenario. Occupancy type data also cannot be exported. Navigate to our Issues page on GitHub to inspect known bugs or needed enhancements. Submit troubleshooting questions to our Discourse page. If the team determines that a bug has been identified, we may ask you to report the bug on GitHub.