Recently, the media reported an ecclesiastical forecast of the end of the earth. Dubbed the “rapture,” the predicted event did not occur, and most of us are thankful the forecast was inaccurate. But the same source, likely using the same information updated with one piece of hard fact—the non-event—now forecasts the rapture for Oct. 21, 2011. This typifies the great uncertainty in forecasting extreme events.
The past decade has seen a number of lesser but quite extreme events, including Hurricane Katrina, earthquakes in Haiti, Chile and New Zealand, and the tsunami in Japan. Common to all are extraordinary natural forces, inadequate preparations, great losses and shocked surprise.
Why are we surprised by powerful disasters? Events of similar intensity to the disasters cited have occurred before. In 1993, a tsunami hit the Japanese island of Hokkaido, just north of the island of Honshu, whose east coast was devastated by a tsunami in March. The 1993 tsunami generated wave heights comparable to the 2011 event.
A recent ENR article by Senior Editor Tom Sawyer included a graphic based on geologic data that showed evidence of multiple large prehistoric tsunamis on the Oregon coast. On the U.S. East Coast, geologic data indicates a large tsunami washed over what is now New York City 2,300 years ago.
Of the three components of risk—the hazard, the reliability of systems to mitigate the hazard and the consequences of the event—hazard is the most difficult to quantify, especially with rare events. As ENR reported, scientists in Japan did not think a magnitude-9 earthquake would occur off Honshu's coast, thus the tsunami mitigation measures in place were designed for a lesser event. The consequences for the most tsunami-prepared country were devastating.
Hazards often involve processes operating on geologic scales of time and space. But we tend to think about them on the human scale. Too often we use relatively recent data records and extrapolate to geologic time scales. To simplify analysis, we lean on the assumption that past events are representative of future events. We assume the most severe events recorded—or even worse, the most severe events we remember—represent the worst nature has to offer. It should be no surprise that this approach has not served us well. Uncertainty in natural processes and our failure to estimate uncertainty have often led to surprise and dire consequences.
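The weakness of short records can be quantified. As a back-of-the-envelope illustration (the Gumbel model of annual maxima and all numbers here are assumptions for the sketch, not from the column), the following simulation asks how often a 30-year record even contains an event at the true 100-year level:

```python
import math
import random

random.seed(42)

def gumbel_sample(mu=0.0, beta=1.0):
    # Inverse-CDF sampling from a Gumbel distribution, a standard
    # (assumed, illustrative) model for annual maximum water levels.
    u = random.random()
    return mu - beta * math.log(-math.log(u))

# True 100-year level: the value exceeded with 1% probability in any year,
# i.e. the 0.99 quantile of the Gumbel CDF exp(-exp(-x)).
hundred_year_level = -math.log(-math.log(0.99))

trials = 20_000
hits = 0
for _ in range(trials):
    # Largest event observed in a hypothetical 30-year record.
    record_max = max(gumbel_sample() for _ in range(30))
    if record_max >= hundred_year_level:
        hits += 1

# Analytically, P = 1 - 0.99**30, roughly 26%.
print(f"Records containing a 100-year event: {hits / trials:.0%}")
```

Roughly three records in four never see the 100-year event at all, so a design anchored to “the worst we have observed” will usually be anchored too low.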
While risk and uncertainty are considered with respect to operational safety in the design of critical infrastructure—such as large dams, powerplants and industrial facilities—they are only now emerging as critical factors in the designs for natural hazard mitigation.
The new Hurricane Storm Damage Risk Reduction System in New Orleans is one of the first large-scale examples of applying calculated uncertainty to risk-mitigation design. Instead of using the statistically most likely water levels for an event that has a 1% chance of occurring in any given year, or a 100-year return period, the design criteria added water heights based on the quantified uncertainty—a more conservative design.
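The 100-year label invites complacency, because over a structure's service life the cumulative odds are far from rare. A short, hedged calculation (the 50-year design life is an assumption for illustration, not a figure from the column) makes the point:

```python
# Illustrative arithmetic, not the project's actual criteria.
annual_prob = 0.01   # 1% annual exceedance probability = 100-year return period
design_life = 50     # hypothetical 50-year service life

# Chance of at least one exceedance over the design life:
# one minus the chance of zero exceedances in every single year.
prob_at_least_one = 1 - (1 - annual_prob) ** design_life
print(f"Chance of at least one 100-year event in "
      f"{design_life} years: {prob_at_least_one:.1%}")  # about 39.5%
```

A nearly two-in-five chance over a structure's lifetime is a very different message than “once in a hundred years,” which is one reason adding quantified uncertainty to the design water levels is the more defensible posture.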
A second initiative to factor uncertainty into design is now using comprehensive risk assessment to determine where armoring would increase the resilience of the levees and their ability to withstand overtopping. Levee erosion after overtopping was the primary cause of catastrophic breaching during Katrina. Resilient structures could have halved flooding and losses.
We need to acknowledge the uncertainty in natural hazard events and routinely estimate and apply uncertainty factors to the design and implementation of risk-reduction measures. The New Orleans example is a crude but welcome move. Does this cost more? Initially, yes. Over time, formally factoring in risk can bring enormous savings, not only monetarily but also in reducing major interruptions to society and the economy. There is rapture in that.
Dr. Lewis E. “Ed” Link is a research professor in the department of civil and environmental engineering at the University of Maryland. Contact him at email@example.com.