Earthquake forecast models that generate synthetic catalogs down to small magnitudes can be tested rigorously in many ways. One can look for systematic differences that distinguish stacked features of observed and synthetic catalogs (“seismological Turing tests”), or evaluate the likelihood that the model would produce various aspects of the observations (as many Collaboratory for the Study of Earthquake Predictability (CSEP) tests do). Many of the more controversial aspects of fault-based earthquake rupture forecasts, however, such as large-earthquake rates and their variability, remain largely untestable on short time scales. This limitation hampers our ability to validate the very elements of earthquake forecast models that have the greatest societal importance.
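To make the catalog-based consistency tests above concrete, the following is a minimal sketch of a CSEP-style "number test": the observed count of events above some magnitude is located within the distribution of counts across a model's synthetic catalogs. This is an illustrative sketch, not any specific CSEP implementation, and all numbers in the example are made up.

    import numpy as np

    def number_test(synthetic_counts, observed_count):
        """Return the empirical quantile of the observed event count
        within the distribution of counts from synthetic catalogs.
        Quantiles near 0 or 1 flag a model that over- or under-predicts
        seismicity rates in the test window."""
        synthetic_counts = np.asarray(synthetic_counts)
        return np.mean(synthetic_counts <= observed_count)

    # Toy example: 10,000 hypothetical synthetic catalogs whose M >= 5
    # counts are Poisson with mean 8, versus 14 observed events.
    rng = np.random.default_rng(seed=0)
    synthetic_counts = rng.poisson(lam=8.0, size=10_000)
    q = number_test(synthetic_counts, observed_count=14)
    print(f"observed count falls at the {q:.3f} quantile of the model")

The same pattern extends to other stacked catalog features (interevent times, spatial clustering statistics, aftershock productivity): compute the statistic on each synthetic catalog, then ask where the observed value falls in that distribution.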
There are several paths forward from this testing dilemma. One can, when possible, obtain more data by enlarging the testing region, effectively trading space for time. The hypotheses underlying a model can also be tested directly. When testing the model itself is not possible, one can choose the simplest model consistent with observations, or sample over modeling uncertainties, including over entirely different models. I will discuss these testing paths in the context of fault-based seismic hazard models in California and the Western U.S.
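Sampling over modeling uncertainties is commonly done with a weighted logic tree over alternative models. The sketch below illustrates the idea; the model names, rates, and weights are hypothetical placeholders, not values from any published hazard model.

    import numpy as np

    # Hypothetical alternative models for the annual M >= 7 rate in a
    # region, each a (mean annual rate, logic-tree weight) pair.
    models = {"fault_based": (0.012, 0.5),
              "smoothed_seismicity": (0.020, 0.3),
              "hybrid": (0.016, 0.2)}

    rng = np.random.default_rng(seed=1)
    names = list(models)
    weights = np.array([models[n][1] for n in names])

    # Epistemic uncertainty: draw a model per realization according to
    # its weight. Aleatory uncertainty: draw that model's 50-year count.
    draws = rng.choice(len(names), size=10_000, p=weights)
    rates = np.array([models[names[i]][0] for i in draws])
    counts_50yr = rng.poisson(lam=rates * 50.0)
    print(f"P(>= 1 M>=7 event in 50 yr) ~ {np.mean(counts_50yr >= 1):.2f}")

Separating the epistemic draw (which model) from the aleatory draw (what that model produces) keeps the two kinds of uncertainty distinct in the resulting hazard distribution.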
Session: Testing, Testing 1 2 3: Appropriate Evaluation of New Seismic Hazard and Risk Models - I
Type: Oral
Date: 4/15/2025
Presentation Time: 08:30 AM (local time)
Presenting Author: Morgan Page
Student Presenter: No
Invited Presentation: Yes
Authors: Morgan Page (Presenting Author, Corresponding Author), pagem@caltech.edu, U.S. Geological Survey
Title: At the Testing Frontier
Category: Testing, Testing 1 2 3: Appropriate Evaluation of New Seismic Hazard and Risk Models