Sonic and Visual Representations of Seismic Data, Coupled to Machine Listening and Pattern Discovery
Description:
Since about 2005, participants in the Seismic Sound Lab have been exploring the use of sonification and synchronized data animations ("data-movies") to convey a wide range of aspects of earthquake physics and other natural signals beyond our normal perceptual reach, from laboratory acoustic emissions to decades of earthquakes. We also explore multi-modal data sonifications: data representing different, coupled expressions of a complex process, such as seismicity and ground deformation at Kilauea (Karlstrom et al., "Earth is Noisy. Why should its data be silent?", EOS, 2024), or in geysers and geothermal reservoirs. Multi-modal data sonification requires a range of methods spanning from "direct sonification" to "tonal representation," with aesthetic choices playing an increasing role from the former to the latter. Tonal representation is required for non-oscillatory signals and draws on methods ranging from simple beeps to granular synthesis. Our sonification work also has a close and growing connection to unsupervised machine learning methods aimed at pattern discovery, loosely called machine listening. Our seismic catalog sonifications and listening experiments led to a collaboration with John Paisley, a data scientist who had previously worked on feature extraction methods for speech and audio analysis. He developed what we now call SpecUFEx to detect subtle variations in the spectral-temporal content of microseismicity and ambient noise, first applied to a catalog from The Geysers (Holtzman et al., Science Advances, 2018). In this talk, we will show data-movies from that paper that illustrate the patterns detectable by human listening, and then those that emerge with machine listening. We will also discuss current work on detecting multi-scale patterns in acoustic emissions in laboratory experiments, the similarities between speech/audio and seismic data as they pertain to feature extraction methods, and some perspectives on the benefits of iterating between human and machine listening.
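To make the distinction between these methods concrete, here is a minimal sketch of direct sonification (not the Lab's actual pipeline; the trace is a synthetic stand-in). Because a seismogram is already an oscillatory signal, simply reinterpreting its samples at an audio playback rate compresses time and lifts the frequencies into the audible range:

```python
import numpy as np
from scipy.io import wavfile

fs_seismic = 100                 # original sampling rate of the trace (Hz)
fs_audio = 44100                 # audio playback rate (Hz)
# Synthetic stand-in for one hour of seismic data; replace with a real trace.
trace = np.random.randn(fs_seismic * 3600)

# Direct sonification: write the raw samples at the audio rate.
# Playback compresses time by fs_audio/fs_seismic (441x here), so a
# 0.1 Hz oscillation in the ground becomes an audible 44.1 Hz tone.
audio = trace / np.max(np.abs(trace))   # normalize to avoid clipping
wavfile.write("sonified.wav", fs_audio, audio.astype(np.float32))
```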
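Tonal representation, by contrast, has no waveform to play back, so data values are mapped onto synthesis parameters. A hedged sketch, using a hypothetical ground-deformation series and an arbitrary two-octave pitch mapping chosen purely for illustration:

```python
import numpy as np
from scipy.io import wavfile

fs = 44100
# Hypothetical non-oscillatory series, e.g. a year of daily deformation (mm).
deformation = np.cumsum(np.random.randn(365))

# Map each data point to the pitch of a short tone: the data range is
# scaled onto two octaves above 220 Hz (A3). This mapping is an aesthetic
# choice, which is exactly why tonal methods demand more such choices.
lo, hi = deformation.min(), deformation.max()
freqs = 220.0 * 2.0 ** (2.0 * (deformation - lo) / (hi - lo))

tone_len = int(0.05 * fs)                 # 50 ms tone per data point
t = np.arange(tone_len) / fs
env = np.hanning(tone_len)                # fade in/out to avoid clicks
audio = np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])
wavfile.write("tonal.wav", fs, audio.astype(np.float32))
```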
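On the machine-listening side, the published SpecUFEx method reduces each event's spectrogram to a compact fingerprint for unsupervised pattern discovery. The sketch below is only a simplified stand-in for that idea (plain scikit-learn NMF on per-event mean log spectra of synthetic events), not the SpecUFEx pipeline itself:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

fs = 100
# Synthetic stand-in for a catalog of 50 twenty-second event waveforms.
events = [np.random.randn(fs * 20) for _ in range(50)]

# One non-negative spectral feature vector per event.
features = []
for x in events:
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=256)
    features.append(np.log1p(Sxx).mean(axis=1))   # mean log spectrum

# Factorize, then cluster the low-dimensional fingerprints: events in the
# same cluster share spectral character a listener might also group by ear.
W = NMF(n_components=5, max_iter=500).fit_transform(np.array(features))
labels = KMeans(n_clusters=3, n_init=10).fit_predict(W)
```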
Session: Visualization and Sonification in Solid Earth Geosciences, What’s Next? - I
Type: Oral
Date: 4/17/2025
Presentation Time: 11:00 AM (local time)
Presenting Author: Benjamin Holtzman
Student Presenter: No
Invited Presentation:
Poster Number:
Authors
Benjamin Holtzman (Presenting Author, Corresponding Author), benh@mit.edu, Massachusetts Institute of Technology
Anna Barth, anna@straboengineering.com, Strabo Engineering Inc.
Eric Beaucé, ebeauce@ldeo.columbia.edu, Columbia University
Category: Visualization and Sonification in Solid Earth Geosciences, What’s Next?