Nearly everyone who deals in scientific information learns to read simple charts and graphs to help visualize the data. As a reporter, I’m often looking for the right graph to bring greater meaning to a story. In a similar way, some people have been experimenting with rendering data into sound, and some of the more musically inclined folks have been creating songs with notes and musical scales.
As with graphs, one must understand the conceptual framework before the meaning becomes clear. On the other hand, anyone can simply enjoy the music — or at least be amused that the notes themselves are somehow transformed from observations of the real world.
The first video on this page, titled “Bloom,” contains a “song” derived from microorganisms found in the English Channel. The melody depicts the relative abundance of eight different types of organisms found in the water as conditions change over time. Peter Larsen, a biologist at the U.S. Department of Energy’s Argonne National Laboratory in Illinois, explains how he created the composition to Steve Curwood, host of the radio program “Living on Earth.”
Larsen, who calls this endeavor “microbial bebop” for its jazzy style, created four compositions using the same dataset in different ways. All four can be found attached to a news release written by Jared Sagoff at Argonne National Laboratory. For more detail on the project itself, see Larsen’s report published in the scientific journal PLOS ONE.
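Larsen’s exact method lives in his PLOS ONE paper, but the core idea — turning a time series of relative abundances into pitches on a musical scale — can be sketched in a few lines of Python. The scale and abundance values below are hypothetical, chosen only to illustrate the mapping:

```python
# Map a hypothetical time series of microbial relative abundance
# (values between 0 and 1) onto a scale: higher abundance -> higher pitch.
SCALE = ["C4", "D4", "E4", "G4", "A4", "C5", "D5", "E5"]

def abundance_to_notes(abundances):
    """Convert each abundance value to the nearest note on the scale."""
    notes = []
    for a in abundances:
        idx = min(int(a * len(SCALE)), len(SCALE) - 1)
        notes.append(SCALE[idx])
    return notes

# Illustrative example: abundance rising through a bloom, then crashing.
melody = abundance_to_notes([0.05, 0.2, 0.45, 0.8, 0.95, 0.3])
print(melody)
```

With eight organism types, one could run eight such melodic lines in parallel, one per organism, which is roughly the texture the “Bloom” piece evokes.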
A more classical style of music was created by Nik Sawe (pronounced saw-vay), who was a graduate student at California’s Stanford University at the time. Sawe used data collected from the yellow cedar forests of Alaska’s Alexander Archipelago by another Stanford doctoral candidate, Lauren Oakes, who was studying climate change. (Her research report was published by Ecosphere.) Click on the red arrow below to listen.
Each musical note represents a single tree, and a dead tree is marked by a dropped note. The species of tree determines which instrument plays the note; yellow cedars are voiced by a piano. Pitch conveys the size of the tree, and loudness conveys its age, as described in an article by Brian Kahn of Climate Central.
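Sawe’s actual scoring is more elaborate, but the per-tree mapping described above — species picks the instrument, size sets the pitch, age sets the loudness, and dead trees drop out — can be sketched as follows. The field names, diameter and age ranges, and MIDI values here are hypothetical:

```python
# Hypothetical sketch of the per-tree mapping: one note per tree.
# Species -> instrument; diameter -> pitch; age -> loudness; dead -> silence.
INSTRUMENT = {"yellow_cedar": "piano", "western_hemlock": "flute"}

def tree_to_note(species, diameter_cm, age_years, alive):
    """Return (instrument, MIDI pitch, MIDI velocity), or None if dead."""
    if not alive:
        return None  # a dead tree is rendered as a dropped (silent) note
    # Clamp diameter to an assumed 10-150 cm range, map to pitches 48-84.
    d = min(max(diameter_cm, 10), 150)
    pitch = 48 + round((d - 10) / 140 * 36)
    # Clamp age to an assumed 0-700 year range, map to velocities 40-127.
    velocity = 40 + round(min(age_years, 700) / 700 * 87)
    return (INSTRUMENT[species], pitch, velocity)

print(tree_to_note("yellow_cedar", 80, 350, alive=True))
print(tree_to_note("western_hemlock", 20, 100, alive=False))
```

Playing the trees in geographic order — north to south, as in Sawe’s piece — turns a survey plot into a timeline the ear can follow.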
The song describes how rising temperatures and declining snowpack are killing off yellow cedars, trees that have held cultural significance for at least 9,000 years. The song begins with trees in the north, near Glacier Bay National Park and Preserve, and progresses south into areas harder hit by climate change, ending near Slocum Arm, as described in a story by Brad Rassler in Outside magazine. Piano notes dominate the early part of the composition, but they grow sparser toward the end, where the flute (western hemlock) begins to dominate.
“Throughout the piece, Sawe wanted to highlight the relationship between the native yellow cedar and invasive western hemlock,” Rassler writes. “He braided the sounds of the two species, both to amplify their voices and to highlight the fall of one and the rise of the other.
“Just as the keyboard and strings in Mozart’s ‘Sonata for Piano and Violin in E minor’ play off of one another to create a musicality greater than the sum of their parts, this musical death dance between the two becomes, in its own way, the sound of climate change.”
The following piece offers another example of transforming data into sound, drawing on the same dataset:
Sawe is a pioneer in environmental neuroeconomics, the study of how the environment influences people’s spending decisions. Such decisions can involve donating to environmental causes, including efforts to reduce the impacts of climate change. Sawe’s studies suggest that humans tend to protect and restore the environment when they are confronted with stimuli that elicit either good feelings or moral outrage. For more on his work, I recommend his TEDx Talk, shown in the second video on this page.
Since environmental decisions are largely based on emotion, Sawe is exploring how logic can affect feelings and vice versa. As part of his work, he is trying to figure out how sonification — turning data into sound — can bridge the divide between the right and left sides of the brain.
My final example of sonification involves data produced by the solar wind and turned into a sophisticated musical composition by a formally trained musician, Robert Alexander of the University of Michigan.
Jason Gilbert, a research fellow in the Department of Atmospheric, Oceanic and Space Sciences at UM, obtained the raw data from a satellite called the Advanced Composition Explorer.
“In this sonification, we can actually hear in the data when the temperature goes up or when the density increases,” said Gilbert, quoted in a UM news release.
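The UM team’s pipeline isn’t described in detail here, but the principle Gilbert mentions — letting a physical parameter such as density or temperature drive an audible parameter such as pitch — reduces to a simple mapping. The density and frequency ranges below are illustrative assumptions, not values from the project:

```python
def density_to_frequency(density, d_min=1.0, d_max=50.0,
                         f_min=110.0, f_max=880.0):
    """Linearly map solar-wind proton density (particles per cm^3)
    to an audible frequency in Hz; all ranges are illustrative."""
    d = min(max(density, d_min), d_max)  # clamp to the assumed range
    return f_min + (d - d_min) / (d_max - d_min) * (f_max - f_min)

print(density_to_frequency(1.0))   # quiet solar wind -> low pitch
print(density_to_frequency(50.0))  # dense solar wind -> high pitch
```

When the satellite records a density spike, the pitch jumps — which is exactly why a listener can “hear in the data when the density increases.”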
Since this blog post is about sound, I’m glad that I can share this audio about the solar sonification project, as discussed by “Living on Earth.”
For those of us who enjoy this music and want to connect it to the data, the greatest challenge is understanding what a specific combination of notes depicts. That burden falls to the scientist or musician assembling the sonification. And just as graphs must be carefully labeled, the listener must be given a proper road map.
“It’s just like if you were to open up the Astrophysical Journal to any random page, show it to someone on the street, and ask if they could learn anything from a random visual diagram,” Alexander said in an article in Earthzine magazine. “If they don’t understand what’s being represented, if they don’t understand what the colors mean, if they don’t understand the axes, they can’t extract any of the information presented there.”
It is one thing for the music to explain something to others, but hearing the data also can open new doors of insight.
“Ninety-nine percent of the time it’s easy enough to explain what you’re hearing,” Alexander added, “but that small fraction of the time where you hear something and it hasn’t been documented before, that’s really exciting.”