priss
cell type-specific suppression of mechanosensitive genes by audible sound stimulation
masahiro kumeta et al. 2018
doi.org/10.1371/journal.pone.0188764
feature-selective encoding of substrate vibrations in the forelimb somatosensory cortex
mario prsa et al. 2019
doi.org/10.1038/s41586-019-1015-8
Using two-photon microscopy, Daniel Huber's team visualized the activity of hundreds of neurons in a mouse's somatosensory cortex as vibrations of different frequencies were delivered to its forepaw. Like in the auditory cortex, individual neurons were selectively tuned: they strongly responded to some frequencies and less so to others. "It turns out that these neurons are preferentially tuned to a specific combination of frequency and amplitude, and that this combination corresponds to what the mouse actually perceives. In other words, a mouse is unable to distinguish a high-frequency vibration with a low amplitude from a low-frequency vibration with a higher amplitude," explains Mario Prsa, a researcher in Dr. Huber's team and the study's first author. "It is the same psychoacoustic effect detected in the auditory system, where the perceived pitch of a sound changes with both frequency and loudness." Thus, despite the fact that sounds -- which travel through the air -- and vibrations -- which are transmitted through solid matter -- are processed by different sensory channels, they are both perceived and encoded similarly in the brain.
Everything goes through Pacinian corpuscles
In a second step, the researchers sought to identify the origin of the somatosensory stimuli involved by performing a detailed histological analysis of Pacinian corpuscles in the mouse forelimb. Pacinian corpuscles are known to transduce high-frequency vibrations in mammals and are densely distributed in the dermis of primate fingertips. "Surprisingly, we found that the vibration responses in the mouse brain stem from Pacinian corpuscles located on the forearm bones, whereas they were totally absent from the skin of the paw," explains Géraldine Cuenu, a student in the UNIGE master's programme in neurosciences, who took charge of this detailed analysis. Using optogenetics, the scientists confirmed the link between cortical responses and the particular configuration of mechanoreceptors in the forelimbs.
An ancestor of the hearing system?
Could it be that the particular distribution of vibration-sensitive mechanoreceptors along the bones of the forelimb acts as a seismograph to "listen" to vibrations? Vibratory stimuli are indeed used by a number of living organisms to communicate through plants, branches and other solid substrates. "Our discoveries probably reveal the existence of an ancient sensory channel, which could be an evolutionary precursor of hearing," concludes Mario Prsa. This somewhat vestigial, yet highly sensitive modality might also explain how we are able to identify subtle clues linked to upcoming natural disasters, or why construction or traffic can be a nuisance even when it is inaudible.
abstract The spectral content of skin vibrations, produced by either displacing the finger across a surface texture [1] or passively sensing external movements through the solid substrate [2,3], provides fundamental information about our environment. Low-frequency flutter (below 50 Hz) applied locally to the primate fingertip evokes cyclically entrained spiking in neurons of the primary somatosensory cortex (S1), and thus spike rates in these neurons increase linearly with frequency [4,5]. However, the same local vibrations at high frequencies (over 100 Hz) cannot be discriminated on the basis of differences in discharge rates of S1 neurons [4,6], because spiking is only partially entrained at these frequencies [6]. Here we investigated whether high-frequency substrate vibrations applied broadly to the mouse forelimb rely on a different cortical coding scheme. We found that forelimb S1 neurons encode vibration frequency similarly to sound pitch representation in the auditory cortex [7,8]: their spike rates are selectively tuned to a preferred value of a low-level stimulus feature without any temporal entrainment. This feature, identified as the product of frequency and a power function of amplitude, was also found to be perceptually relevant as it predicted behaviour in a frequency discrimination task. Using histology, peripheral deafferentation and optogenetic receptor tagging, we show that these selective responses are inherited from deep Pacinian corpuscles located adjacent to bones, most densely around the ulna and radius and only sparsely along phalanges. This mechanoreceptor arrangement and the tuned cortical rate code suggest that the mouse forelimb constitutes a sensory channel best adapted for passive ‘listening’ to substrate vibrations, rather than for active texture exploration.
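A minimal way to write down the tuned feature described in the abstract above (a sketch only; the exponent is a constant fitted in the paper and not reproduced here):

```latex
% Sketch of the low-level feature to which S1 spike rates are tuned:
% the product of frequency and a power function of amplitude.
% The exponent \alpha is a fitted constant whose value is not given here.
\[
  \phi(f, A) = f \, A^{\alpha}, \qquad \alpha > 0 .
\]
% Perceptual equivalence then means two stimuli are indistinguishable when
% f_1 A_1^{\alpha} = f_2 A_2^{\alpha}: a low-amplitude, high-frequency vibration
% can match a higher-amplitude, low-frequency one, as described in the press text.
```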
a longitudinal study of the process of acquiring absolute pitch: a practical report of training with the ‘chord identification method’
ayako sakakibara 2014
doi.org/10.1177/0305735612463948
anki deck for pitch training (adults)
single notes, one octave
ankiweb
polyphonic overtone singing
anna-maria hefele et al. 2019
youtube
ultra-open acoustic metamaterial silencer based on fano-like interference
reza ghaffarivardavagh et al. 2019
doi.org/10.1103/physrevb.99.024302
Ghaffarivardavagh and Zhang let mathematics -- a shared passion that has buoyed both of their engineering careers and made them well-suited research partners -- guide them toward a workable design for what the acoustic metamaterial would look like.
They calculated the dimensions and specifications that the metamaterial would need to have in order to interfere with the transmitted sound waves, preventing sound -- but not air -- from being radiated through the open structure. The basic premise is that the metamaterial needs to be shaped in such a way that it sends incoming sounds back to where they came from, they say.
As a test case, they decided to create a structure that could silence sound from a loudspeaker. Based on their calculations, they modeled the physical dimensions that would most effectively silence noises. Bringing those models to life, they used 3D printing to materialize an open, noise-canceling structure made of plastic.
Trying it out in the lab, the researchers sealed the loudspeaker into one end of a PVC pipe. On the other end, the tailor-made acoustic metamaterial was fastened into the opening. With the hit of the play button, the experimental loudspeaker set-up came oh-so-quietly to life in the lab. Standing in the room, based on your sense of hearing alone, you'd never know that the loudspeaker was blasting an irritatingly high-pitched note. If, however, you peered into the PVC pipe, you would see the loudspeaker's subwoofers thrumming away.
The metamaterial, ringing around the internal perimeter of the pipe's mouth, worked like a mute button incarnate until the moment when Ghaffarivardavagh reached down and pulled it free. The lab suddenly echoed with the screeching of the loudspeaker's tune.
"The moment we first placed and removed the silencer...was literally night and day," says Jacob Nikolajczyk, who in addition to being a study co author and former undergraduate researcher in Zhang's lab is a passionate vocal performer. "We had been seeing these sorts of results in our computer modeling for months -- but it is one thing to see modeled sound pressure levels on a computer, and another to hear its impact yourself."
By comparing sound levels with and without the metamaterial fastened in place, the team found that they could silence nearly all -- 94 percent to be exact -- of the noise, making the sounds emanating from the loudspeaker imperceptible to the human ear.
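For scale, a quick back-of-the-envelope check (assuming the 94 percent figure refers to transmitted acoustic energy, as stated in the abstract below): removing 94% of the energy corresponds to roughly a 12 dB drop in transmitted level.

```python
import math

# Back-of-the-envelope check (assumption: the quoted 94% is a reduction in
# transmitted acoustic energy, as the paper's abstract states).
energy_reduction = 0.94                     # fraction of transmitted energy removed
transmitted_fraction = 1.0 - energy_reduction

# Attenuation in decibels of the transmitted energy.
attenuation_db = -10.0 * math.log10(transmitted_fraction)
print(f"{attenuation_db:.1f} dB reduction")  # ~12.2 dB
```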
Now that their prototype has proved so effective, the researchers have some big ideas about how their acoustic-silencing metamaterial could go to work making the real world quieter.
"Drones are a very hot topic," Zhang says. Companies like Amazon are interested in using drones to deliver goods, she says, and "people are complaining about the potential noise."
"The culprit is the upward-moving fan motion," Ghaffarivardavagh says. "If we can put sound-silencing open structures beneath the drone fans, we can cancel out the sound radiating toward the ground."
Closer to home -- or the office -- fans and HVAC systems could benefit from acoustic metamaterials that render them silent yet still enable hot or cold air to be circulated unencumbered throughout a building.
Ghaffarivardavagh and Zhang also point to the unsightliness of the sound barriers used today to reduce noise pollution from traffic and see room for an aesthetic upgrade. "Our structure is super lightweight, open, and beautiful. Each piece could be used as a tile or brick to scale up and build a sound-canceling, permeable wall," they say.
The shape of acoustic-silencing metamaterials, based on their method, is also completely customizable, Ghaffarivardavagh says. The outer part doesn't need to be a round ring shape in order to function.
"We can design the outer shape as a cube or hexagon, anything really," he says. "When we want to create a wall, we will go to a hexagonal shape" that can fit together like an open-air honeycomb structure.
Such walls could help contain many types of noises. Even those from the intense vibrations of an MRI machine, Zhang says.
According to Stephan Anderson, a professor of radiology at BU School of Medicine and a coauthor of the study, the acoustic metamaterial could potentially be scaled "to fit inside the central bore of an MRI machine," shielding patients from the sound during the imaging process.
Zhang says the possibilities are endless, since the noise mitigation method can be customized to suit nearly any environment: "The idea is that we can now mathematically design an object that can block the sounds of anything," she says.
abstract Recently, with advances in acoustic metamaterial science, the possibility of sound attenuation using subwavelength structures, while maintaining permeability to air, has been demonstrated. However, the ongoing challenge addressed herein is the fact that among such air-permeable structures to date, the open area represents only a small fraction of the overall area of the material. In the present paper, in order to address this challenge, we first demonstrate that a transversely placed bilayer medium with large degrees of contrast in the layers' acoustic properties exhibits an asymmetric transmission, similar to the Fano-like interference phenomenon. Next, we utilize this design methodology and propose a deep-subwavelength acoustic metasurface unit cell comprising nearly 60% open area for air passage, while serving as a high-performance selective sound silencer. Finally, the proposed unit-cell performance is validated experimentally, demonstrating a reduction in the transmitted acoustic energy of up to 94%. This ultra-open metamaterial design, leveraging a Fano-like interference, enables high-performance sound silencing in a design featuring a large degree of open area, which may find utility in applications in which highly efficient, air-permeable sound silencers are required, such as smart sound barriers, fan or engine noise reduction, among others.
population rate-coding predicts correctly that human sound localization depends on sound intensity
antje ihlefeld et al. 2019
doi.org/10.7554/elife.47027
Unlike other sensory perceptions, such as feeling where raindrops hit the skin or being able to distinguish high notes from low on the piano, the direction of sounds must be computed; the brain estimates them by processing the difference in arrival time across the two ears, the so-called interaural time difference (ITD). A longstanding consensus among biomedical engineers is that humans localize sounds with a scheme akin to a spatial map or compass, with neurons aligned from left to right that fire individually when activated by a sound coming from a given angle -- say, at 30 degrees leftward from the center of the head.
But in research published this month in the journal eLife, Antje Ihlefeld, director of NJIT's Neural Engineering for Speech and Hearing Laboratory, is proposing a different model based on a more dynamic neural code. The discovery offers new hope, she says, that engineers may one day devise hearing aids, now notoriously poor in restoring sound direction, to correct this deficit.
"If there is a static map in the brain that degrades and can't be fixed, that presents a daunting hurdle. It means people likely can't "relearn" to localize sounds well. But if this perceptual capability is based on a dynamic neural code, it gives us more hope of retraining peoples' brains," Ihlefeld notes. "We would program hearing aids and cochlear implants not just to compensate for an individual's hearing loss, but also based upon how well that person could adapt to using cues from their devices. This is particularly important for situations with background sound, where no hearing device can currently restore the ability to single out the target sound. We know that providing cues to restore sound direction would really help."
What led her to this conclusion is a journey of scholarly detective work that began with a conversation with Robert Shapley, an eminent neurophysiologist at NYU who remarked on a peculiarity of human binocular depth perception -- the ability to determine how far away a visual object is -- that also depends on a computation comparing input received by both eyes. Shapley noted that these distance estimates are systematically less accurate for low-contrast stimuli (images that are more difficult to distinguish from their surroundings) than for high-contrast ones.
Ihlefeld and Shapley wondered if the same neural principle applied to sound localization: whether it is less accurate for softer sounds than for louder ones. But this would depart from the prevailing spatial map theory, known as the Jeffress model, which holds that sounds of all volumes are processed -- and therefore perceived -- the same way. Physiologists, who propose that mammals rely on a more dynamic neural model, have long disagreed with it. They hold that mammalian neurons tend to fire at different rates depending on directional signals and that the brain then compares these rates across sets of neurons to dynamically build up a map of the sound environment.
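As a hedged illustration of why the two schemes diverge at low volume, here is a toy simulation (not the authors' model; the tuning curves, gain, and spontaneous baseline are invented for illustration). In a labelled-line scheme the decoded ITD is the preferred ITD of the most active channel, so scaling every rate by a level-dependent gain leaves the estimate unchanged; in a hemispheric-difference scheme the decoded ITD depends on the normalized difference between two rate channels, so a lower gain pulls the estimate toward midline.

```python
import numpy as np

# Toy illustration (not the authors' model): tuning curves, gain, and the
# spontaneous baseline are invented, purely to show the qualitative
# difference between the two coding schemes at soft vs. loud levels.

TRUE_ITD = 300e-6      # source leads the right ear by 300 microseconds
TAU = 150e-6           # invented tuning width (seconds)
BASELINE = 0.2         # level-independent spontaneous rate (invented)
GRID = np.linspace(-500e-6, 500e-6, 2001)   # candidate ITDs for decoding

def gain(level_db):
    """Invented level-dependent response gain."""
    return level_db / 60.0

def hemisphere_rates(itd, level_db):
    """Two broadly tuned channels, one per hemisphere (hemispheric-difference model)."""
    drive_r = 1.0 / (1.0 + np.exp(-itd / TAU))
    drive_l = 1.0 / (1.0 + np.exp(+itd / TAU))
    return BASELINE + gain(level_db) * drive_r, BASELINE + gain(level_db) * drive_l

def hemispheric_estimate(level_db, calib_db=60.0):
    """Decode from the normalized rate difference, using a decoder calibrated at one loud reference level."""
    r, l = hemisphere_rates(TRUE_ITD, level_db)
    observed = (r - l) / (r + l)
    rc, lc = hemisphere_rates(GRID, calib_db)
    calibration = (rc - lc) / (rc + lc)
    return GRID[np.argmin(np.abs(calibration - observed))]

def labelled_line_estimate(level_db):
    """Array of narrowly tuned channels (Jeffress-like); decode via the best-responding channel.
    Scaling every rate by the same gain never moves the argmax, so the estimate is level invariant."""
    rates = BASELINE + gain(level_db) * np.exp(-0.5 * ((GRID - TRUE_ITD) / 100e-6) ** 2)
    return GRID[np.argmax(rates)]

for level in (60, 30):   # loud vs. soft, arbitrary "dB" scale
    print(f"{level} dB  labelled-line: {labelled_line_estimate(level) * 1e6:5.0f} us"
          f"   hemispheric: {hemispheric_estimate(level) * 1e6:5.0f} us")
# The labelled-line estimate stays near 300 us at both levels, while the
# hemispheric-difference estimate shifts toward 0 us (midline) at the softer level.
```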
"The challenge in proving or disproving these theories is that we can't look directly at the neural code for these perceptions because the relevant neurons are located in the human brainstem, so we cannot obtain high-resolution images of them," she says. "But we had a hunch that the two models would give different sound location predictions at a very low volume."
They searched the literature for evidence and found only two papers that had recorded from neural tissue at such low sound levels. One study was in barn owls -- a species thought to rely on the Jeffress model, based on high-resolution recordings in the birds' brain tissue -- and the other was in a mammal, the rhesus macaque, an animal thought to use dynamic rate coding. They then carefully reconstructed the firing properties of the neurons recorded in these older studies and used the reconstructions to estimate sound direction as a function of both ITD and volume.
"We expected that for the barn owl data, it really should not matter how loud a source is -- the predicted sound direction should be really accurate no matter the sound volume -- and we were able to confirm that. However, what we found for the monkey data is that predicted sound direction depended on both ITD and volume," she said. "We then searched the human literature for studies on perceived sound direction as a function of ITD, which was also thought not to depend on volume, but surprisingly found no evidence to back up this long-held belief."
She and her graduate student, Nima Alamatsaz, then enlisted volunteers on the NJIT campus to test their hypothesis, using sounds to examine how volume affects where people think a sound is coming from.
"We built an extremely quiet, sound-shielded room with specialized calibrated equipment that allowed us to present sounds with high precision to our volunteers and record where they perceived the sound to originate. And sure enough, people misidentified the softer sounds," notes Alamatsaz.
"To date, we are unable to describe sound localization computations in the brain precisely," adds Ihlefeld. "However, the current results are inconsistent with the notion that the human brain relies on a Jeffress-like computation. Instead, we seem to rely on a slightly less accurate mechanism.
More broadly, the researchers say, their studies point to direct parallels in hearing and visual perception that have been overlooked before now and that suggest that rate-based coding is a basic underlying operation when computing spatial dimensions from two sensory inputs.
"Because our work discovers unifying principles across the two senses, we anticipate that interested audiences will include cognitive scientists, physiologists and computational modeling experts in both hearing and vision," Ihlefeld says. "It is fascinating to compare how the brain uses the information reaching our eyes and ears to make sense of the world around us and to discover that two seemingly unconnected perceptions -- vision and hearing -- may in fact be quite similar after all."
abstract Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike-rate and predicts that perceived direction becomes medially biased at low sound levels. Here, behavioral experiments find that softer sounds are perceived closer to midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.
musical instruments as sensors
heran c. bhakta et al. 2018
doi.org/10.1021/acsomega.8b01673
a self-consistent sonification method to translate amino acid sequences into musical compositions and application in protein design using artificial intelligence
chi-hua yu et al. 2019
doi.org/10.1021/acsnano.9b02180
a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.
Although it's not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein's sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.
The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein's long sequence of amino acids then becomes a sequence of notes.
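As a rough sketch of that mapping idea (not the published mapping: in the paper each amino acid's tone is derived from its quantum-chemically computed vibrational spectrum, whereas the note assignments below are invented placeholders), a sequence-to-score conversion might look like this:

```python
# Minimal sketch of the sequence-to-score idea, NOT the published mapping:
# the real system derives each amino acid's tone from its normal-mode
# vibrational spectrum transposed into the audible range; here the 20 tones
# are simply assigned to consecutive MIDI note numbers as placeholders.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"    # the 20 standard amino acids
BASE_MIDI_NOTE = 48                      # placeholder starting pitch (C3)

# One placeholder tone per amino acid: a 20-note "scale".
AA_TO_NOTE = {aa: BASE_MIDI_NOTE + i for i, aa in enumerate(AMINO_ACIDS)}

def sequence_to_notes(sequence):
    """Map a protein sequence (one-letter codes) to a list of MIDI note numbers."""
    return [AA_TO_NOTE[aa] for aa in sequence.upper() if aa in AA_TO_NOTE]

# Example: a short, hypothetical sequence fragment.
print(sequence_to_notes("MKTAYIAKQR"))
```

In the published system, note volume and duration additionally encode each residue's secondary structure; the sketch above covers only the pitch mapping.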
While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. "That's a beta sheet," he might say, or "that's an alpha helix."
Learning the language of proteins
The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. "They have their own language, and we don't know how it works," he says. "We don't know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don't know the code."
By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions -- pitch, volume, and duration -- Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.
The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins -- for example of one found in spider silk, one of nature's strongest materials -- thus making new proteins unlike any produced by evolution.
Although the researchers themselves may not know the underlying rules, "the AI has learned the language of how proteins are designed," and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are "trillions and trillions" of potential combinations, he says, when it comes to creating new proteins "you wouldn't be able to do it from scratch, but that's what the AI can do."
"Composing" new proteins
Using such a system, he says, training the AI system with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. "No other method comes close," he says. "The shortcoming is the model doesn't tell us what's really going on inside. We just know it works."
This way of encoding structure into music does reflect a deeper reality. "When you look at a molecule in a textbook, it's static," Buehler says. "But it's not static at all. It's moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter."
The method does not yet allow for any kind of directed modifications -- any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. "You still need to do the experiment," he says. When a new protein variant is produced, "there's no way to predict what it will do."
The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. "There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform," Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.
The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.
abstract We report a self-consistent method to translate amino acid sequences into audible sound, use the representation in the musical space to train a neural network, and then apply it to generate protein designs using artificial intelligence (AI). The sonification method proposed here uses the normal mode vibrations of the amino acid building blocks of proteins to compute an audible representation of each of the 20 natural amino acids, which is fully defined by the overlay of its respective natural vibrations. The vibrational frequencies are transposed to the audible spectrum following the musical concept of transpositional equivalence, playing or writing music in a way that makes it sound higher or lower in pitch while retaining the relationships between tones or chords played. This transposition method ensures that the relative values of the vibrational frequencies within each amino acid and among different amino acids are retained. The characteristic frequency spectrum and sound associated with each of the amino acids represents a type of musical scale that consists of 20 tones, the “amino acid scale”. To create a playable instrument, each tone associated with the amino acids is assigned to a specific key on a piano roll, which allows us to map the sequence of amino acids in proteins into a musical score. To reflect higher-order structural details of proteins, the volume and duration of the notes associated with each amino acid are defined by the secondary structure of proteins, computed using DSSP and thereby introducing musical rhythm. We then train a recurrent neural network based on a large set of musical scores generated by this sonification method and use AI to generate musical compositions, capturing the innate relationships between amino acid sequence and protein structure. We then translate the de novo musical data generated by AI into protein sequences, thereby obtaining de novo protein designs that feature specific design characteristics. We illustrate the approach in several examples that reflect the sonification of protein sequences, including multihour audible representations of natural proteins and protein-based musical compositions solely generated by AI. The approach proposed here may provide an avenue for understanding sequence patterns, variations, and mutations and offers an outreach mechanism to explain the significance of protein sequences. The method may also offer insight into protein folding and understanding the context of the amino acid sequence in defining the secondary and higher-order folded structure of proteins and could hence be used to detect the effects of mutations through sound.
the sound produced by a dripping tap is driven by resonant oscillations of an entrapped air bubble
samuel phillips et al. 2018
doi.org/10.1038/s41598-018-27913-0
photosynthesis by marine algae produces sound, contributing to the daytime soundscape on coral reefs
simon e. freeman et al. 2018
doi.org/10.1371/journal.pone.0201766
near-surface environmentally forced changes in the ross ice shelf observed with ambient seismic noise
j. chaput et al. 2018
doi.org/10.1029/2018GL079665
readdle documents (ios)
Movie viewer: .3gp, .l16, .m3u, .m4v, .mm, .mov, .mp4, .scm, .avi, .mkv, .flv;
youtube
media grabber
https://github.com/mediagrabber/ios-workflow/blob/master/media%20grabber.wflow
yet another youtube video/music download shortcut
by
u/arachno7
https://www.reddit.com/r/shortcuts/comments/9pqkfo/yet_another_youtube_videomusic_download_shortcut/
ultimate downloader
https://routinehub.co/shortcut/368
infuse 4
download youtube videos
icab mobile can download some videos
long press on the video
using workflow ios app
reddit thread
https://workflow.is/workflows/adf45a1786ff40efb21f0a2765dae3ea
firefox (mac)
Link: mozilla.org/en-US/firefox
nplayer, splayer not tried
usb cable transfer (quick)
goodreader usb
goodreader.net/usb/
yamaha cp300 (old)
charge ipad 1 passthrough on irig midi
musescore.com
crypt of the necrodancer amplified
apps.apple.com/gb/app/necrodancer-amplified/id1445623416
archive
mimi (ios)
Link: itunes.apple.com/us/app/mimi-music-sound-made-for/id1055611099
caesium (ios)
Link: itunes.apple.com/gb/app/cesium-music-player/id924491991
midi
23 free–mac: download a midi file and play it with just the computer while displaying sheet music. if you want to play the midi through to the piano, then you can download the free version of synthesia and it will do it for you.
aria maestosa and musescore free–mac midi players and editors
AudioKit free apps
audiokitpro.com
synth one
itunes.apple.com/gb/app/audiokit-synth-one-synthesizer/id1371050497
FM player: classic dx synths
itunes.apple.com/gb/app/fm-player-classic-dx-synths/id1307785646
song recording, editing
garageband
logic pro (mac)
make ringtone
Link: youtube.com/watch
logic remote manual
help.apple.com/logicremote/ipad/1.0/#
audacity (mac)
web.audacityteam.org/
apps not yet tried
airfoil
rogueamoeba.com/airfoil/mac/
airserver
mirror ios to mac using quicktime and lightning cable
just press record
itunes.apple.com/gb/app/just-press-record/id1033342465?mt=8
auphonic automatic audio post production web service
auphonic.com/pricing
ferrite for recording audio and editing on ios
wooji-juice.com/products/ferrite/
tutorials
customisable keyboard shortcuts
recording podcasts howto
youtu.be/0K4M7dy1Ehg
imazing
imazing.com/
education discount
imazing.com/store/educational
can transfer some but not most video formats
anatomy of the voice: an illustrated guide for singers, vocal coaches, and speech therapists
theodore dimon 2018
why you love music: from mozart to metallica—the emotional power of beautiful sounds
john powell 2017
sound: a story of hearing lost and found
bella bathurst 2018
the music instinct: how music works and why we can’t do without it
philip ball 2010
the sound book: the science of the sonic wonders of the world
trevor cox 2014
i can hear you whisper: an intimate journey through the science of sound and language
lydia denworth 2014
principles of violin playing and teaching
ivan galamian 1962
speakers
busking
busk.co/blog/
for me, practicing and playing piano is like meditation or yoga or running — serenely in the moment
how I learn piano pieces
i trust that i will learn the piece, even if it seems daunting at first; i know that with spaced recall and starting from the end, i will eventually learn to play it from recall and by touch
learning a piano piece by spaced recall of music notation (traditional notation or synthesia). goal is final ability to perform the piece by touch and sound, with neither notation nor vision
enunciate consonants using just your tongue and the back of your top front teeth. it is possible to do this with all the consonants, even ‘b’
human sound systems are shaped by post-neolithic changes in bite configuration
d. e. blasi et al. 2019
doi.org/10.1126/science.aav3218
they did not seem to consider that fricatives can be made just with upper teeth and tongue
While the teeth of humans used to meet in an edge-to-edge bite due to their harder and tougher diet at the time, more recent softer foods allowed modern humans to retain the juvenile overbite that had previously disappeared by adulthood, with the upper teeth slightly more in front than the lower teeth. This shift led to the rise of a new class of speech sounds now found in half of the world's languages: labiodentals, or sounds made by touching the lower lip to the upper teeth, for example when pronouncing the letter "f."
"In Europe, our data suggests that the use of labiodentals has increased dramatically only in the last couple of millennia, correlated with the rise of food processing technology such as industrial milling," explains Steven Moran, one of the two co-first authors of the study. "The influence of biological conditions on the development of sounds has so far been underestimated."
Interdisciplinary approach to verify hypothesis
The project was inspired by an observation made by linguist Charles Hockett back in 1985. Hockett noticed that languages that foster labiodentals are often found in societies with access to softer foods. "But there are dozens of superficial correlations involving language which are spurious, and linguistic behavior, such as pronunciation, doesn't fossilize," says co-first author Damián Blasi.
In order to unravel the mechanisms underlying the observed correlations, the scientists combined insights, data and methods from across the sciences, including biological anthropology, phonetics and historical linguistics. "It was a rare case of consilience across disciplines," says Blasi. What made the project possible was the availability of newly developed, large datasets, detailed biomechanical simulation models, and computationally intensive methods of data analysis, according to the researchers.
Listening in on the past
"Our results shed light on complex causal links between cultural practices, human biology and language," says Balthasar Bickel, project leader and UZH professor. "They also challenge the common assumption that, when it comes to language, the past sounds just like the present." Based on the findings of the study and the new methods it developed, linguists can now tackle a host of unsolved questions, such as how languages actually sounded thousands of years ago. Did Caesar say "veni, vidi, vici" -- or was it more like "weni, widi, wici'"?
abstract INTRODUCTION
Human speech manifests itself in spectacular diversity, ranging from ubiquitous sounds such as “m” and “a” to the rare click consonants in some languages of southern Africa. This range is generally thought to have been fixed by biological constraints since at least the emergence of Homo sapiens. At the same time, the abundance of each sound in the languages of the world is commonly taken to depend on how easy the sound is to produce, perceive, and learn. This dependency is also regarded as fixed at the species level.
RATIONALE
Given this dependency, we expect that any change in the human apparatus for production, perception, or learning affects the probability—or even the range—of the sounds that languages have. Paleoanthropological evidence suggests that the production apparatus has undergone a fundamental change of just this kind since the Neolithic. Although humans generally start out with vertical and horizontal overlap in their bite configuration (overbite and overjet, respectively), masticatory exertion in the Paleolithic gave rise to an edge-to-edge bite after adolescence. Preservation of overbite and overjet began to persist long into adulthood only with the softer diets that started to become prevalent in the wake of agriculture and intensified food processing. We hypothesize that this post-Neolithic decline of edge-to-edge bite enabled the innovation and spread of a new class of speech sounds that is now present in nearly half of the world’s languages: labiodentals, produced by positioning the lower lip against the upper teeth, such as in “f” or “v.”
RESULTS
Biomechanical models of the speech apparatus show that labiodentals incur about 30% less muscular effort in the overbite and overjet configuration than in the edge-to-edge bite configuration. This difference is not present in similar articulations that place the upper lip, instead of the teeth, against the lower lip (as in bilabial “m,” “w,” or “p”). Our models also show that the overbite and overjet configuration reduces the incidental tooth/lip distance in bilabial articulations to 24 to 70% of their original values, inviting accidental production of labiodentals. The joint effect of a decrease in muscular effort and an increase in accidental production predicts a higher probability of labiodentals in the language of populations where overbite and overjet persist into adulthood. When the persistence of overbite and overjet in a population is approximated by the prevalence of agriculturally produced food, we find that societies described as hunter-gatherers indeed have, on average, only about one-fourth the number of labiodentals exhibited by food-producing societies, after controlling for spatial and phylogenetic correlation. When the persistence is approximated by the increase in food-processing technology over the history of one well-researched language family, Indo-European, we likewise observe a steady increase of the reconstructed probability of labiodental sounds, from a median estimate of about 3% in the proto-language (6000 to 8000 years ago) to a presence of 76% in extant languages.
CONCLUSION
Our findings reveal that the transition from prehistoric foragers to contemporary societies has had an impact on the human speech apparatus, and therefore on our species’ main mode of communication and social differentiation: spoken language.