With our mouths, humans produce sounds beyond speech that can give clues about our physical and mental health. Although we don’t usually think much about them, they contain useful information for diagnosing conditions.
“People are generators of sounds and these can be very helpful for medicine,” explains Norberto Naal-Ruiz, a doctoral student at the School of Engineering and Sciences at Tec de Monterrey, in an interview with TecScience.
Naal-Ruiz works in a discipline that studies the auditory system and its influence on the functions of the nervous system, including the brain. One of its many applications is detecting anomalies in the sounds produced by the mouth in order to diagnose conditions or follow up on treatment.
Mouth Sounds to Diagnose Disease
In Mouth Sounds: A Review of Acoustic Applications and Methodologies, the team found that the analysis of these sounds has been primarily used to monitor the physical and mental state of patients around the world.
Mouth sounds that can be altered by different conditions include those we make when breathing or sleeping, the way we pronounce vowels, and our cries, snoring, babbling, or whistling.
“In Parkinson’s, for example, it has been found that due to the motor instability that characterizes the disease, there are fluctuations in the sounds produced by the mouth,” explains Naal-Ruiz.
In the study of this disease, it has been observed that monitoring these sounds can help detect it in its early stages, contributing to timely treatment that counteracts its effects.
These measurements complement other diagnostic methods and can be used for respiratory complications, such as those caused by COVID-19; sleep disorders, such as apnea; and neurological disorders, such as schizophrenia and cerebral palsy.
Prosody is the rhythm and melody of speech, which varies with each person's accent and intonation. In schizophrenia, for example, one symptom is abnormal prosody, with intonations and stress patterns characteristic of the condition.
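Prosodic analysis typically starts from the pitch (fundamental frequency) contour of speech. As a rough illustration of the idea, and not the research team's actual pipeline, here is a minimal sketch that estimates pitch by autocorrelation on a synthetic vowel-like tone; real prosody tools use more robust algorithms, but the underlying principle is the same:

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) via autocorrelation.

    Toy method: find the lag at which the signal best repeats
    itself, restricted to the typical range of the human voice.
    """
    sig = signal - np.mean(signal)
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sample_rate / fmax)  # shortest period considered
    lag_max = int(sample_rate / fmin)  # longest period considered
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

# Synthetic "vowel": a 150 Hz fundamental plus one harmonic,
# sampled at 16 kHz for half a second.
sr = 16000
t = np.arange(0, 0.5, 1 / sr)
vowel = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

print(int(round(estimate_pitch(vowel, sr))))  # a value close to 150 Hz
```

Tracking how such a pitch contour fluctuates over time is one way the motor instability of a condition like Parkinson's can show up in recorded speech.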
It can also be used in emotional disorders such as post-traumatic stress or depression, and to detect emotions like fear, anguish, sadness, or surprise during psychotherapy sessions, for example.
In conjunction with other tools, analyzing the sounds we produce with our mouths can help track the treatment of any of these conditions.
Standardized Methodologies: The Missing Piece
Although these sound analyses have many applications, the researchers warn in the study that the methodologies for recording and processing them still need to be standardized, since they tend to vary between countries and study groups.
Recordings must be made in a space where outside sounds can be isolated, so that external noise does not leak in and confound the analysis. The microphones used must also be specialized for voice detection so that they capture an adequate frequency range.
“In spaces such as hospitals or clinics, which are very busy, techniques must be implemented to isolate environmental sounds,” explains Naal-Ruiz, who came to the group from digital music production.
Technical aspects, such as the speaker's distance from the microphone and the level at which the recording is made, must also be standardized so that medical institutions can approve these methods as diagnostic or monitoring tools.
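The kind of protocol standardization described above could, in software terms, be captured as a machine-checkable recording specification. The sketch below illustrates the idea; the parameter names and limits are hypothetical examples for illustration, not values from the review:

```python
from dataclasses import dataclass

@dataclass
class RecordingProtocol:
    """Recording parameters a study group might fix in advance.

    The specific limits checked below are hypothetical examples,
    not values from the study discussed in the article.
    """
    sample_rate_hz: int      # e.g., 44100 for voice recordings
    mic_distance_cm: float   # speaker-to-microphone distance
    ambient_noise_db: float  # measured background noise level

    def check(self):
        """Return a list of protocol violations (empty if compliant)."""
        problems = []
        if self.sample_rate_hz < 16000:
            problems.append("sample rate too low for the voice range")
        if not (5 <= self.mic_distance_cm <= 30):
            problems.append("microphone distance outside the agreed range")
        if self.ambient_noise_db > 40:
            problems.append("room too noisy; isolate environmental sound")
        return problems

session = RecordingProtocol(sample_rate_hz=44100,
                            mic_distance_cm=15,
                            ambient_noise_db=35)
print(session.check())  # an empty list means the session is compliant
```

Encoding the protocol this way would let different study groups verify, before analysis, that their recordings were made under comparable conditions.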
To contribute to the field, this group of researchers plans to study the oral sounds that characterize specific conditions and to create a standardized methodology for recording them and extracting their features, one that other groups interested in the field can use.
“We want to promote the research of the anomalies in these sounds, so we need people with that intellectual curiosity,” invites Naal-Ruiz. In his group, there’s a mix of researchers: doctors, psychologists, biomedical engineers, and music producers. “Together we can combine skills and tools that each researcher has,” he says.