Dear Aventine Readers,
Hearing aids have gotten much better over the last decade or so. They've also become more accessible, with over-the-counter options giving people an easy way to try them out. Now AI has entered the picture, delivering improvements that enhance desirable sounds and de-emphasize noise. More advanced models could compensate for neural patterns disrupted by hearing loss, more fully restoring what the brain loses when hearing is impaired. Scientists we spoke to described the advances as a step change: Before AI, hearing aids helped make the world louder; now they can help screen for the parts of it that matter.
Also in this issue: our regular roundup of advances that matter in science and technology, plus magazine and journal articles worth your time.
Thanks for reading!
Danielle Mattoon
Executive Director, Aventine
What's That? The Sound of AI in Hearing Aids
Michael Preuss, a senior audiology manager at Sonova, a hearing aid company, has had severe to profound hearing loss since he was three years old. Over the decades, he says, his life has been transformed by “nerds” from across the hearing aid industry who have built increasingly sophisticated technology.
Their advances have made a difference to millions of people. New approaches to sound processing mean that hearing aids can compensate for hearing loss across different parts of the sound frequency spectrum, rather than simply amplifying everything. Directional microphones make it easier to focus on sounds directed at the wearer, improving conversations. Bluetooth connectivity makes phones far easier to use.
Now a new shift is underway. The latest generation of hearing aids uses artificial intelligence to amplify important sounds while suppressing background noise, the kind of advance that could make following a voice in a noisy room significantly easier.
Yi Shen, an associate professor at the University of Washington who specializes in hearing loss and machine learning, said AI systems running on powerful computers have been able to process sound in this way for about a decade. The challenge has been translating this ability to devices that can sit behind an ear, run on a tiny battery and function without any perceptible delays.
A handful of companies now say they've cracked it. By training AI models, compressing them to run on compact hardware and designing power-efficient chips, established hearing aid makers such as Phonak and ReSound, along with the upstart Fortell, have introduced products that deliver meaningful improvements in noisy environments.
More than 400 million people with hearing loss could benefit from hearing aids, but only 17 percent of those use them. That is starting to change: In 2022, the FDA opened the door to over-the-counter hearing aids, prompting companies such as Sennheiser, Sony and Apple to develop products. Apple's AirPods Pro, at $249, now include a hearing-aid mode that, while not custom-fitted or professionally calibrated, gives users a sense of what hearing assistance can do.
Experts Aventine spoke to see the increased availability of over-the-counter options as positive, lowering the barrier of cost and reducing stigma. And the benefits of wider access to hearing aids go beyond better conversations: A trial published in The Lancet in 2023 found that hearing aid use slowed cognitive decline by as much as 48 percent in older adults at high risk for dementia. Smarter hearing aids could have a profound impact on lived experience, and even on other medical outcomes. “We need to create greater benefit than that [which] has been there before,” said Preuss.
How AI can isolate speech amid noise
If you show an AI enough pictures of cats, it will develop the ability to identify them. In the same way, an AI trained on enough examples of human voices and noise can learn to isolate speech from a noisy recording. That is what AI-enabled hearing aids promise: to amplify desirable signals, especially voices, so they stand out from background noise.
To do this, engineers create paired datasets: clean recordings of someone speaking, and busier versions with different types and levels of background sound layered on top. A deep neural network is then trained to distinguish the speech from the noise. Run on a powerful computer, the resulting model can separate the two with relative ease. But neural networks are computationally demanding, and to be used in hearing aids they must be scaled down to run on a device small enough to sit behind the ear. They also have to be fast: FDA guidance says the lag between a sound and the processed output should be less than 15 milliseconds. Any more, and the experience becomes jarring for the person wearing the hearing aid.
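To make that pipeline concrete, here is a minimal sketch in Python with PyTorch. Everything in it is an illustrative assumption: the synthetic stand-ins for speech and noise, the tiny network and the mask-predicting design are for demonstration only, not any manufacturer's actual system.

```python
import torch
import torch.nn as nn

N_FFT, HOP = 256, 128

def spectrogram(wave):
    # Magnitude spectrogram with shape (frequency bins, time frames).
    spec = torch.stft(wave, n_fft=N_FFT, hop_length=HOP,
                      window=torch.hann_window(N_FFT), return_complex=True)
    return spec.abs()

# Stand-ins for a real corpus: a pure tone plays the role of "clean speech",
# random samples play the role of "background noise".
clean = torch.sin(torch.linspace(0, 800 * torch.pi, 16000))
noise = torch.randn(16000) * 0.5

# The paired example: the noisy mixture is the model input,
# the clean recording is the target.
noisy = clean + noise

# A small network that predicts, for each time-frequency bin, how much of the
# noisy spectrogram to keep (a mask between 0 and 1).
n_bins = N_FFT // 2 + 1
model = nn.Sequential(
    nn.Linear(n_bins, 128), nn.ReLU(),
    nn.Linear(128, n_bins), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean_mag = spectrogram(clean).T   # (frames, bins)
noisy_mag = spectrogram(noisy).T

for _ in range(200):
    mask = model(noisy_mag)        # predicted per-bin gain
    loss = ((mask * noisy_mag - clean_mag) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A production model would train on hours of real recordings and resynthesize audio from the masked spectrogram, but the core loop, predicting a noise-suppressing mask from paired examples, is the same shape as this sketch.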
Two developments have made this feasible. One is a streamlining of the AI models. Shen explained that the mathematical precision in a neural network can often be reduced without sacrificing too much performance, a process called quantization. The other is hardware. Specialized processors can be optimized for a particular task, making them faster and more power-efficient than general-purpose alternatives. Fortell, based in New York, and Phonak, a subsidiary of Sonova in Switzerland, have developed custom chips for their AI-enabled devices. ReSound, a hearing aid brand that is part of GN Hearing, takes a slightly different approach, using off-the-shelf AI chips.
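Quantization itself can be shown in a few lines. The sketch below uses PyTorch's post-training dynamic quantization to store a toy network's weights as 8-bit integers rather than 32-bit floats; the network is a hypothetical stand-in, and real hearing aid firmware is built with custom toolchains for custom chips.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network; the layer sizes are arbitrary.
model = nn.Sequential(nn.Linear(129, 128), nn.ReLU(), nn.Linear(128, 129))

# Post-training dynamic quantization: weights of the listed layer types are
# converted from 32-bit floats to 8-bit integers, shrinking weight storage
# roughly fourfold, usually with only a small loss in accuracy.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```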
All three companies now sell hearing aids that run AI directly on the device. A pair typically costs between $4,000 and $7,000. These are prescription devices that differ from over-the-counter devices in important ways: they are fitted and tuned by an audiologist to provide highly tailored hearing improvement, they use far more sophisticated algorithms and, under FDA rules, they can provide greater amplification for people with mild to profound hearing loss.
Hearing the difference
These devices all work in subtly different ways. AI is typically used alongside traditional signal-processing techniques, and manufacturers make different choices about how to combine them. A sound-enhancement technique known as beamforming, for instance, uses timing differences between two microphones to determine which direction sounds are coming from, allowing the device to prioritize sounds from directly ahead of the wearer. This is the method both ReSound and Phonak use to isolate sound directed at the listener; Fortell says its AI model achieves something similar without a separate beamforming algorithm. Other systems classify the sound environment — a restaurant, a concert, a living room — and adjust audio parameters accordingly. Some companies offer hearing aids that automatically adjust to new sound environments, while others engage users more directly in the process.
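The beamforming idea mentioned above can be sketched simply: align the two microphone channels for a chosen direction, then average them, so that on-axis sound reinforces and off-axis sound partially cancels. The code below is a bare-bones delay-and-sum example under illustrative assumptions (an 8-centimeter microphone spacing and a 16 kHz sample rate); real hearing aids place their microphones roughly a centimeter apart and need fractional-sample filters, where this sketch uses whole-sample shifts to stay short.

```python
import numpy as np

SAMPLE_RATE = 16_000      # Hz; illustrative
MIC_SPACING = 0.08        # meters; illustrative, far wider than a hearing aid's
SPEED_OF_SOUND = 343.0    # meters per second

def delay_and_sum(front_mic, rear_mic, angle_deg):
    """Steer a two-microphone array toward angle_deg (0 = straight ahead).

    Sound from the steered direction lines up in both channels and is
    reinforced by averaging; sound from other directions arrives with a
    timing offset and partially cancels.
    """
    # Extra travel time to the rear microphone for a source at this angle.
    delay_s = MIC_SPACING * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    delay_samples = int(round(delay_s * SAMPLE_RATE))
    # Shift one channel to align the target direction, then average.
    aligned = np.roll(rear_mic, -delay_samples)
    return (front_mic + aligned) / 2.0
```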
How effective are these systems? This is where things get complicated, as there is no universal standard for testing and comparing hearing aids. Manufacturers run their own tests, typically placing a speech source in one location and noise sources in others. But the position of speakers, the kind of noise that is used and the configuration of environments can favor one device over another.
Laurel Christensen, chief audiology officer at GN Hearing, offers an example. In some settings, one of a wearer's two ReSound hearing aids will default to an omnidirectional mode while the other uses beamforming. This preserves 360-degree awareness while also enhancing speech (somewhat like bifocal eyeglasses, which allow both near and distance vision from different parts of the lens). The company argues this is more useful. But a test designed to reward a hearing aid that prioritized only sound from in front of the wearer would penalize this device.
Regardless of the approach, the new hearing aids do seem effective at improving hearing in noisy environments. Phonak found that its AI approach increased speech intelligibility by 50 percent over its previous methods while reducing listening effort. More anecdotally, Fortell’s marketing includes videos of its technology being used in noisy locations such as restaurants and Grand Central Station; the voice isolation feature can be toggled on and off, and the effect is striking.
Fortell has also, in experiments with NYU Langone, reported a 9.2-decibel improvement in signal-to-noise ratio compared to what it calls “the leading premium AI hearing aid” — an increase that corresponded to understanding 10 times more words amid challenging, multi-talker noise. Given the lack of standardized testing, such claims about the performance of one cutting-edge device over another warrant caution. And to some extent, hearing aid choice also comes down to personal preference: Some users value aggressive noise suppression while others find it sterile. “As a hearing aid wearer, I still want to feel that I'm in a restaurant,” said Preuss. “I don't want to feel like I'm in a glass bubble.”
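For context on the unit: decibels are logarithmic, so a 9.2-decibel gain in signal-to-noise ratio corresponds to the power of speech relative to noise improving by a factor of about 8, as this one-line check shows.

```python
# A decibel gain converts to a linear power ratio as 10 ** (dB / 10).
print(10 ** (9.2 / 10))  # ~8.3: speech stands out about 8x more, in power terms
```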
The start of something bigger
The results of current AI approaches are “extremely impressive…at pulling speech out of noise,” said Nicholas Lesica, a professor of auditory neuroscience at University College London. But he also thinks they address only one part of the problem — hearing speech. “This approach tries to get around dealing with the full complexities of hearing loss by instead helping the user with a specific task,” he wrote in an email to Aventine.
As the ear deteriorates, he explained, things don't just get quieter: Physical changes distort the information being sent to the brain, and amplification alone can't fully solve that problem. “The ear essentially sends a barcode to the brain to identify each sound it hears,” he said. “After hearing loss, the black and white bits of the barcode get scrambled. Making the sound louder might make the barcode clearer, but it's still the wrong code.”
His lab is exploring whether a deeper understanding of the brain's response to sound could lead to better hearing aids. Working with gerbils, whose hearing resembles that of humans, his team has studied brain activity before and after hearing damage to build AI models that manipulate sound to recreate original neural patterns. The lab successfully recreated healthy neural patterns in animals with impaired hearing and — along with Perceptual Technologies, a startup focused on commercializing this research — secured funding to translate the approach for humans.
Elsewhere, Dhruv Jain, a professor of computer science and engineering at the University of Michigan who has hearing loss, is exploring how AI might deliver richer streams of information. Hearing aid companies can be “pretty narrow” in their thinking, he said, and his lab is developing approaches that go beyond simply cleaning up audio. An early project called SoundWatch used AI-based audio detection to identify specific sounds — a microwave ping, a baby crying, a doorbell — and alert the wearer to them in real time via their smartwatch.
As Christensen sees it, continued advances in chips and battery performance will allow more powerful AI models to improve hearing aids over time. As those enhancements roll out, the capabilities emerging today may come to look like only a first step. Eventually, the work of researchers like Lesica and Jain could find its way into commercial hearing aids.
That shouldn't be seen as a criticism of current advances. For a long time, hearing aids helped make the world louder. Now, AI is making them better at screening for which parts of it matter. And “it's just starting,” said Preuss.
Listen To Our Podcast
Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.
Advances That Matter
Scientists are trying to build more versatile vaccines. The idea behind vaccination is straightforward: Teach the immune system to recognize a virus before the real thing shows up. But for many viruses — including influenza and coronaviruses — the parts of the virus used in vaccines are also the parts most likely to mutate, allowing future generations of the pathogen to slip past the immune system. The Economist reports that researchers are trying to design vaccines that are more versatile. One strategy is to target viral features that mutate less often. A team at Caltech has developed an experimental vaccine built from fragments of eight subtly different viruses, which prompts the immune system to focus on what those viruses have in common rather than on the parts that vary. Early mouse studies suggest that the approach could offer broader protection than current options. Another approach attempts to keep parts of the immune system on constant high alert against infection instead of priming the body against a particular pathogen. Inspired by the century-old BCG vaccine — which reduced deaths from unrelated infections by boosting the body’s fast-action, general immune defenses — a team at Stanford University has shown that a harmless antigen can prime the lungs of mice to remain on high alert against infection for months. Neither approach is yet ready to replace traditional vaccines, and mouse results often don't translate to humans. But researchers are increasingly seeing a path toward building vaccines that are less specific and, as a result, far more robust.
Intel and TSMC are rushing to expand US chip-packaging capacity. “Chip packaging” may sound like stacking CPUs in a box ready to be shipped. It's not. It’s an increasingly important part of semiconductor manufacturing that has become a bottleneck in cutting-edge chip production. The term refers to the ever more sophisticated process of assembling multiple small parts — sometimes in two dimensions, increasingly in three — into a single high-performance system. That allows companies to place components such as CPUs, GPUs and high-bandwidth memory much closer together than before, improving performance while reducing power use — a dynamic that has become increasingly important in AI hardware. Currently, though, almost all of this capability is in Asia, and Nvidia has reserved most of the capacity available from Taiwan’s TSMC, the current market leader. Now, as Wired reports, Intel has revived dormant fabrication facilities in Rio Rancho, New Mexico, and is investing billions of dollars in turning them into advanced-packaging hubs. At the same time, TSMC is breaking ground on two new packaging-focused plants in Arizona. Companies including Amazon, SpaceX and Tesla are reportedly already in talks with Intel to tap into its emerging capabilities. Interestingly, customers can use Intel’s facilities for packaging even if they purchased components elsewhere. That’s a departure from the end-to-end approach Intel took in the past, and a sign that the AI chip race is becoming more complex as it evolves.
AI can help states grapple with the Colorado River’s decline. The Colorado River is in trouble. River flows are down roughly 20 percent from 2000 levels, Lake Powell is at risk of losing hydropower capacity and negotiations over how states should share the shrinking water supply have repeatedly broken down. AI cannot fix the politics or produce water. But, as IEEE Spectrum reports, it may help officials navigate the complexity of managing the crisis. The Bureau of Reclamation, for instance, has developed AI systems that can predict how much water will be flowing through the river at a particular time (also known as streamflow forecasting) more accurately than traditional methods. These forecasts now update hourly and can predict flood risks as much as seven days in advance, compared with only three days previously. A team at Metropolitan State University of Denver is using data from NASA satellites rather than ground-based gauges to predict drought conditions months ahead. And researchers at Utah State University are building models that can trace how changing conditions at one point in the river system ripple downstream to affect other regions. One notable wrinkle is that models trained on the full historical record of the river often perform worse than those trained only on the last decade’s worth of data because the river’s recent behavior — with prolonged droughts caused by climate change — no longer resembles the conditions during much of the 20th century. AI systems will not answer tough questions like who gets less water, and when. But better forecasting could make the tradeoffs clearer to policymakers who have to face up to unpleasant realities.
Magazine and Journal Articles Worth Your Time
My Quest to Solve Bitcoin’s Great Mystery, from The New York Times
12,000 words, or about 50 minutes, or you can listen here
Who is Satoshi Nakamoto, the presumably pseudonymous creator of Bitcoin? Seventeen years after Bitcoin was introduced, the true identity of its creator remains one of the most tantalizing mysteries in technology. Now John Carreyrou, best known for exposing fraud at Theranos, has taken a run at it. The fruits of his investigation deliver a delightful read if you like solving puzzles. His approach combines old-school reporting, AI tools and a great deal of grammatical detective work. He digs into Satoshi’s writing style, studying punctuation, phrasing and even misplaced hyphens, while also analyzing early activity on the cryptography mailing lists where Bitcoin first took shape. Along the way, he brings in forensic linguists and uses AI tools to sift through large volumes of text in search of patterns. All of that work leads him to one person: Adam Back, a British computer scientist and early figure in the crypto world. Carreyrou builds a case that Back’s writing style, technical background and online activity line up with what is known about Satoshi. Back, for his part, flatly denies the association. But in doing so, he seems to give away even more clues that support Carreyrou’s reporting. You’ll need to read the story yourself to make up your mind.
A bad crowd, from Science
3,000 words, or about 12 minutes
For decades, cancer researchers have largely assumed that metastasis — the spread of cancer responsible for around 90 percent of deaths — was driven by lone cells breaking away from a primary tumor and seeding new tumors elsewhere in the body. This story describes how that thinking is starting to shift. In many cases, it turns out, these rogue cancer cells don't travel solo, but instead in packs. Researchers are finding that clusters of up to 100 cells can detach from a tumor, enter the bloodstream and establish new growths. The bad news is that these groups of cells are better able to survive immune attack and the physical jostling of the bloodstream and, as a result, are estimated to be 50 to 100 times more likely to successfully form metastases than single cells. The good news is that this shift in understanding is also opening up new approaches to tackling cancer by targeting the clusters. One example uses digoxin, a drug traditionally used to treat heart conditions, which appears to weaken the bonds between cancer cells and has shown early promise in shrinking clusters in breast cancer patients. Another involves clot-busting drugs that target the proteins helping these cell groups stick together. It is not yet clear how well these approaches will translate into widely used treatments. But better understanding the roots of metastasis could help slow or prevent one of cancer’s nastiest behaviors.
Scientists invented a fake disease. AI told people it was real, from Nature
2,400 words, or about 10 minutes
“Bixonimania, a recently identified dermatological condition characterized by periorbital hyperpigmentation, is hypothesized to be linked to blue light exposure,” reads the first line of a pre-print academic paper published online in spring 2024. It sounds like a real condition. It isn’t. It was invented by a medical researcher who published two obviously fake academic papers in 2024 to discover whether AI systems would absorb and repeat fabricated medical information as if it were true. According to this Nature story, they did, and very enthusiastically. ChatGPT, Google Gemini, Microsoft Bing Copilot and Perplexity all reportedly described the fictional disease with confidence, including made-up symptoms and prevalence. The fake papers also found their way into peer-reviewed literature, suggesting that some researchers are allowing AI-generated references to seep into their work without checking the original sources. The hoax was not subtle. The manufactured research came from a made-up university, thanked the “Professor Sideshow Bob Foundation” for funding, cited guidance from the crew of the USS Enterprise and explicitly stated that “this entire paper is made up.” The problem is that once bad information is ingested by AI, a model can launder it into something that looks legitimate, and — at least for now — there’s no way to consistently prevent fraudulent research from mixing with legitimate work. While we now know that Bixonimania is fake, it’s an open question how many other deceptions are in circulation.