Newsletter / Issue No. 63

Image by Ian Lyman/Midjourney.


Thu 19 Mar, 2026


Dear Aventine Readers,

Catastrophic narratives about AI's effect on jobs, education and the future of humanity itself have been flooding the media. You've probably read about them here! And while authors say their intention is to prepare people and prompt action, an IV drip of doom may have the opposite effect. Read on to learn about the history of AI doomerism and why it's so powerful.

Also in this issue:

  • A physics breakthrough could change cancer treatment.
  • We may be able to stop lightning that sparks wildfires. Should we?
  • A new, inference-only AI chip promises to use 90 percent less energy.
  • And, from New Scientist, the real reasons birth rates are declining worldwide.

As always, thanks for reading,

    Danielle Mattoon
    Executive Director, Aventine


    The Big Idea

    AI Doomerism Has Gone Mainstream

    A sense of collective panic over a world being reshaped by AI has become an unavoidable part of modern life.

    In recent weeks, you may have read Matt Shumer’s essay “Something Big Is Happening” about AI’s impact on jobs, or Citrini Research’s fictional analyst note, “The 2028 Global Intelligence Crisis,” about AI’s ability to crash financial markets. Perhaps you followed the OpenClaw debacle, in which a chunk of the tech community briefly convinced itself that AI agents were on the cusp of developing bot-only languages humans wouldn’t understand. Other prognostications about how AI will overwhelm, overpower or otherwise undo us will certainly appear. 

    Each of the stories above earned plenty of rebuttals in the hours and days after they became news. Many argued that Shumer’s essay, for instance, told only half the story and that Citrini’s note messed up its economic theory. The threat of OpenClaw bots building a language to plot against humans turned out to be fake. But counterclaims, context and corrections don’t take hold like predictions of an imminent threat. 

    The question, though, is what all this doom-mongering accomplishes. Is it a warning siren that spurs action, or an invitation to resign ourselves to unwanted outcomes?

    The power of pessimism 

    Pessimistic narratives around AI can trace their origins back at least 60 years, to Irving John Good’s 1966 paper, “Speculations Concerning the First Ultraintelligent Machine.” Since then, they have largely existed in niche communities like philosophy circles and online message boards, said John Danaher, a senior lecturer at the University of Galway in Ireland who specializes in legal philosophy, emerging technologies and the future of human society. But as “models increase in their apparent capabilities and powers,” he said, “people are kind of picking [the idea] up off the shelf and running with it.”

    Negative thinking around AI exists on a spectrum, he added. At one extreme sits classical “doomerism” associated with thinkers like Nick Bostrom and Eliezer Yudkowsky, who have warned about existential risk and even human extinction at the hands of AI since the early 2000s. But between extinction risk and enthusiasm for the technology are a wide range of other concerns: economic fears about job losses and financial instability; political worries about AI-enabled warfare, mass surveillance and psychological manipulation; and social harms such as deepfake pornography and AI’s role in self-harm and suicide. These concerns feel more immediate not only because they’re easier to imagine than extinction, but because some version of them is already real — particularly economic fears for white-collar workers, who find themselves increasingly using AI.

    The fact that highly pessimistic AI narratives are beginning to take hold isn’t too surprising for several reasons, said Ravi Sreenath, co-founder and CEO of Ripple Research, which analyzes online narratives that shape public behavior. Extreme voices get more traction than mild ones, he noted. And the audience is ripe for persuasion: Most people have heard plenty about AI but only a minority are deeply immersed in it. And many of the arguments used in these sorts of narratives are highly emotive and difficult to disprove. “Your jobs will be lost because of AI, or AI is coming for your kids' futures,” he said. “This is something that you can't really debunk.”

    Neuroscience also helps explain why negative storylines capture the imagination. Tali Sharot, a professor of cognitive neuroscience at University College London and MIT, explains that in situations of high ambiguity and stress — such as living through the dawn of a new technology that could transform our lives — humans gravitate toward pessimism. We are also more likely to be pessimistic when we don’t feel a strong sense of agency, she said.

    Moving on from doom

    Pessimism can get our attention and help us think through what can go wrong, said Danaher, but it can also cause people to disengage. “In general, our brain is actually wired in a way where the expectation of a positive thing elicits action and the expectation of negative actually elicits inaction,” said Sharot. Confronted by dystopian views of the impact of AI on society, people may simply give up. “You shut down, you live in a cave, you throw your hands up and say, ‘Well, there's nothing I can do,’” said Danaher.

    Researchers are already concerned about this happening around AI. “We sometimes sit too comfortably with two fatalist narratives,” said Alondra Nelson, a professor at the Institute for Advanced Study, during a talk at the recent International Association for Safe & Ethical AI conference in Paris. "'It's moving so fast, the technology, what can we do?' and the other that says, 'We have no idea this [works], there's nothing that we can do’ … Both of these lead us down a road to paralysis." 

    The authors of viral pessimistic posts have argued that they aren’t trying to frighten people. “I didn’t put this out to scare people,” Shumer told New York magazine. “My goal is to help people see what they might be neglecting … because they should be able to know and make their own decisions for how to prepare or not prepare.” The authors of the Citrini note wrote that they hoped readers would feel “more prepared for potential left tail risks as AI makes the economy increasingly weird.”

    But if the intention is to stir action, Sharot and Danaher both argued that highlighting the downside alone won’t achieve it. Rather than focusing only on what might go wrong, they said, authors might try to describe what a better outcome looks like and crucially, how we might move toward it. “You want to say what that [positive outcome] is, and then you want to be very explicit about the actions that you need to take to get us there,” said Sharot. Shumer’s post gestured at coping strategies — use AI more, plan finances for a world with less work — but offered no insight into how to shape our collective future. Citrini proposed no positive path forward at all. If AI alarmists genuinely want to help people navigate what’s coming, that will need to change.

    Without a credible route to improvement, doom can be self-reinforcing. Under threat or stress, Sharot said, people become hyper-vigilant about negative information, often creating a feedback loop that drives them to seek out more negative content. And AI pessimism doesn’t exist in isolation, Danaher pointed out. It sits atop other forces that many people already find destabilizing: the rise of right-wing populism, climate anxiety and active military conflicts. 


    Quantum Leaps

    Advances That Matter

    A physics breakthrough could change cancer treatment. In particle physics labs around the world — including CERN near Geneva, SLAC in Menlo Park and the Photo Injector Test Facility outside Berlin — researchers are developing a radical new form of cancer treatment. Known as FLASH radiotherapy, the technique delivers an ultrahigh dose of radiation to a tumor in less than a tenth of a second. As IEEE Spectrum reports, the dose is typically at least five times stronger than that used in conventional radiation therapy, yet it appears to destroy cancer cells while largely sparing surrounding healthy tissue — something traditional approaches cannot do. Exactly why this happens remains unclear, though one leading hypothesis is that cancerous cells may process reactive oxygen species — unstable molecules created during exposure to radiation — differently than regular tissue does during this ultrafast treatment. Whatever the mechanism, results from animal studies and some early human tests suggest the effect is real. Turning the approach into a practical medical treatment, though, is a major engineering challenge: Producing radiation beams powerful enough for FLASH therapy requires large particle accelerators, machines rarely found outside research labs. A collaboration between CERN scientists, Lausanne University Hospital and the French medical technology company Theryq hopes to change that by developing systems that could be installed in hospitals. So far, the team has built a prototype capable of treating tumors on the surface of the body and has begun early clinical trials. Its next goal is a more powerful system capable of targeting tumors up to 20 centimeters deep. Researchers predict it could take a decade for the technology to mature and gain regulatory approval, but if it succeeds, FLASH radiotherapy could transform how cancerous tumors are treated.

    We may be able to stop lightning that sparks wildfires. Should we? A Vancouver startup called Skyward Wildfire says it may be able to reduce wildfire risk by suppressing lightning strikes — the cause of roughly 60 percent of Canada’s wildfires during its record-breaking 2023 fire season. The company recently raised $5.7 million to help pursue that goal. An investigation by MIT Technology Review examines how the technology works and whether the claims hold up. Skyward has been secretive about its methods, but documents reviewed by the magazine suggest that the company releases fiberglass strands coated with aluminum, known as chaff, into storm clouds to interfere with the way electrical charge builds up in the atmosphere. The material is borrowed from the military, where it is released from fighter jets to confuse guided missiles, and some US agencies found decades ago that it could affect lightning strikes. Skyward says field tests conducted in 2024 showed a 60 to 100 percent reduction in lightning when the technique was used compared to when it was not. But scientists remain cautious. Researchers who have studied lightning suppression say the underlying idea may have some merit, but earlier experiments relied on small datasets and weak methodology. Unpublished analysis by researchers from New Mexico Tech found that storms containing chaff actually appeared to produce more total lightning, not less. There are also major unanswered questions. What environmental effects might come from dispersing large quantities of chaff into the atmosphere? And should private companies conduct weather-modification experiments with little transparency? Like proposed geoengineering approaches to climate change, lightning suppression is an intriguing technological solution, but the science may not yet justify the confidence behind the claims.

    A new, inference-only AI chip promises to use 90 percent less energy. A Toronto-based startup called Taalas says it has developed AI hardware that could dramatically cut the energy required to run artificial intelligence systems. Its chips, the first of which is called HC1, take a different approach from the GPUs that dominate the industry today. The chip cannot be used to train AI models as GPUs are; instead, it is designed solely for inference, the stage at which a trained model generates outputs. Taalas hardwires its chip to run a specific AI model, effectively printing circuits tailored to that model directly onto its silicon; the first version is optimized to run Meta’s Llama 3.1 8B. According to the company, the chip is roughly 10 times faster and consumes 10 times less power than current state-of-the-art alternatives, while costing about 20 times less to produce. That sounds incredible — and the claims have not yet been independently verified — but there are significant barriers to adoption even if the chips do what Taalas claims. Major AI labs update their models multiple times per year, raising the question of whether hardware built around a specific model could become obsolete too quickly to justify the investment. The AI chip sector is also dominated by Nvidia, which controls roughly 90 percent of the market and whose chips run on its proprietary software platform, CUDA. That ecosystem makes it hard for companies to switch to new hardware. Nvidia has invested heavily in inference-focused technology itself, including licensing technology from the startup Groq. Breaking into the AI hardware ecosystem may prove harder than building the chip itself.

    Long Reads

    Magazine and Journal Articles Worth Your Time

    Buckle Up for Bumpier Skies, from The New Yorker
    8,500 words, or about 33 minutes

    There is turbulence inside clouds, and then there is clear-air turbulence, the invisible kind that forms far from storms and cannot be detected by radar. That’s the sort that struck Singapore Airlines Flight SQ321 in 2024. Passengers were hurled through the cabin as the aircraft encountered violent, chaotic air currents. Of 211 passengers on board, 104 were treated for injuries, 17 required surgery, six suffered skull or brain injuries, one passenger was left paralyzed and one person died. Unfortunately, this sort of turbulence is becoming more common. Studies suggest that between 1958 and 2001, the frequency of clear-air turbulence increased by 40–90 percent over Europe and North America due to climate change. By the middle of this century, moderate or greater turbulence on North Atlantic routes could rise by as much as 170 percent. At the same time, many aircraft certification rules are still based on turbulence data collected in the 1960s. This New Yorker feature explores turbulence from several vantage points: from aboard the world’s most turbulent flight route — Mendoza, Argentina, to Santiago, Chile — which passes over the Andes; inside the National Center for Atmospheric Research where researchers are studying the fluid dynamics of the atmosphere; and at the labs at Boeing, where engineers are attempting to design aircraft that can better reduce the effects of violent air currents. The good news for nervous flyers is that researchers are making progress in designing systems that help aircraft ride out turbulence more smoothly. The bad news is that the most dangerous jolts remain largely unpredictable and, for now, almost impossible to control.

    The real reasons birth rates are declining worldwide, from New Scientist, and Five ways demographics are transforming the world economy, from The Financial Times

    2,200 words, or about 9 minutes, and 2,300 words, or about 9 minutes

    The causes of falling birth rates are usually explained in broad, macro terms: the rising costs of raising children, women’s expanding career opportunities or the availability of contraception. Those forces matter, but they may not tell the whole story. This New Scientist piece explores the issue at a more granular level through research by a University of Oxford researcher who has been studying the reasons why, for every three babies that are wanted by mothers and fathers in the UK, only two are born. Focusing on how different demographic groups think about having kids, she reveals, for example, that women with more education want partners to share in childcare duties while women with less education say they want committed relationships. Some women want to own homes before having children; others rent because they don’t want the debt. The personalized approach suggests that different solutions will be needed for different populations if the slide in fertility rates is to be stopped or slowed. Meanwhile, The Financial Times examines how shrinking populations will reshape the global economy in the decades ahead. Among the likely effects: longer working lives, greater strain on pension and welfare systems, potentially slower economic growth. Some economists hope technologies such as AI could offset falling productivity, but that remains uncertain. Taken together, the two stories describe a complex path forward: The Financial Times points out that stronger economic incentives will likely be required to encourage families to have children, while the New Scientist article illustrates why any such policy will need to be multifaceted if it’s to work effectively.

    Cancer blood tests are everywhere. Do they really work? from Nature
    2,600 words, or about 10 minutes

    A new generation of blood tests promises to detect the presence — or confirm the absence — of dozens of cancers using a single blood sample. As many as 40 of these tests now exist, including commercially available products like Galleri and Cancerguard, which cost somewhere between $600 and $1,000. The idea behind these so-called multi-cancer early detection tests is scientifically sound: They look for traces of tumor DNA or other biomarkers circulating in the bloodstream. Similar techniques are already widely used clinically, to monitor how known cancers respond to treatment. But as this Nature story explains, detecting early-stage cancers turns out to be much harder in practice, because of the tiny quantities of biomarkers present in a sample. Across multiple clinical trials, these tests are quite good at ruling cancer out: They correctly identify people without the disease in roughly 96 to 99.5 percent of cases. The challenge comes when they try to identify people who do have cancer, with studies suggesting the tests correctly detect them only about 30 to 80 percent of the time. And positive results can be misleading: In one recent trial of Galleri, only about half of the people who received a positive test result were ultimately diagnosed with cancer. For now, that means the technology may be better suited to narrower applications — such as monitoring patients for residual disease after treatment — rather than serving as a broad public health tool.
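    The arithmetic behind that last result is worth making explicit: Even a highly specific test produces many false positives when most people screened are cancer-free. A minimal Bayes'-rule sketch, using illustrative numbers within the sensitivity and specificity ranges cited above and an assumed 1 percent prevalence of detectable cancer among those screened:

```python
def ppv(sensitivity, specificity, prevalence):
    """Share of positive results that are true positives (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed, illustrative inputs: 50% sensitivity, 99.5% specificity,
# and 1% of the screened population actually harboring a detectable cancer.
print(round(ppv(0.50, 0.995, 0.01), 2))  # → 0.5
```

    Even at 99.5 percent specificity, false positives drawn from the large cancer-free majority roughly match the true positives, which is why only about half of positive results in such a trial lead to a cancer diagnosis.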


    contact

    380 Lafayette St.
    New York, NY 10003
    info@aventine.org


    © Aventine 2021
    Privacy Policy.