Newsletter / Issue No. 51

Image by Ian Lyman/Midjourney.

11 Dec, 2025

Dear Aventine Readers,

Autonomous vehicles, autopilot on planes, the blinking sensors on washing machines: Automation has become an inescapable part of modern life. There are massive benefits, of course. In many ways, automation has made life much easier. But there are threats as well, as machines play a larger role in society’s critical functions.

We ask experts from the field of human factors research to assess some of the benefits and dangers of increased automation. How can we prevent accidents in complex systems? And who gets blamed when those failures occur?

Also in this week’s newsletter: 

  • Google’s Gemini 3 makes huge gains in the AI race 
  • New devices can now read our preconscious thoughts 
  • Lab-grown fat makes lab-grown meat taste better 
  • And can China produce the next blockbuster drug?

    Thanks for reading!

    Danielle Mattoon 
    Executive Director, Aventine


    Five Ways to Think About…

    Can We Automate Machines Without Giving Up Control?

    Perhaps no relationship has changed more in the 21st century than the one between people and machines. The internet revolution allowed smartphones to take care of everyday tasks like communication, banking and shopping. Then e-commerce led to a huge increase in the use of robots in warehouses and factories, and sensor-equipped smart technologies embedded themselves in common items like cars, doorbells and household appliances.

    Now AI adoption is turbocharging that process, redefining work in offices, research labs and factories, on the roads and in the sky. (A report from McKinsey, released last month, estimated that currently proven technologies could, in theory, automate the activities of 57 percent of all work hours in the United States.)

    As automation increasingly takes the lead in critical systems like transportation, medicine and heavy machinery, the risks of catastrophic error evolve. Tesla, for example, is being investigated by the National Highway Traffic Safety Administration over reports that the doors on some of its models become inoperable after a battery failure or crash knocks out the electronic system that powers the handles. There are manual overrides, but they are hard to access — at the bottom of a compartment below the door handle — and difficult to engage. The company is also facing several lawsuits from the families of people who died after being trapped in Teslas after crashes.

    Missy Cummings, a former United States Navy pilot and the current director of the Mason Autonomy and Robotics Center at George Mason University, called the Tesla doors a classic case of poor human systems engineering. “Of course people wouldn’t know where to find the manual controls. You need transparency,” she said. “There’s something called a ‘principle of proximity,’ which says that the control for something needs to be near the thing that it’s controlling.”

    A more complex case involved the two fatal crashes of Boeing 737 Max planes in 2018 and 2019, which killed a total of 346 people. Initially, Boeing blamed pilot error, but investigations revealed a multitude of problems, including a faulty angle-of-attack sensor that misread the angle between the plane’s wings and the oncoming air and had no backup.

    Boeing had changed the plane’s flight-control software, adding a system that automatically pushed the nose down when the sensor indicated the plane was close to stalling, but wanted to avoid the cost of extensively retraining pilots. Once in the air, the pilots of the two doomed planes didn’t understand why the nose was being forced downward and couldn’t right the plane.

    “Airplanes usually have four-times redundancy. But in the particular case of the 737, they had only one angle-of-attack sensor. And the reading was wrong,” said Joseph Katz, an aerospace engineer who has studied plane crashes for more than 30 years. “If the sensor decided the airplane is close to stalling, it forced the front of the plane to go down … The pilots didn’t know what was happening,” Katz said. “They tried to steer it, and it wouldn't let them. Poor guys.”

    Cummings and Katz belong to an ever more important interdisciplinary field called human factors, comprising engineers, systems designers and psychologists charged with creating and overseeing the environments where humans and machines interact. We asked them and three other experts in the field to discuss the increasing role that automation plays in our society and to offer some guidelines on how we can safely and efficiently integrate new technologies into our work and lives.

    “Our field has evolved to the point where we have lots of understanding about the issues raised by automation and interfaces. As an example, let’s think about how information is presented. Automobiles now have these bigger and bigger touchscreens. So people get distracted because it’s so hard to find anything on the screen. We should really study these to predict what might happen, not just design them and see what happens.

    It’s a good example of where we don't see people really thinking through the science. You wouldn’t do that in any other field of engineering. You wouldn’t just guess about the strength of materials. You wouldn’t just put something out there and stress it to see what happens.

    I interact with a lot of people in motor vehicle manufacturing who are really professional. But then I see occasions where people are put into human factors positions and they have no education or training at all. They’re more on the engineering side. 

    What makes some of our work challenging is that for some of the problems you need to know about anatomy and physiology; for others you need to know about driving. You need to know about civil engineering and maybe control theory [the branch of engineering that studies how feedback affects a dynamic system]. You need electrical engineering or mechanical engineering. And you need to understand how people behave. That comes from psychology.”
    Paul A. Green, research scientist at the University of Michigan Transportation Research Institute 

    “One of the huge problems with automation is boredom. That’s why those Northwest pilots flew for 45 minutes past Minneapolis [a reference to a 2009 incident in which the distracted crew of Northwest Airlines Flight 188 overflew their Minneapolis destination].

    I’m interested in creating environments in which the human maintains vigilance. There’s a reason that lifeguards at a pool will rotate positions every 15 minutes or so. It helps them keep their attention. [For autonomous vehicles,] I’d like to see people have to maintain lateral control. If they have to steer the car, that will keep them engaged.

    I don’t think that highly automated systems work under conditions of uncertainty. People need to remember that AI doesn’t think. It doesn’t anticipate. It doesn’t imagine. It’s just guessing the next thing based on patterns. It’s linear algebra on steroids.”
    Missy Cummings, former United States Navy pilot, former senior adviser at the National Highway Traffic Safety Administration and director of the Mason Autonomy and Robotics Center at George Mason University 

    “There are not as many high-automation systems in health care as elsewhere, but there are certainly systems such as infusion pumps, radiation therapy machines and vital-sign sensors that can fail. Many failures involve unusual circumstances, missing or erroneous data input, and cases where the system does not realize or alert users that an error has occurred.

    [People tend to get blamed for system failures.] We usually have better information about people than the systems they work in. Humans are salient and can usually be connected to any outcome, good or bad. Systems and situations are less visible, less familiar, and can be complex to understand. Blaming people is therefore cognitively efficient and, in the absence of disconfirming evidence, could be seen as a reasonable starting point. The problem is, as new information is gathered, we do not greatly adjust our initial assessments.

    However, systems thinking has spread in many industries over the years as have systems approaches to accident investigation. Although the initial reaction is still to blame or attribute causality to people rather than systems, the situation has improved, especially as systems have become more complex and automated.”
    Richard J. Holden, professor of health and wellness design at the Indiana University School of Medicine, and author of the paper “People or Systems? To blame is human. The fix is to engineer.”

    “The big change in the last 10 to 15 years is control mechanisms. We can control almost anything pretty reliably. My students can, in one semester, build drones out of available parts that take off, visually identify something, deliver a package and return. Twenty years ago, only NASA could do that.

    I think there are transition periods when a technology is coming in and then it matures: people see all the little loopholes first, then fix them. Going back to World War II, everything was kind of mechanical, so the pilot felt the inputs.

    Then they went to power-assist. You can look at cars with power steering. The trend was initially to keep the mechanical linkages and add some boosters to help with the physical aspects of the control. Then electronic controls came along, so the trend was to disconnect everything, put in some electric motors and let the computer figure out what the driver wants to do.

    The car will be much better in terms of handling. But the risk — and this is what happened in the Boeing case — if you put the driver or the pilot there and something goes wrong, he cannot figure out what is happening. You augment capability but at the same time, the feeling — the feedback, we call it — is lost. 

    If you leave everything to the computer, there is no way to override what's happening. While trying to ease on the controls for the drivers, things are made automatic. But people think the way they think and sometimes they make mistakes.

    I ask my students to do certain design projects with artificial intelligence. It's not there yet. But I can see the mistakes and I can easily say, ‘Okay, if you guys want to fix it, here’s what you need to do with your algorithm.’

    The people who do AI now are more programmers than engineers. But they are probably going to get more engineers so they could do system design.”
    Joseph Katz, aerospace engineer at San Diego State University 

    “One of the things I have focused on throughout my career is a problem with time that I find fascinating. Human beings assume that time is as they experience it, something I call ‘privileged,’ in that the clock has to take one second per second and everybody should follow that.

    But the systems we’re talking about will be going many orders of magnitude faster than that. So there’s a sort of dissonance, a temporal disparity, between us and the system that we’re working with. It will have already done something and moved on for many millions, if not trillions, of cycles by the time we actually see the result of what it’s done. So we will temporally be behind the eight ball, because the system will have already accomplished that task and all that we’ll have done is look at what it did and maybe pick up the pieces. By the time that happens, it will have gone on to the next thing.

    A lot depends on how you design that system … whether you design it with certain inherent delays and pauses that allow a human being to come in at some juncture and make some meaningful decision. That’s difficult because the market wants it to go faster and faster because that means more efficiency and there's greater profit in efficiency. So there's a very strong impetus in the market system to push it to go as fast as possible. There's another impulse on the human side to make it understandable by the human being and still have meaningful human input. Right now we are engaged in that battle.

    The tension between the two causes stress and workload problems, because obviously you don’t know what it’s doing. You see it in cockpit transcripts from very advanced aircraft: you can actually hear pilots saying, ‘Well, what’s it doing?’

    One of the most interesting dimensions of AI systems is that they change on the fly. They adapt and make changes at the same time so, in fact, we are introducing a species that adapts to its environment. We haven't really done that before. The real question is will we ever be able to keep up?”
    Peter Hancock, professor of psychology at the University of Central Florida 

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    Quantum Leaps

    Advances That Matter

    Google’s Gemini 3 sets a new standard for AI. Early reviews of Google’s Gemini 3 suggest it has leapfrogged its rivals. Across major benchmarks — academic reasoning in Humanity’s Last Exam, visual reasoning in ARC-AGI-2, scientific knowledge in GPQA Diamond and others — Gemini 3 scores significantly higher than GPT-5. In some cases, it performs more than twice as well. Early user feedback mirrors those results. One Reddit thread is titled: “Gemini 3 is what GPT-5 should have been. It's mind-blowingly good.” Salesforce CEO Marc Benioff wrote on X: “Holy shit. I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back. The leap is insane.” The reaction inside OpenAI appears to be intense. The Information and The Financial Times reported that OpenAI CEO Sam Altman has declared an internal “code red,” shifting resources back toward the core ChatGPT product to counter advances by Google and Anthropic, whose new Opus 4.5 model also beats GPT-5 on several benchmarks. OpenAI is still the dominant name in AI, virtually synonymous with the technology. But Gemini 3 is a reminder that the moat OpenAI has built may be easier to cross than many assumed.

    A new way to study tissue could help cure poorly understood diseases. A powerful set of emerging lab techniques is giving researchers an unprecedented look inside diseased human tissue and could accelerate precision treatments for often fatal illnesses, New Scientist reports. Known collectively as spatial multiomics, these approaches combine AI-assisted high-resolution imaging, laser dissection of tissue and ultrasensitive molecular profiling to map exactly what individual cells are doing in their precise locations within an organ. (One mind-boggling example of how precise the tools are: One device can detect differences in molecular weight “equivalent to the weight of a jumbo jet versus a jumbo jet with a fly sitting on it.”) By revealing how genes, proteins and signaling pathways behave across a three-dimensional structure of both healthy and malfunctioning cells, the techniques are already proving useful. Researchers at the University of Copenhagen have used them to spot early molecular changes in pancreatic and ovarian cancers, helping identify new drug targets and find biomarkers for earlier detection. Scientists at the Max Planck Institute used spatial multiomics to analyze skin samples from people with toxic epidermal necrolysis — a rare but often fatal drug reaction that causes patients to lose their skin — and pinpointed a specific immune response at the root of the condition. An early study suggests an existing class of drugs can halt the reaction. For now, this technology is expensive: Analysing hundreds of samples can cost millions of dollars. But major hospitals, such as the Mayo Clinic, are building facilities to use the technology, and a cluster of startups is emerging to commercialize it. That means what is now a niche research tool could soon evolve into a powerful new form of diagnosis.

    Lab-grown fats are making tastier fake meat. Many committed carnivores will tell you that fat is where the flavor lives. A startup called Mission Barns is leaning into that idea by cultivating pork fat in bioreactors to create meatballs, bacon and salami that don’t require any animals to be slaughtered, Grist reports. The company takes a small sample of fat from a live pig, feeds the cells nutrients like carbohydrates, amino acids and vitamins, and grows them on sponge-like scaffolds before harvesting the resulting fat. Mission Barns then blends that cultivated fat with plant-based proteins — pea protein for meatballs, wheat for bacon, fava beans for salami — to produce fake meats with what it hopes is a richer, more familiar flavor. Matt Simon, the reporter who tasted them, writes: “My mouth thinks I’m eating a real pork meatball, but my brain knows that it’s fundamentally different.” So far, the company has focused on pork fat — arguably the most flavorful of the animal fats — but in theory the same approach could be applied to beef, chicken, duck, or whatever other animal you fancy. Getting your hands on the products is difficult for now: Mission Barns briefly sold them through an Italian restaurant near Golden Gate Park and a single grocery store in Berkeley ($13.99 for a pack of eight, if you’re asking). As with all alternative proteins, the challenge is finding consumers: Will they be vegetarians curious to branch out, or meat-eaters looking to cut their environmental impact? Mission Barns is betting that if the flavor from fats gets good enough, even skeptics will eventually give in.

    Long Reads

    Magazine and Journal Articles Worth Your Time

    Mind-reading devices can now predict preconscious thoughts: is it time to worry? from Nature
    2,500 words, or about 10 minutes

    For as long as brain-computer interfaces have existed, so have fears that they might — without permission — expose a person’s innermost thoughts. We’re now inching closer to a world in which that seems plausible. In recent years, several patients have received BCI implants in the posterior parietal cortex, a part of the brain involved in reasoning, attention and planning. Studies show that signals recorded there can reveal a person’s intentions milliseconds before they become conscious — such as the decision to play a particular note on a piano. In other experiments, researchers have decoded fragments of internal dialogue from two volunteers, though with very limited vocabulary. At the same time, rapid advances in AI are making it far easier to decode brain signals, and some scientists believe that noninvasive consumer devices could eventually capture similar information from outside the skull. Governments and regulators are beginning to take notice. Unesco issued its first global recommendations on the ethics of neurotechnology this year, and in September three US senators introduced a bill directing the Federal Trade Commission to determine how neurotechnology data should be protected. This piece takes a careful look at how these technologies, and the ethical concerns around them, are suddenly crashing together. 

    Will the next blockbuster drug come from China? from The Financial Times
    2,500 words, or about 10 minutes

    Earlier this year, we described how China had evolved from being a producer of copycat generic drugs into a major force in drug development. Increasingly, the question is no longer whether China will make new medicines, but whether it will make world-leading ones. This Financial Times story suggests that, if not an inevitability, such a future is at the very least highly likely. China’s biggest advantage is perhaps its blistering speed. Its biotech firms can move drugs into testing two to three times faster than in the West, helped by a huge and highly skilled workforce, less burdensome regulation and a population keen to participate in clinical trials. But it’s not a home run just yet: Some of the infrastructure needed to support true global blockbusters — international regulatory expertise, multinational trial capabilities and worldwide commercial operations — is still developing. For now, many Chinese biotech firms are partnering with Western companies to bring cutting-edge discoveries to market. But this is likely a transitional phase. As China builds out the machinery for global trials and commercialization, it may only be a matter of time before the next blockbuster drug carries a Chinese label.

    The Progress Paradox, from Noema
    4,500 words, or about 18 minutes

    The most common trope about innovation goes like this: Scrappy startups and ruthless competition drive technological progress, while monopolies lumber on, mired in red tape and incapable of real innovation. This essay argues that this is, for the most part, an American myth. Matt Prewitt, a writer and former antitrust lawyer, contends that the real engine of innovation has historically been monopoly power — or the pursuit of it — operating alongside state-managed markets shaped by tools like direct public investment and strong intellectual property rights. One example: when Bell’s early telecommunications patents expired in 1894, competition surged — and technological progress slowed. Things only improved when AT&T began buying up smaller operators, gaining an effective monopoly over long-distance lines. Once courts effectively blessed that monopoly, AT&T went all-in and built the first coast-to-coast network. Prewitt sees similar dynamics in aerospace, social networks and even AI. From here, he argues, we face several paths: tolerate monopolies and the innovation they sometimes bring; accept slower technological progress; or build a more muscular state that deliberately steers innovation toward the public good, with new rules around IP, data, antitrust and even different models of ownership. These ideas challenge conventional wisdom about what an innovation economy should look like, but they’re well worth exploring.
