Dear Aventine Readers,
Driverless cars have made big progress in the past five years. If you live in certain parts of San Francisco, Los Angeles, Phoenix, Austin or Atlanta, you may be used to seeing Waymos scooting around with no one in the driver’s seat. But where are they in Chicago or New York? Boston? In this issue we look into what’s holding driverless cars back and when they might be coming to a city near you.
Plus: an autonomous robot surgeon, AI helping to manage California’s grid, adaptive brain implants for Parkinson’s and the magazine stories worth your time this month.
Thanks so much for reading. We will not be publishing in mid-August, so we’ll see you again in September.
Danielle Mattoon
Executive Director, Aventine
Where Are All the Driverless Cars?
Tesla’s long-awaited RoboTaxi launch got off to a bumpy start in late June.
Within a small, tightly restricted area of Austin, Texas, where only a small number of carefully selected people have been able to try the company’s new ride service, Tesla’s fully autonomous cars have already racked up a string of worrisome mishaps. One vehicle drifted into a lane of oncoming traffic. Another rolled into a parked car. A third dropped its passenger off in the middle of an intersection. There have also been reports of unexplained speeding and abrupt braking.
None of these incidents put humans in danger. But they serve as a timely reminder: Getting fleets of driverless cars to operate safely on real city streets is incredibly difficult. While Waymo, the front-runner in autonomous driving, has driven more than 71 million autonomous miles across four US cities with only minor crashes to report (according to the company), more serious accidents have forced competitors to pull back. Uber suspended its self-driving program after a fatal crash in 2018; GM’s Cruise shuttered its robotaxi project in 2023 after one of its vehicles hit a pedestrian and dragged the person 20 feet. Incidents like these erode confidence not just in individual companies, but in the overall technology.
While driverless cars have seen striking technical advances in the past five years, experts agree that there are still significant technical and techno-economic challenges to solve. We asked leading autonomous vehicle researchers and engineers what’s standing between us and a future in which driverless cars can safely operate anywhere.
Edge cases remain a nightmare
The biggest technical challenge facing autonomous vehicles is the long tail of edge or corner cases, said Darcy Bullock, a professor and director of the Joint Transportation Research Program at Purdue University. “Those things as a [human] driver that cause you stress? [That] is really what stresses the autonomous systems,” he said. Bad weather, kids running out into the street, erratic drivers — there are all kinds of tricky or unexpected situations that can catch humans and cars off guard.
Autonomous vehicles use AI to analyze up to three gigabits of data per second, which allows the car to perceive its surroundings, predict the behavior of other cars and plan a safe path. This works well in predictable environments, but currently the AI in autonomous vehicles can’t generalize well enough to deal with situations that aren’t in its training data or hard-coded into its software as rules, said Saber Fallah, a professor and director of the Connected Autonomous Vehicle Research Lab at the University of Surrey in the UK. “We don't have the level of AI to enable cars to make the right decisions,” he said. Companies have compensated for this with safety drivers — either behind the wheel, or supervising remotely — ready to help if things go wrong.
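To make that division of labor concrete, here is a deliberately tiny sketch of the modular perceive, predict and plan loop described above. Everything in it, from the object types to the two-second following rule, is an illustrative assumption rather than any company’s actual software.

```python
# A minimal, runnable sketch of the modular perceive -> predict -> plan loop.
# All types, thresholds and the planning rule are simplified assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedObject:
    x: float      # meters ahead of the ego vehicle
    y: float      # meters left (+) or right (-) of the ego vehicle
    vx: float     # speed along x in meters/second (negative = approaching)
    kind: str     # "car", "pedestrian", ...

def perceive(raw_detections: List[dict]) -> List[TrackedObject]:
    """Stand-in for sensor fusion: turn raw detections into tracked objects."""
    return [TrackedObject(d["x"], d["y"], d["vx"], d["kind"]) for d in raw_detections]

def predict(obj: TrackedObject, horizon_s: float = 3.0) -> Tuple[float, float]:
    """Constant-velocity forecast of where the object will be in horizon_s seconds."""
    return (obj.x + obj.vx * horizon_s, obj.y)

def plan(objects: List[TrackedObject], ego_speed: float) -> dict:
    """Pick a target speed: slow down if anything is predicted to end up in our lane."""
    for obj in objects:
        future_x, future_y = predict(obj)
        in_our_lane = abs(future_y) < 1.5          # assumed lane half-width in meters
        too_close = future_x < ego_speed * 2.0     # a simple two-second-gap rule
        if in_our_lane and too_close:
            return {"target_speed": max(0.0, ego_speed - 5.0), "reason": f"yielding to {obj.kind}"}
    return {"target_speed": ego_speed, "reason": "clear"}

# One tick of the loop; a real stack runs this many times per second.
detections = [{"x": 25.0, "y": 0.4, "vx": -6.0, "kind": "pedestrian"}]
print(plan(perceive(detections), ego_speed=12.0))
```

Edge cases are painful precisely because no set of hand-tuned rules like the ones above can anticipate everything a city street will throw at a car.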
Without human-level AI that can deal with the ambiguity of roads, solving for edge cases is currently a game of whack-a-mole. Increasingly, companies like Waymo, Tesla and the UK’s Wayve are controlling vehicles using end-to-end neural networks: AI systems trained on troves of examples of human driving to translate sensor data directly into steering and throttle commands. But these systems must be trained on additional data to manage every new edge case, explained Jeff Schneider, a research professor at Carnegie Mellon University’s Robotics Institute and former engineering lead at Uber’s Advanced Technologies Group. The fallback software that many autonomous vehicles use, meanwhile, is often hard-coded with rules to help the car behave properly in certain unusual situations, and engineers incrementally adjust those rules to deal with new problems.
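For contrast, a toy version of the end-to-end approach might look like the sketch below: a single network, trained by imitation on logged human driving, mapping camera pixels straight to steering and throttle. The architecture, sizes and data are arbitrary placeholders, not any of these companies’ actual models.

```python
# A toy "end-to-end" driving policy trained by behavior cloning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # tiny convolutional image encoder
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)                        # outputs: [steering, throttle]

    def forward(self, image):
        return self.head(self.encoder(image))

policy = DrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One imitation-learning step on a (fake) batch of logged human driving.
images = torch.randn(8, 3, 96, 96)                          # stand-in camera frames
human_actions = torch.randn(8, 2)                           # stand-in recorded steering/throttle
optimizer.zero_grad()
loss = F.mse_loss(policy(images), human_actions)            # imitate the human commands
loss.backward()
optimizer.step()
```

The catch Schneider describes follows directly from this setup: the network only learns behaviors that appear in its training batches, so every new edge case means collecting and training on more data.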
Some researchers hope ever larger “vision-language-action” models — giant neural networks modeled after the architecture behind ChatGPT — will give end-to-end AVs the flexibility to reason through almost any situation. But for now, Fallah points out, the bigger these models get, the more computationally demanding they become, and their responses can be too slow to use in real-world settings. He also said that they are prone to giving inconsistent responses to near-identical scenarios.
The economics still don’t work
The sooner AI can handle more ambiguity, the sooner autonomous cars will make commercial sense. “From a business model point of view, the only thing that matters is, when can you pull the safety driver out,” said Schneider.
Waymo, which has offered commercial rides since 2018, is notable for running vehicles without safety drivers in Phoenix, Los Angeles and the San Francisco Bay Area. But even here, every vehicle is monitored by what the company calls “fleet response agents” who are ready to intervene if the car gets confused. And every vehicle operates inside a carefully mapped “geo-fence” — a boundary in which the cars are known to operate well — which is another limit on how commercially viable they can be.
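The geofence itself is conceptually simple: it amounts to a polygon that requested pickups and drop-offs must fall inside, as in the sketch below. The polygon and coordinates are made up, and real deployments involve far more than this boundary check.

```python
# Illustrative geofence check: a standard ray-casting point-in-polygon test
# over simplified (longitude, latitude) pairs. The service area is invented.
def inside_geofence(point, polygon):
    """Return True if the (x, y) point lies inside the polygon (list of vertices)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)                     # does the edge span the point's latitude?
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

service_area = [(-112.2, 33.3), (-111.8, 33.3), (-111.8, 33.6), (-112.2, 33.6)]  # made-up box
print(inside_geofence((-112.0, 33.45), service_area))      # True: pickup allowed
print(inside_geofence((-111.5, 33.45), service_area))      # False: outside the fence
```

Every request that falls outside that polygon is business the company cannot serve, which is why the size of the fence matters commercially as well as technically.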
There’s another financial headache, too: the cars themselves. Waymo’s current vehicles are $73,000 Jaguar I-Paces kitted out with as much as $100,000 of hardware, including 29 cameras, five lidar sensors and six radars along with onboard computers. Despite years of R&D, lidar in particular remains stubbornly expensive, as each sensor can cost thousands of dollars. While overengineered luxury cars may be fine for prestige pilot projects, said Mark Fagan, a lecturer in public policy at the Harvard Kennedy School who leads the school’s Autonomous Vehicles Policy Initiative, “the economics don't work like that forever.”
Uber’s president and chief operating officer recently told the Financial Times that launching a commercial-scale robotaxi fleet in a city the size of London or New York requires “billions of dollars” of capital expenditure, plus operating costs to maintain the fleet of vehicles.
Saving money on the hardware is an important part of making these vehicles commercially viable. With this in mind, Waymo’s next generation of cars, based on Chinese-made Geely Zeekr vehicles, uses just 13 cameras and four lidar sensors. Tesla, in contrast, has famously avoided lidar and bet on cameras and AI alone — a strategy that keeps costs low and would theoretically let it upgrade millions of existing cars for autonomy via software. That makes commercial sense, but also means that the autonomy challenge Tesla must solve is harder, especially in complex environments. There is no consensus on which will be the best long-term approach.
Lots of cities are leery of AVs
Driving norms, road layouts and even basic traffic laws vary wildly among cities, and each expansion into a new city demands thousands of hours of new data, fresh rounds of testing and — perhaps most daunting — permission from local authorities.
Some states, like Arizona and Georgia, have embraced AVs in hopes of becoming industry hubs. Others, including New York and Massachusetts, have yet to allow commercial AVs on their roads, citing concerns over safety and the livelihoods of existing taxi and delivery drivers. Fagan expects this trend to continue, and perhaps even become more polarized, with some cities imposing increasingly rigorous requirements on autonomous driving and others rolling out the red carpet for AVs.
Meanwhile, AV companies face an uphill battle in selling their technology as safe. Even if AVs are statistically safer than human drivers, there’s almost no tolerance for error. “People have zero-risk bias,” said Bullock, referring to the way humans perceive human and machine error differently. Overcoming that resistance may require autonomous cars to grind out many millions of miles of incident-free driving, until the safety statistics around the technology resemble those of airline travel.
The road ahead
Fears about AVs seem to evaporate quickly when people use the technology: A 2023 survey by JD Power showed that respondents who had ridden in self-driving cars in Phoenix or San Francisco gave the technology a confidence score of 67, while those who hadn’t been exposed to them gave it a 37.
In part for this reason, companies are pushing hard to expand their testing locations: Waymo is beginning to test its vehicles in New York and Boston, while Tesla is already planning to expand its tests to San Francisco.
Yet Tesla’s bumpy start in Austin is a reminder that expansion is not as straightforward as these companies may have once hoped. A little under a decade ago, many companies promised that fully autonomous cars would be on the streets by 2021. Four years later, fulfilling that promise will still require improvements in AI, smarter cost controls and a friendlier regulatory climate.
“It's going to be a much slower adoption than we ever expected 10 years ago,” said Bullock.
Listen To Our Podcast
Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.
Advances That Matter
Robotic surgeons are successfully removing organs from pigs. The idea of a robot cutting out your appendix might sound unsettling, but that scenario is getting closer to reality. In a series of experiments, a robotic surgeon developed at Johns Hopkins University in Baltimore autonomously removed gallbladders from eight pig cadavers with a 100 percent success rate. The robot, described in Science Robotics, is based on the da Vinci surgical system, hardware that has been in use since it was approved by the FDA in 2000. But unlike the regular da Vinci system, which is controlled by a human surgeon, this one is autonomous and uses a network architecture similar to that of large language models, adapted to learn from images and video. To train the system, researchers fed it 17 hours of video showing 16,000 individual motions performed by human surgeons on pig cadavers. Two trained AI systems then work together: One analyzes a feed from a surgical camera and delivers step-by-step natural-language instructions; the other converts those instructions into robotic actions. Each surgery involved 17 discrete tasks, from attaching clips to making incisions. The robot was even able to correct its own mistakes — typically six times per operation — without human help. The robot’s technique was smoother than an expert surgeon’s, but it was also much slower. The promise is that robots trained on procedures performed by the world’s best surgeons could someday provide expert care in regions far beyond the geographic reach of today’s specialists. Next up, New Scientist reports, the researchers plan to test their system on live animals, where complications like bleeding and breathing motion will add complexity. If that goes well, the researchers told The Guardian, human trials could begin within the next decade.
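For a feel of the two-model division of labor described above, here is a toy sketch: a high-level "planner" turns what the camera sees into a plain-language step, and a low-level "controller" turns that step into motion commands. The lookup tables stand in for the learned networks, and every value is invented for illustration.

```python
# Toy illustration of a hierarchical surgical pipeline: language-level planning
# on top, motion-level control underneath. All mappings below are made up.
from typing import List

def high_level_planner(camera_observation: str) -> str:
    """Stand-in for the vision-language model: map what the camera sees to the next step."""
    next_step = {
        "gallbladder exposed": "place clip on cystic duct",
        "clip placed": "cut between clips",
    }
    return next_step.get(camera_observation, "pause and reassess")

def low_level_controller(instruction: str) -> List[float]:
    """Stand-in for the action model: map a plain-language step to joint motions."""
    motions = {
        "place clip on cystic duct": [0.12, -0.03, 0.05],   # made-up joint deltas
        "cut between clips":         [0.00,  0.08, -0.02],
        "pause and reassess":        [0.00,  0.00,  0.00],
    }
    return motions[instruction]

for observation in ["gallbladder exposed", "clip placed", "bleeding detected"]:
    step = high_level_planner(observation)
    print(observation, "->", step, "->", low_level_controller(step))
```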
California will use AI to manage power outages. The state’s main grid operator will be the first in the US to test how generative AI can automate the time-consuming labor required to keep the lights on. The California Independent System Operator (CAISO), which oversees 80 percent of the state’s grid, is piloting software called Genie, built by the energy tech firm OATI, MIT Technology Review reports. Currently, CAISO engineers search through hundreds of outage reports for keywords that signal ongoing maintenance problems and load that information into software models that predict how those outages might affect electricity flow. Genie uses generative AI to analyze those reports in real time and could eventually operate autonomously, for example proposing grid adjustments when problems arise to keep power flowing efficiently. If the trial is successful, CAISO may expand its use of AI-driven automation to other aspects of the grid, it told MIT Technology Review. The magazine also reports that ERCOT, Texas’s grid operator, is exploring the use of a similar technology. Earlier this year, the International Energy Agency predicted AI could help reduce emissions by improving energy management. Projects like CAISO’s are an early test of that potential.
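The manual triage that Genie is meant to replace boils down to scanning free-text outage reports for terms that signal trouble and flagging them for follow-up. A toy version of that keyword filter (the keywords and reports below are invented) gives a sense of the routine work being automated.

```python
# Toy keyword triage over free-text outage reports; terms and reports are invented.
KEYWORDS = {"transformer", "derate", "maintenance", "forced outage"}

reports = [
    "Unit 4 forced outage after transformer fault; return unknown.",
    "Routine inspection completed, no issues found.",
    "Line 230kV derate expected during afternoon maintenance window.",
]

# Flag any report containing a keyword so an engineer (or, eventually, an AI
# system) can feed it into the grid-flow models.
flagged = [r for r in reports if any(k in r.lower() for k in KEYWORDS)]
for r in flagged:
    print("needs review:", r)
```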
Adaptive brain implants promise fine-tuned Parkinson’s relief. Deep-brain stimulation (DBS), which uses electrical pulses to regulate abnormal brain activity, has been used for more than 20 years to help people with Parkinson’s disease control symptoms like tremors. But traditional DBS delivers continuous stimulation, which can sometimes cause side effects, including speech problems, involuntary movements or impulse-control issues such as gambling. Now, reports Nature, researchers are developing adaptive systems that adjust stimulation in real time based on brain signals. The technology monitors brain waves in regions controlling movement and automatically tunes the device’s output to match the brain’s changing needs throughout the day. A 68-person trial, whose results are not yet published but whose details have been shared with reporters and both US and European regulators, showed that the adaptive approach reduced symptoms and drug reliance while minimizing side effects. The findings have proven compelling enough for regulators in Europe and the US to approve the technology, and Medtronic, a medical device manufacturer, says that more than 40,000 of its DBS devices can be upgraded to the new adaptive mode through a software update. While more research is needed to confirm the long-term benefits of the approach, experts believe the technique could be extended to conditions like Tourette’s, OCD and even depression — covered in more detail in this recent IEEE Spectrum story — potentially lending a new level of precision to brain therapies.
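Conceptually, the adaptive mode is a feedback loop: estimate how strong the symptom-linked brain activity is, then nudge stimulation up or down to match. The sketch below illustrates that loop using beta-band power, a commonly cited Parkinson’s biomarker; the thresholds, step sizes and units are illustrative assumptions, not Medtronic’s actual algorithm.

```python
# Bare-bones closed-loop stimulation sketch; biomarker, target and step sizes are assumptions.
import numpy as np

def band_power(signal, fs, low=13.0, high=30.0):
    """Power in the beta band (13-30 Hz) of a recorded brain signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum[(freqs >= low) & (freqs <= high)].sum()

def adapt_stimulation(current_ma, beta, target=1e4, step=0.1, lo=0.0, hi=5.0):
    """Raise stimulation when beta power exceeds the target, lower it when below."""
    current_ma += step if beta > target else -step
    return float(np.clip(current_ma, lo, hi))

# Simulate one update: one second of a noisy signal with a strong 20 Hz (beta) component.
fs = 250                                               # samples per second
t = np.arange(fs) / fs
lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(fs)
stim = adapt_stimulation(current_ma=2.0, beta=band_power(lfp, fs))
print(f"new stimulation amplitude: {stim:.1f} mA")
```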
Magazine and Journal Articles Worthy of Your Time
How Tether became money-launderers’ dream currency, from 1843 Magazine
4,500 words, or about 18 minutes
The stablecoin Tether, a cryptocurrency pegged to the US dollar, is a money-making machine. The company takes dollars from customers, issues digital tokens in return, and invests those real dollars, mostly in US Treasuries, for its own profit. None of the investment gains are returned to users as interest. The result? Staggering profits: In 2024, Tether reportedly made more than $13 billion — more than double BlackRock’s — on a $155 billion asset base, with a staff of just 150. But there’s a dark side to this success. The token is beloved by criminals because transactions made using Tether, unlike those made with Bitcoin, are extremely hard to trace, allowing users to bypass traditional financial safeguards. One operation, for instance, laundered millions in cash from European drug gangs by funneling money to sanctioned Russians and settling the resulting debts using Tether. Authorities have made little progress in stopping this behavior and, as this story explains, Tether’s claims that its scale now supports the US dollar’s global dominance may mean the situation isn’t likely to change any time soon.
Mapping the Unmapped, from Grist
5,600 words, or about 23 minutes
Tools like Google Maps work seamlessly in much of the developed world. But if you’ve ever ventured into remote parts of South America or Southeast Asia, you know that coverage gets patchy fast. Yet accurate maps are crucial in these regions, not to help tourists find an iced coffee but to deploy resources, respond to disasters and support climate resilience. This story profiles the Humanitarian OpenStreetMap Team (HOT), a group tackling the lack of commercial mapping incentives in these regions by filling in the cartographic blanks using OpenStreetMap, a kind of Wikipedia for maps. HOT relies on 340,000 global volunteers who edit and update OpenStreetMap using aerial imagery and local, on-the-ground information. By blending that open-source data with local knowledge, HOT is building detailed maps that are essential for first responders, aid workers and communities confronting climate change.
I let an AI agent run my day, from New Scientist
2,500 words, or about 10 minutes
AI agents promise to automate everything from HR tasks to invoicing, but what’s it actually like to rely on them in daily life? This story is an attempt to answer that question, as the author divvies up tasks relating to his email, finances and even dinner orders between two AI agents: Operator, from OpenAI, and Manus, from the Chinese startup Butterfly Effect. The results are mixed. The agents sometimes excel, compiling code or ordering kung pao chicken successfully. But they also struggle with what we humans may think of as basic tasks, sending incorrect invoices and writing cringeworthy emails that go out without human review. The story raises some tough questions. How comfortable are you handing a bot your credit card details so it can order your dinner? And how often should an agent check in with you to ensure it’s doing the right thing before its interruptions outweigh any productivity gains? There’s one bigger catch, too: Both of these full-featured agents currently cost thousands of dollars per year to use. For now, at least, it looks like it will be a while before an agent is running all of your personal life for you.