Dear Aventine Readers,
We've written a few times about AI agents — their early days, how businesses should use them and, more recently, what the viral agent OpenClaw can do. Today we're looking at what an agent-friendly internet might look like and why the web is being rebuilt to make that happen.
The internet was built with code. AI, on the other hand, communicates largely through language. In order for agents to do what they promise, the two systems need to be able to talk to each other. Some of the biggest players in AI are building infrastructure to allow that to happen. Much of this work is being done with the expectation that these new standards will be universal, like highways anyone can use. But there will be exceptions. Systems could be built — like gated neighborhoods — that favor some relationships over others, creating walled gardens of access. Either way, the internet as we know it will be refashioned by an underlying architecture that is almost wholly new. Read on to learn more about what this looks like in practice, and watch this space for future developments.
Also this week: a DeepMind AI that promises to predict how genetic mutations contribute to disease, a pioneering surgery that preserves fertility for cancer survivors, a giant heat battery aimed at heavy industry's emissions, and our latest picks of magazine and journal articles worth your time.
Thanks for reading!
Danielle Mattoon
Executive Director, Aventine
AI Requires a New Internet. Here’s How It’s Being Built.
In 1999, Tim Berners-Lee, the architect of the World Wide Web, laid out a vision for the future of his invention. "I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web — the content, links, and transactions between people and computers," he wrote in his book, “Weaving the Web.” “The day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines.”
Almost three decades later, we are getting our first look at what that might look like in the form of AI agents — autonomous software systems powered by large language models that can reason, make decisions and act on our behalf, often by using other tools, databases and agents.
And while you may not knowingly be using an agent, they are increasingly working in the background of your day-to-day transactions. According to Salesforce, agents were involved in 20 percent of global online retail sales over the recent holiday shopping season; Adobe reports that the use of such systems increased by 693 percent from 2024 to 2025.
But AI agents promise to change more than the act of shopping. Already, people are experimenting with open source agents to undertake research projects at work, analyze bank statements to facilitate tax filing and coordinate travel plans on crowded calendars. Soon, they could be baked into systems that anyone can make use of on a daily basis. Getting to that place, however, will require solving some difficult technical challenges: How will AI agents find what they need to get a job done? How will they communicate with one another to complete a complex task? How will they obtain and share sensitive data like calendar access or payment details? And how will they do all of this safely, without creating serious cybersecurity risks?
"At a very high level, it's [about] taking these ... artificial intelligence systems and trying to make them useful, which means effectively setting them loose on real data and asking them to do real, economically valuable tasks," said Mark Collier, general manager of AI and infrastructure at the Linux Foundation, a nonprofit that supports open source software development. That, he added, means that "everything comes back to trust."
Trust obviously won’t come from blind faith; it’s going to need to be built into the system. And the way it’s being built in is through an entirely new set of internet protocols — the rules that define how data is formatted, transferred and used — layered on top of what’s already there. Protocols have been around since the internet was built; any time technology advances in a way that changes the way we use the internet — to make it more secure for commerce, say, or to enable video to stream over the web — we’ve added new ones. What’s required now is a set of protocols that can enable modern AI, which operates using natural language, to interact with an internet built on computer code.
The building of protocols has tended to be a collaborative effort among engineers who work at otherwise competing companies. That’s because a protocol is valuable only if lots of people use it, and competing companies generally all benefit from widespread adoption. Think of email, which is an interoperable system built around a protocol called SMTP, or Simple Mail Transfer Protocol — a system that still allows the likes of Microsoft and Google to compete on providing email software. In that spirit, protocols for AI and AI agents are being built by some of the biggest AI companies, including Anthropic, OpenAI and Google. Together they are building both the fundamental architecture of an internet for agents — let’s call them the highways of the new internet that almost everyone will need to use — as well as protocols that address more specific needs — let’s call them the off-ramps and roundabouts — that some but not everyone will want.
One of the major elements of the emergent internet is likely to be the Model Context Protocol (MCP), introduced by Anthropic in late 2024, which provides a kind of universal language to help agents navigate the internet. Rather than forcing every AI system to figure out how to talk to everything else on the web from first principles, MCP creates a way for agents to “connect into these real world systems,” like software tools and databases, said Collier. MCP lets an AI agent — regardless of whether it was built by OpenAI, Google, Anthropic or another company — ask an airline’s MCP server for flight details in a standardized format. And any agent will be able to understand and act on the response. Organizations can host their own MCP servers, making it possible for agents to access different data or tools. A flight-booking agent could connect to one server to check your calendar for availability, another to get an airline’s flight schedules, and yet another to read booking confirmations in your inbox.
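To make that concrete, here is a minimal sketch of what such an exchange could look like on the wire, assuming a hypothetical airline MCP server. MCP messages are built on JSON-RPC; the "search_flights" tool, its arguments and the response contents below are invented for illustration, not taken from any real airline's API.

```python
import json

# A hypothetical MCP "tools/call" request an agent might send to an airline's
# MCP server. MCP messages are JSON-RPC 2.0; the tool name and arguments here
# ("search_flights", origin/destination/date) are illustrative, not a real API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_flights",
        "arguments": {"origin": "JFK", "destination": "LHR", "date": "2026-03-14"},
    },
}

# A simplified response: the server returns the tool's output as structured
# content that any MCP-aware agent can parse, regardless of who built it.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {
                "type": "text",
                "text": json.dumps(
                    {"flights": [{"number": "BA112", "departs": "2026-03-14T19:30"}]}
                ),
            }
        ]
    },
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```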
For tasks requiring that agents work together, a complementary system known as A2A, originally developed by Google, enables agents to exchange what one might think of as digital calling cards, which allow them to identify what specific agents do and determine which ones should perform what tasks. For example, a trip-planning agent might coordinate with accommodation and car rental agents to pull together a complete vacation.
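To picture those calling cards: in A2A, each agent publishes a small machine-readable description of itself, commonly called an agent card, that other agents can fetch to learn what it does and how to reach it. The sketch below shows what a card for a hypothetical car rental agent might contain; the endpoint, skills and field values are assumptions made for illustration, and exact field names vary with the version of the specification.

```python
import json

# A sketch of an A2A-style "agent card" for a hypothetical car rental agent.
# Real cards are JSON documents served by the agent itself; the fields below
# follow the general shape of the spec, but the endpoint, skills and values
# are invented for illustration.
agent_card = {
    "name": "CarRentalAgent",
    "description": "Finds and books rental cars for a given city and date range.",
    "url": "https://rentals.example.com/a2a",   # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "search_cars",
            "name": "Search rental cars",
            "description": "Return available cars for a city and date range.",
        },
        {
            "id": "book_car",
            "name": "Book a rental car",
            "description": "Reserve a specific car and return a confirmation.",
        },
    ],
}

# A trip-planning agent could read this card, decide this is the right agent
# for the job, and then delegate tasks to it over the A2A protocol.
print(json.dumps(agent_card, indent=2))
```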
These sorts of protocols are the "building blocks" of an agentic web, said Jon Stahl, a director of product management at Salesforce focused on commerce. And we are starting to see signs that they are becoming formalized. Both MCP and A2A now sit under the stewardship of the Linux Foundation so that they can be developed as vendor neutral, open source standards. Companies including Amazon Web Services, Microsoft, OpenAI, Cisco and IBM are working on these and other open source agent protocol projects to ensure that “agentic AI evolves transparently and collaboratively,” according to the Linux Foundation website.
Beyond the basics
While MCP and A2A enable foundational, universal behaviors for agents such as communication and coordination, other protocols are emerging to facilitate more specific activities. You can think of these like Legos that build new capabilities on top of the fundamental structure beneath, explained Rao Surapaneni, a vice president and general manager at Google Cloud who leads teams building AI tools for businesses.
One example: agent-to-agent payments. Coinbase’s x402 protocol enables microtransactions between agents so that one agent can pay another to get a task done. Another: the ways humans and agents communicate. The Agent-User Interaction Protocol (AG-UI) enables real-time interactions between humans and agents, allowing a user to check progress or approve steps without derailing the process.
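To give a sense of how such a payment might work, here is a rough sketch of an x402-style exchange, which hinges on the long-dormant HTTP 402 "Payment Required" status code. The endpoint is hypothetical, and the header and field names are simplified assumptions rather than a complete rendering of the specification.

```python
import requests

# A simplified sketch of an x402-style micropayment flow between two agents.
# The URL is hypothetical; the header and field names are assumptions made
# for illustration, not a full implementation of the protocol.

def fetch_paid_resource(url: str) -> str:
    # 1. The buying agent requests the resource with no payment attached.
    first = requests.get(url)
    if first.status_code != 402:
        return first.text

    # 2. The selling agent answers with HTTP 402 Payment Required and a
    #    machine-readable description of what it wants to be paid.
    requirements = first.json()
    print("Payment required:", requirements)

    # 3. The buying agent constructs a signed payment (in practice, typically
    #    a stablecoin transfer authorization) and retries with it in a header.
    payment_proof = "base64-encoded-signed-payment"   # placeholder, not real
    second = requests.get(url, headers={"X-PAYMENT": payment_proof})

    # 4. If the payment verifies, the seller returns the resource.
    return second.text

# Example (hypothetical endpoint):
# fetch_paid_resource("https://data-agent.example.com/report")
```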
Some of these protocols will go on to be universal, especially if the interoperability they enable is seen as beneficial by all the companies building agents. In other cases, companies could create a competitive advantage by having their agents favor certain partnerships. In commerce, for example, OpenAI and Google have launched two different protocols that let agents make purchases on behalf of a user: the Agentic Commerce Protocol (ACP) and the Universal Commerce Protocol (UCP), respectively. For now, ACP favors Stripe, while UCP is more vendor agnostic. Neither protocol currently enables buying more than a single item at a time, though both OpenAI and Google plan to create more advanced protocols that will.
Stahl said that the creation of multiple tools to solve the same problem is indicative of what he expects will be a "Cambrian explosion" in protocols across the AI sector in the coming months and years. Surapaneni said we should expect “more and more verticalized building blocks” to pop up for use across all sorts of sectors: finance, healthcare, law, insurance and so on. Over time, though, he predicts, they will not all survive. Instead, they will "winnow and converge as we discover what the right answers are."
Papi Menon, chief product officer at Outshift, a division of Cisco focused on research and innovation, pointed out that if companies end up using in-house standards for some applications, users might find themselves locked into specific ecosystems. The long-term reality may be a hybrid system, where agents can work across some boundaries more easily than others.
What’s Next
An internet for agents will bring new challenges, including — very high on the list — security, because an internet dominated by system-to-system communication introduces new opportunities for bad actors. Menon pointed out that just as scammers can spin up fake websites, bad actors could run rogue MCP servers, which could feed agents misleading information, say, or trick them into performing unauthorized actions. New safeguards will be needed: Some agents might be given specific permissions if they’re dealing with highly sensitive data, said Surapaneni, or specialized filters could detect suspicious behavior and stop what a bot is doing.
Because the large language models that power agents are inherently unpredictable, other tools could keep a close eye on agents’ behavior to ensure they’re not accidentally going rogue, said Surapaneni. “There's a lot of effort that's going into making sure we can observe, evaluate and validate [their reasoning],” he said.
Menon predicts that these risks could mean agents are adopted more quickly by individuals than by large companies. A single user might be prepared to let an agent make use of their data; someone running a business might think twice before allowing such systems to handle all their customer details. "Now you're talking about compliance, governance, security, all of those things," he said. "There's an increasing realization that it is going to be a much longer timeline for getting the kind of enterprise adoption and scale for agentic applications than we had previously very blithely assumed." He invoked recent comments from OpenAI co-founder Andrej Karpathy, who thinks it might take a decade for companies to fully deploy agents as they butt up against exactly these sorts of problems.
The long-term impact on users of the web, meanwhile, is uncertain. Menon thinks that agents will "fundamentally change not just the web [but] our relationships with machines." Others are more cautious: "I don't think agents are going to replace the web," said Stahl. "I think they're going to extend it, [and] layer on top of it in interesting ways."
What's clearer is that the way these protocols develop will be important. An open agent ecosystem could help democratize access to sophisticated AI capabilities. Fragmentation, however, would keep different companies' systems from working together and could concentrate power among a few large platforms, leaving them to determine which businesses can participate in the future of the internet. But openness doesn't preclude the ability to make money: "The inventor of HTTP didn't monetize it, but [the protocol] became a connector for trillions of dollars of industry," said Collier.
Listen To Our Podcast
Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.
Advances That Matter
AlphaGenome promises to unlock the secrets of genetic disease. Can it? A new AI tool developed by researchers at Google DeepMind claims to predict how genetic mutations contribute to disease, but whether it can do so remains to be seen. Described in Nature, AlphaGenome is trained on vast public databases of human and mouse genetics and can analyze up to one million letters of DNA at a time. From that data, it can generate predictions about which stretches of genetic code are most important for the development of specific tissues, which mutations are most strongly associated with disease, how gene regulation is disrupted by mutations and which cell types are most affected by genetic errors — insights that could, in theory, help researchers better understand disease and identify new drug targets. AlphaGenome invites comparisons with AlphaFold, DeepMind’s protein-structure AI model that transformed structural biology and helped earn its creators a Nobel Prize. But expectations should be tempered. “It’s far from perfect,” said Ben Lehner, head of generative and synthetic genomics at the Wellcome Sanger Institute in the UK. Critics point to several limitations. First, the datasets AlphaGenome is trained on include many genetic signals that look meaningful in theory but still require extensive experimental validation. Second, it remains firmly an academic research tool, not something clinicians can use to scan a patient’s genome and diagnose disease. And third, there is still unease among some researchers about trusting AI systems in genomics at all. As Mark Gerstein, a computational biologist at Yale, put it in an interview with The New York Times, AlphaGenome will likely prove useful — but it is “not going to win the Nobel Prize.”
A pioneering surgery can help cancer survivors have babies. Cancer treatments such as radiation and chemotherapy can permanently damage the reproductive system. But a surgical approach that temporarily relocates a woman’s reproductive organs appears to protect fertility in some patients, giving survivors of pelvic cancers a chance to have children. As MIT Technology Review reports, the procedure — pioneered by Brazilian gynecologic oncologist Reitan Ribeiro — involves surgically moving the uterus, fallopian tubes, and ovaries from the pelvis to the upper abdomen, close to the ribs. After patients recover from the operation, they undergo cancer treatment. By physically moving the organs out of harm’s way, doctors can shield them from localized therapies used to treat, for example, cancers of the bowel or colon. The first successful procedure was performed in Brazil in 2017. In January, doctors at Sion Hospital in Switzerland announced the birth of Europe’s first child conceived after the surgery. Not every case has been described in scientific journals, but Ribeiro estimates the technique may have been used around 40 times worldwide, including in the US, Peru, Israel, India and Russia. The surgery is complex and not without risk. But for patients whose cancer treatment might otherwise end their chances of pregnancy, it offers hope.
A huge heat battery could help industry curb its emissions. It doesn’t look like much. But a new industrial heat battery built by MIT spinout Electrified Thermal Solutions, which roughly resembles a small shipping container, could help reduce the carbon footprint of some of the world’s most emissions-intensive industries, including cement, steel, chemicals and glass. Inside the box is one of the lowest-tech solutions imaginable: stacks of firebricks. By running electric current directly through them, the bricks can be heated to around 3,270°F (1,800°C) and used to store large amounts of energy as heat. As Canary Media reports, the company is now testing its first commercial-scale system at the Southwest Research Institute in San Antonio, Texas. The idea is simple. When renewable energy is abundant and cheap, excess electricity is converted into heat and stored in the bricks. That heat can then be released later as hot gas to power industrial processes, which account for roughly one-fifth of global energy use. While electricity can easily replace fossil fuels for low-temperature applications, the extreme temperatures required for processing minerals and metals are far harder to electrify because conventional wiring fails. Dumping electricity directly into solid materials is a practical solution, and Electrified Thermal isn’t alone in pursuing this approach. Other companies are experimenting with crushed rock, molten salt and sand as heat storage media. But only a handful — including Rondo Energy, Antora Energy and Polar Night Energy — have begun testing their systems in industrial settings. Electrified Thermal plans to begin delivering its first commercial units to customer facilities later this year or early 2027.
Magazine and Journal Articles Worth Your Time
The Fight For Slow And Boring Research, from Asterisk
3,600 words, or about 15 minutes
The long-standing social contract under which federal agencies like the NIH and NSF reliably funded basic research is under strain. While Congress appears poised to resist President Trump’s proposed deep cuts to science funding, uncertainty remains around the use of “impoundment” — a mechanism that allows the president to withhold funds from programs that conflict with his priorities, even after Congress has approved them. As a result, an estimated 86 percent of principal investigators are now exploring nonfederal funding sources, from industry partnerships and philanthropies to venture capital. This essay asks what that shift means for the practice of science in the United States. “When a lab relies on a single federal pillar, its only real audience is peer review — study sections, editorial boards, and journals,” Jolie Gan writes. “But the moment a lab pursues multiple funding streams, it signs up for more than one kind of evaluator and more than one way of being judged. In other words, funding becomes a serious incentive for academics to invest in their science communication.” The piece plays that forward: In a more fragmented funding landscape, labs that can clearly articulate the value of their work may thrive, while those that can’t may struggle. Woven through the essay is a quiet manifesto about how researchers can adapt, by reskilling, rethinking how they communicate and by defending the case for curiosity-driven science.
Meet the Vitalists, from MIT Technology Review
6,200 words, or about 25 minutes
If you thought the longevity movement was already the domain of hyper-optimizing misfits, wait until you meet the Vitalists. According to MIT Technology Review, Vitalism is “longevity for the most hardcore adherents,” a movement built around a simple moral axiom: Death is bad, life is good, and aging should be eradicated as quickly as possible. Vitalists treat this belief less as a preference than as a duty. They have drawn up a declaration that defines what it means to be a good Vitalist, proclaiming that “humanity should apply the necessary resources to reach freedom from aging as soon as possible,” and pledging to actively spread the message “against aging and death.” Achieving that goal, they argue, requires sweeping changes to public policy, culture, finance and medicine — nothing less than a wholesale reorientation of society around the fight against mortality. Founded by Nathan Cheng and Adam Gries, the movement has coalesced around a nonprofit foundation dedicated to “accelerating Vitalism.” The group has apparently been recruiting lobbyists, academics, biotech CEOs, wealthy donors and politicians. It has also been working to help shape US state laws that make unproven and experimental medical treatments easier to access. The story is full of other eye-opening details: secretive research efforts designed to avoid scrutiny, longevity advocates within the US Department of Health and Human Services, and viral marketing techniques borrowed from startup culture to push what was once a fringe idea toward the mainstream.
The music industry’s cautious embrace of AI, from The Financial Times
2,700 words, or about 11 minutes
There’s an artist on Spotify called Sienna Rose. The platform describes her as “a neo-soul singer whose music blends the elegance of classic soul with the vulnerability of modern R&B.” She has over four million monthly listeners. Rival streaming service Deezer, however, offers a different story: It says that “the majority of Sienna Rose’s albums are detected and labelled as AI-generated music using our detection tool.” Not that most listeners would notice. According to Deezer, around 97 percent of listeners can’t reliably distinguish between music created by AI and music made by humans. That reality has left the music industry deeply conflicted: Should it embrace AI as a lucrative new tool, or defend the human creativity that has historically underpinned its business? As this Financial Times piece makes clear, the answer is unlikely to be a simple choice between the two. AI already has many potential roles across the industry: generating entirely new songs, creating hyper-personalized listening experiences, resurrecting voices from the past (Frank Sinatra singing Gangsta’s Paradise, anyone?), or acting as a creative assistant for human artists. Listeners, meanwhile, send mixed signals — many clearly enjoy AI-generated tracks, even as they say they dislike the idea of machines making music. Artists are, perhaps understandably, the most uneasy of all: Established stars worry about their intellectual property being replicated without consent; emerging musicians fear being drowned out by an endless flood of musical slop. Platforms and labels are caught in the middle, balancing commercial benefits against reputational risk. For now, the industry is muddling through, experimenting cautiously while avoiding firm commitments.