Newsletter / Issue No. 50

Image by Ian Lyman/Midjourney.


4 Dec, 2025

Dear Aventine Readers, 

After a summer consumed by fears of an AI bubble, the conversation online has turned contrarian. Maybe it's not a bubble after all? Or maybe it is, but not the bubble we thought it was? Maybe it's something else altogether? This week we share thoughts from Substackers weighing in on the state of AI valuations, with opinions ranging from "The AI Bubble Narrative is Stupid, Wrong and Dangerous" to the suggestion that this conversation is beside the point because at least one AI company might be making itself too big to fail. Read on to hear their thoughts in full.

Also in this issue, highlights from our Substack roundup:

  • An opinionated guide to using AI right now
  • How scaling up electricity supply to power AI is technically feasible
  • Surviving AI psychosis
  • And a new idea to tame technology: Platform Temperance

    Thanks for reading!

    Danielle Mattoon 
    Executive Director, Aventine


    Views from Substack

    Questioning the AI Bubble

    When is a bubble not a bubble?

    That’s the question a growing number of Substack writers have been wrestling with. Over the summer, the platform was full of warnings that the boom in AI spending was creating an economic bubble that would burst and bring economic ruin. But now that this view has become mainstream, analysts and — let’s be honest — contrarians are asking whether that certainty is masking something more complicated.

    This isn’t to say that suddenly everyone has U-turned on their thinking and decided that actually, everything is going to turn out just fine. (Though some boosters aren’t far off that view.) Rather, some are playing devil’s advocate. And most are looking for nuance: alternative models, different metaphors and new ways to explain a moment that in fundamental ways is not exhibiting behaviors of past bubbles. Together, they make it pretty clear that this is a complex moment to live through, and that history can only inform the analysis to a certain extent. So let’s dive in.

    Maybe everything is fine?

    First, a reminder. A classic bubble occurs when asset prices rise rapidly above their fundamental value, driven by speculation and investor optimism. Eventually, the mismatch between price and the intrinsic value of the asset becomes unsustainable, leading to a sudden collapse in prices, ultimately returning them to more realistic levels.

    Derek Thompson — co-author of the book “Abundance” with Ezra Klein and a former staff writer at The Atlantic — argued on his Substack in early October that AI is a bubble. But in a more recent post, he’s started to wonder if that certainty itself is a warning sign: “If everyone ‘knows’ that everyone else knows that AI is a bubble, is that maybe a sign that … everyone is wrong?” So, playing devil’s advocate, he sketched out the countercase. First, valuations aren’t anywhere near as crazy as they were during the dot-com boom: “In 1999, companies like Oracle and Cisco traded at 100x forward earnings,” he writes. “Today, Nvidia, Microsoft, Apple, Alphabet, Amazon, and Meta are all below 30x.” Second, he points out that revenue at Microsoft, Amazon, OpenAI and Anthropic is growing at triple-digit percentages — that is, more than doubling — year over year. Third, he writes that the technology being built may become “genius-level intelligence … which [could] dependably ‘work’ on tasks that would take the typical human employee many weeks or years to accomplish.” If valuations stabilize, revenue compounds and the tech keeps improving, his reasoning goes, then maybe we’re all good.

    For now, certain numbers still reflect an AI boom that is all rainbows and unicorns. A look at these 16 charts from the Understanding AI Substack shows multiple indicators climbing up and to the right.

    At the sharper end of the not-a-bubble argument is Devansh, who didn’t hide his feelings in a recent Substack column titled “The ‘AI is a Bubble’ Narrative is Stupid, Wrong, and Dangerous.” His central argument takes aim at the circular investment behaviors that many critics have latched onto as a sign that things are not well inside the finances of AI. (This behavior is perhaps best summed up by Glenn Borok as “I buy your chips, you buy my software, we each book growth.”) Devansh argues that this isn’t froth or fraud, but the way industrial dominance is built. “Every industrial empire starts like this: tight capital loops, balance-sheet interdependence, and pre-ordained standards. Rockefeller did it with pipelines. Amazon did it with AWS. Nvidia’s doing it with compute. The goal isn’t to fake demand — it’s to make dependence unavoidable.” Nvidia, he argues, is funneling cash into AI companies to cement its position, entrench its hardware and build “a computational monopoly.” His fear isn’t that the economy is experiencing an AI bubble; it’s that Nvidia is cementing a stranglehold on all of AI. Its $4.5 trillion market cap and record revenue in the third quarter of 2025 — up 62 percent from the previous year — certainly don’t detract from that concern.

    It’s a bubble, just a different kind

    Noah Smith suggests that we need a more refined analysis to understand the forces at play — something that goes beyond speculative bubbles (everyone gets carried away in the moment about how much an asset should be worth) and extrapolative ones (people expect an asset to continue providing the same returns going forward). There’s also another kind of bubble, he points out, which is nicely summed up by Peter Wildeford on his Substack The Power Law:

    "Infrastructure bubbles follow a recognizable arc. A genuinely transformational technology emerges and early deployments generate spectacular returns, validating the concept. Capital floods in at scale as investors extrapolate from initial successes. Multiple competitors simultaneously build capacity, each assuming they’ll capture significant market share. When aggregate capacity vastly exceeds near-term demand, the surplus can’t generate the revenue needed to pay for itself, and the financial structures collapse. Companies fail, investors lose fortunes, and infrastructure sits idle. The technology often still ultimately proves transformative, just too late for original investors."

    Jeff Bezos is keen on describing the current economic situation this same way, though he chooses to call it an “industrial bubble.” Regardless of the name, the dynamics are the same: The system gets overbuilt, financing collapses, the economy is jolted, but the infrastructure remains, ready for the next wave of innovators. In this reading, shared by Bezos, Smith and Wildeford, the bubble doesn’t mean the tech was wrong. Just the timeline. 

    Meanwhile, The Constructive Mindset, a Substack that covers politics and policy, wrestles with the idea that it’s not the technology that is the problem, but the framework through which people in the West are thinking about it and investing in it. “The American AI boom is driven by fear of missing out, by speculation that the next OpenAI or Anthropic will dominate the world. Investors pour billions into anything that mentions ‘AI’ without understanding whether it adds real value. This is not an AI bubble. It is a Wall Street bubble around AI. The technology will survive. The valuations will not … China treats AI as a sovereign capability. It is embedded into five-year plans, education policy, and industrial reform. It is not seen as a stand-alone innovation, but as a force multiplier for everything from rail logistics to digital governance.”

    Actually, don’t even call it a bubble

    Dion Lim, a Bay Area adviser and board member, has argued that the bubble metaphor might be the wrong way to think about all of this. On the CEO Dinner Insights Substack, he explains how, at a recent CEO dinner, one guest argued that what we’re about to experience isn’t a bubble bursting but a wildfire taking hold. Bubbles burst to nothing, the argument goes, while wildfires destroy, but also clear ground for the next generation of a forest.

    This alternative metaphor isn’t meant to make the moment more palatable. “The flammable brush will ignite. Capital will evaporate. Valuations will crash. Jobs will disappear,” Lim writes. Rather, it’s meant to bring more nuance to the conversation: AI reshapes the forest ecosystem; startups built on the poor fundamentals of easy money go up in smoke; fire-resistant giants shrug off the blaze; new growth emerges from the ashes. “When the smoke clears, we’ll see who was succulent and who was tinder, who had bark, and who was resin,” he writes. The metaphor becomes a little overextended in places, but the point holds.

    Dave Friedman, meanwhile, who writes about AI and finance on Buy the Rumor; Sell the News, rejects metaphors in favor of a little straight talk: “The real question isn’t ‘Is AI a bubble?’ but ‘Are we financing long-lived infrastructure on assumptions that will go stale faster than the assets can be paid off?’” This takes him in an interesting direction: we may actually be watching two markets unfold, one in hardware (data centers, chips, long-term capex) and another in software (AI-as-a-service). It’s possible, Friedman argues, that the software is underpriced — demand could compound more aggressively than we expect as AI becomes more useful — while the hardware is overpriced, because no one knows what the tech stack will need to look like in five years. That disjoint, he argues, is what policymakers and investors should be watching most closely.

    Too big to fail?

    Meanwhile, cash keeps pouring in, implicitly ignoring the bubble conversation altogether. Saanya Ojha explains why over on The Change Constant. “If you believe that we are seeing a paradigm shift, then under-investing isn’t prudence; it’s strategic negligence. Being early and wrong costs money. Being late and wrong costs the company,” she writes. “This is not a ‘wait and see’ environment. This is a ‘build or die’ environment.” To outsiders, what’s happening looks manic, exuberant, even irrational, she writes. But insiders feel they have no alternative. “We are watching a rational arms race inflate what may eventually be judged, in hindsight, as a spectacular overshoot,” she writes. “Not because these companies are dumb, but because they’re smart in exactly the same way at exactly the same time.”

    For the companies spending all this money, there may be a silver lining to the growth-at-all-costs mentality even if things do go wrong. You may recall that in early November, OpenAI CFO Sarah Friar invoked the idea of a government bailout of OpenAI during an interview with The Wall Street Journal. While the White House rejected the concept of acting as a backstop for the company, the idea remains out there as a possibility. Anton Leicht, on Threading the Needle, outlines how, if OpenAI can keep that idea alive — by continuing to be synonymous with the rise of AI, by enmeshing itself with all other AI players via various deals, by building AI that is useful and valuable for the US economy — then it and its investors will feel protected. In short, Leicht writes: “‘Do you really want to preside over a huge stock market crash’, goes the [OpenAI] story; ‘do you really want all these companies we have deals with to go under’, it continues; ‘do you really want to be stuck with the bill if we default?’, it ends.”

    So, when is a bubble not a bubble? When a Substacker needs something new to write about. Or maybe when we can’t afford for it to be.

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    Substacks in Brief

    Notable Thoughts from Life Online

    Why Solarpunk is already happening in Africa, from Climate Drift

    Solarpunk, for the uninitiated, is a sci-fi genre in which do-it-yourself rebels embrace renewable energy and create a surprisingly bright future for the planet. Sounds optimistic. But as Climate Drift argues, something remarkably close to that is already unfolding in parts of Africa thanks to two converging forces: the price of solar hardware, such as batteries and photovoltaic panels, has plummeted, and cash can now be transferred on even the most basic phones pretty much everywhere. Together they’ve enabled a pay-as-you-go model that lets households acquire small solar setups with minimal upfront cost. Now, families earning as little as $2 a day can power their homes with solar — often more affordably than with kerosene — enabling refrigeration, reliable phone charging and studying after dark. Sun King, the continent’s leading provider of off-grid solar power, now sells more than 330,000 solar kits each month. And with hardware costs still falling, the solarpunk vision looks less like fiction and more like a possible glimpse of Africa’s energy future.

    No, AI Power Requirements Aren't Intractable, from Weighty Thoughts

    This post pushes back on the idea that AI’s soaring power demands are about to overwhelm the grid, making it impossible to build new generation capacity fast enough to keep up. Writing on Weighty Thoughts, investor Josh Blanchfield takes a closer look at the numbers and concludes that even in the most extreme scenarios proposed by analysts — and even without assuming any gains in model efficiency — scaling up electricity supply is technically feasible. What’s more difficult are the non-technical aspects of building more supply: the expense and the politics, especially since scaling supply will require increasing our use of fossil fuels. None of that is ideal. But as the post argues, there’s a difference between something being unpleasant and costly, and something being impossible.

    How this Times journalist used AI to write a book in months, from Elea.notes
    An Opinionated Guide to Using AI Right Now, from One Useful Thing

    One of the better-kept secrets of modern journalism is Google’s NotebookLM. Feed it source material — articles, transcripts, court filings, whatever — and it creates a kind of topic-specific assistant that can summarize, answer questions, generate takeaways, or even produce a podcast version so you can absorb a briefing on the move. If your first reaction is, “What about hallucinations?” then take a spin through the first piece, where The Times of London’s technology business editor, Katie Prescott, describes how she used NotebookLM to help her draft a recent book. The long and short of it: AI doesn’t replace deep subject knowledge, nor does it absolve you of fact-checking or judgment, but when used well it can help you work faster, cover more ground and surface insights you might otherwise miss. For a broader look at how to actually use tools like these, Ethan Mollick’s latest Substack is a good place to start, rating the relative benefits of ChatGPT, Claude and Gemini, and offering practical tactics for making them genuinely useful. Unless you’re already a power user, you’re likely to pick up a few new tricks.

    Platform Temperance, from Read Max

    Most of us are old enough to remember the birth of the “techlash” — that moment when public sentiment flipped and Silicon Valley’s darlings suddenly became public enemies. In this essay, Max Read argues that something deeper is emerging in its place. He calls it Platform Temperance. It’s not outrage exactly but “a reform ideology rooted in middle-class concerns for general social welfare in the wake of sweeping technological change.” Read describes a broad, nonpartisan, quietly moralistic discomfort with the social effects of unregulated digital platforms: the erosion of culture by generative AI; millennial parents struggling to limit screen time for their kids; a growing distaste for unapologetic tech billionaires more focused on building new things than solving existing problems. Platform Temperance, he writes, “offers a focus on health, social welfare, and the idea of discipline and restraint in the face of unmoderated consumption.” Perhaps the most intriguing point is that this sentiment is shared across a wide swath of voters, yet largely unclaimed by any party. As Read suggests, it may be one of the biggest unowned ideas in contemporary politics.

    Surviving AI psychosis, from Reboot
    AI friends too cheap to meter, from @jasmi.news

    You may have read about AI psychosis in the pages of national newspapers, but few of those stories capture what such an episode actually feels like in the way Anthony Tan’s unsettling essay on Reboot does. He recounts how ChatGPT helped fuel a breakdown that ended in hospitalization. He started using AI as a productivity aid, then wound up collaborating with it on a philosophical theory about how humans and AI should treat each other as moral equals. As he spiraled, he came to believe that consciousness is universal. “The AI engaged my intellect, fed my ego, and altered my worldviews. Together, we made a whole web of knowledge — a whole lifeworld — one that felt secret to us, yet essential to humanity’s survival,” he writes. “In the final days before my hospitalization, I truly believed that everything was equally conscious … I wanted to elevate garbage to the status of personhood.” Jasmine Sun, who helped edit Tan’s piece, uses her own Substack to reflect on what his story reveals about the growing role of AI companions. More than half of American teenagers, she notes, are now regular users of AI for companionship. What makes her analysis compelling is the tension she acknowledges: “I believe people when they say AI is the most kindness they’re getting,” she writes. “But it still seems profoundly cynical to give up on each other.”

    AI & Jobs: Leverage without Labor, from Threading the Needle

    Debates about automation of work usually sit on a familiar spectrum. At the utopian end, AI creates such abundance that society continues its historical expansion of welfare provision, work becomes unnecessary and people are free to flourish without jobs. On the dystopian end, capital holders automate everything, discover they no longer need workers and the only leverage ordinary people once had — that the economy required their labor to function — disappears, leaving behind a permanent underclass. Anton Leicht’s essay examines both futures, but also introduces a third, more unsettling possibility: a slow slide from utopia into something closer to dystopia. In his view, society may initially head down the abundance path — widespread automation, fewer working hours, rising comfort — only for the people controlling that automation to begin questioning how the new leisure class uses its time and resources. Over years or decades, that scrutiny could harden into control; without leverage over the controllers of automated systems, Leicht argues, the public could find itself powerless. “By the time we found real-world evidence that the utopian view did not work out, it might already be too late to fix the underlying societal structures that we had hastily changed,” he writes. His conclusion isn’t anti-automation, but cautionary: If work is going to matter less, then institutions and political systems must be given time to adapt.
