Dear Aventine Readers,
Just about a month ago, OpenAI released Sora, a new social media platform built on AI-generated video that allows users to create videos of pretty much any idea that crosses their minds. The videos, which are stunningly — perhaps addictively — realistic, further blur an already faint line between what's real and what's been created by AI. The questions about what might result from the proliferation of technologies like this are endless, and this month we review what various Substack authors have been thinking and writing about them.
Plus, more excellent offerings from Substack over the past month.
Sincerely,
Danielle Mattoon
Executive Director, Aventine
Synthetic Video, AI-fueled Social Networks and the Slop Economy
A billionaire tech CEO was seen shoplifting in a branch of Target this month. It prompted thousands of words of existential hand-wringing on Substack — but not a single sentence was about the perpetrator’s penchant for petty theft.
That’s because the video clip of Sam Altman, the CEO of OpenAI, wandering into a store and leaving with an armful of computer hardware was entirely synthetic. It was a product of Sora-2, OpenAI’s latest video-generation model, and the centerpiece of a new social media platform, Sora, that the company launched alongside it. The social network is like no other: All of the content on the platform is AI-generated video, created from prompts written by its users. People can upload their own likenesses so they can, say, watch themselves play the trumpet in the middle of a herd of zebras.
Altman suggested that the new model could be a spark for a “Cambrian explosion of creativity.” On Substack, meanwhile, the reaction has been closer to a Cambrian explosion of anxiety. Writers across the tech, media, and culture beats have been quick to ask what this new synthetic visual life means — for creativity, copyright and reality itself, as well as for the future of AI.
Before we dig in, let’s be clear: The overwhelming consensus is that the underlying video-generation model, Sora-2, is impressive. “In its best moments, Sora had my friends and [me] shaking our heads at the improbably high-quality videos it could generate of us doing almost whatever we asked it to,” wrote Casey Newton on Platformer, a newsletter about social networks that started on Substack before moving to the rival publishing platform Ghost. The results are mind-bendingly realistic, whether it’s clips of pirates smashing laptops, giggling influencers or dancing horses. Zvi Mowshowitz, who writes about AI on Don't Worry About the Vase, was struck by the system’s grasp of physics: Objects move and collide believably, with subtle shadows and inertia. So it’s not surprising that, even though Sora’s invitation-only social network is currently available only in the US and Canada, it rocketed to the top of Apple’s App Store charts.
Whose IP is it anyway?
As with seemingly every OpenAI release, questions of data and copyright loom large. On Sources, a Substack about Silicon Valley, Alex Heath observed that OpenAI appears to be taking an “ask for forgiveness, not permission” approach to intellectual property. Testing the app, he tried prompting it to make a video of himself as Superman and was blocked. When he changed the prompt to “flying superhero with a red cape,” it worked fine. That suggests OpenAI has bolted on a thin layer of protections that kick in when a user writes a prompt, while the model itself remains steeped in copyrighted material. It’s easy to find videos that evoke recognizable classic movies and television shows, and in many cases the model mashes them together in bizarre ways — Breaking Bad meets SpongeBob SquarePants, anyone?
AI Central, which covers AI for the creator economy, notes that privacy protections for individuals and censorship of adult content seem fairly robust, writing that “strict controls prevent unauthorized use of public figures or adult content.” And while you can allow friends on the social network to use your likeness in their own videos, “users maintain complete control over their digital likenesses, with transparency tools showing exactly when and how their avatars appear in generated videos.”
On In the Flow, Joseph Augustine, who writes about media, entertainment, culture and their intersection with technology, went deeper, arguing that Sora forces us to rethink what ownership even means. “When you can type something as random, but enticing as ‘A Wes Anderson-style ad for toothpaste featuring Margot Robbie’ and have Sora spit out something polished enough to win a Cannes Lion, you start to wonder who owns what,” he wrote. “The IP creator? The inspiration? The generator? The requester? It’s complicated.” Whether that complexity will produce innovation or years of litigation remains to be seen.
The reaction in the entertainment industry is one of panic, according to Erik Barmack of the entertainment Substack The Ankler. With the first text-to-video models from various AI labs, he writes, early adopters “took the view that this tech could enhance storytelling without completely supplanting storytellers.” But with the release of Sora-2, “the industry’s quiet fascination snapped into open panic. Overnight, the film business saw what it had been enabling: a machine that could generate entire performances without paying, crediting or even asking the people it copied.”
And if you wonder why OpenAI may be playing fast and loose, well, Varun Shetty, the head of media partnerships at OpenAI, came right out and told Newcomer — the Substack covering venture capital and startups by the former Bloomberg staffer Eric Newcomer — that the company minimized the guardrails on the model in order to maximize the creativity users could exercise.
A slop economy
Which brings us to a potential future drenched in synthetic media and its cultural implications, on which, no surprise, people have opinions.
One of the most viscerally angry reactions came from Rahim Hirji on Box of Amazing, who spent more than 2,000 words imagining how social feeds filled with AI slop will contaminate knowledge, warp our perception of reality and amplify inequalities. (AI slop, for those unfamiliar with the term, is the pejorative label for content made with generative AI, specifically content produced in large volumes that is of poor quality, lacks originality and adds nothing of value.) “Probably we’ll just keep scrolling through the slop, mistaking fluency for meaning, engagement for connection and watching the machine talk to itself while pretending we’re still part of the conversation,” he wrote. Or it could be worse than that. Johan Michalove writes a Substack called resonetics, about AI, philosophy and associated deep thinking. In a post about Sora subtitled “Infinite meaninglessness,” he asks: “When the feed becomes an infinite scroll of never-events optimized for maximum attention capture, what happens to our ability to perceive actual events? To distinguish between what matters and what’s just engineered to be compulsively watchable?”
Then there is the issue of disinformation. In some ways, this might not be quite as huge a problem with Sora as you might expect because, as Max Read pointed out on his future-gazing Substack Read Max, "Sora won’t let you generate video of living humans or sexual scenarios." But it can still be used to create highly charged political content. Read reminded us of why this is deeply problematic: "The ability of nearly anyone on the planet to create believable video of anything, on demand, for free, is alarming not simply because it might be misleading to naive viewers, but, worse, because it makes cynical viewers of us all, suspicious of any video and disbelieving of what was once gold-standard evidence of 'what happened.'"
Royce Branning, founder of a startup that helps people quit doomscrolling, offered a more hopeful take on his personal Substack. His argument boils down to this: While AI may be powerful and persuasive, its capabilities aren’t sufficient to allow a single company to overpower the cultural will of a society that rejects the premise of what AI offers. He points out that the harms of social media have produced cultural pushback such as “school legislation” and “detox culture,” and that the same response may be coming AI’s way. “The awareness level of the cultural consciousness makes me hopeful that we can dance with, rather than diametrically oppose, these new technologies,” he wrote.
But the most optimistic — and arguably Pollyanna-ish — take comes from Limitless, a new Substack that promises “dispatches from the frontier as humanity pushes forward into its next era of progress.” Rather than slop, it suggests, Sora could be a path to self-discovery: “[Imagine] simulating high-stakes scenarios or situations that could happen in your own life before they happen and using the output video to inform which path you’d most likely want to pursue in the real world, reducing risk of failure and optimizing for your goals.” Note to Limitless: Let us know when that happens!
The platform play
While the internet argued, OpenAI kept going. In fact, over the last few weeks, the company has rolled out not just a social network but a whole suite of tools and products that make Sora part of a much larger strategy. There’s Pulse, a proactive agent that studies your ChatGPT interactions and builds a personalized feed of ideas; a new marketplace of third-party apps such as Coursera for learning, Zillow for property searches and Canva for design — all accessible inside ChatGPT; and commerce tools that allow users to make purchases without leaving OpenAI’s interface. Finally, it launched its own web browser, Atlas, which has ChatGPT baked in.
As venture capitalist Saanya Ojha noted on her Substack, The Change Constant, OpenAI “built the models, then the APIs, then the apps. Now it’s building the environment.” If the company succeeds, she argued, “ChatGPT becomes the default cognitive layer of the internet — the place where human intent gets parsed, priced, and routed.”
That may seem a strange choice for a company once devoted to building artificial general intelligence. But at its heart lurks the most fundamental of motivators: money. As Altman has admitted, building AI is expensive, and the company’s data center bills are staggering. These new ventures could become revenue engines that help fund OpenAI’s core research, even if it hasn’t explicitly stated how that might work just yet.
Newton draws a historical parallel here that could serve as a cautionary tale. When Facebook began flooding the web with new features that it thought could one day translate into income, it — like OpenAI — had no clear path to monetization. Ultimately, Facebook had to rein in some of its ambitions after the Cambridge Analytica scandal revealed that it was providing third parties with unfettered access to user data. It’s easy for any company to make missteps when it races headlong into entirely new products as a sideline to its core business, and OpenAI, too, is experimenting at full speed — and with equally intimate user data, this time about what people think, search and dream about.
This is a whole new set of problems for OpenAI to wrestle with. It is also, some have observed, a distraction from its stated aim of building AGI. For the AI safety and policy Substack Transformer, one potential upshot is that the diverted attention might delay the rise of a superintelligence built by the company. So perhaps a slop economy has an upside?
Listen To Our Podcast
Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.
Notable Thoughts from Life Online
The dawn of the post-literate society, from Cultural Capital
The tech-related Substack post that seems to have captured the most attention this month comes from Cultural Capital. James Marriott, a columnist at The Times of London, wrote a paean to the act of reading and all that it has delivered to humanity, and an indictment of the smartphone as the cause of reading’s decline and the accompanying erosion of various measures of cognitive ability. “This draining away of culture, critical thinking and intelligence represents a tragic loss of human potential and human flourishing,” he wrote, going on to argue that less reading means less creativity and weaker democracy, a shift that feeds “panic, hatred and tribal warfare” spread by podcasts and video. “Whatever happens,” he laments in his conclusion, “we are already seeing the world we once knew melt away. Nothing will ever be the same again.”
The Room Where AI Happens, from Concurrent
Thoughts on The Curve, from Interconnects
We’re All Behind The Curve, from Transformer
These three posts document a recent AI conference called The Curve, held at Lighthaven, an event space in Berkeley, California, that The New York Times has called “the de facto headquarters” of the Rationalists, a group convinced that “artificial intelligence can deliver a better life if it doesn’t destroy humanity first.” The attendee list included Ben Buchanan, the former White House special adviser on AI; Yoshua Bengio, the Turing Award-winning researcher; and Jack Clark, a co-founder of Anthropic, among about 250 other AI industry insiders. Inside, discussions centered on the breathtaking pace of AI progress and how to protect humanity from it. As Transformer notes, the tone felt worlds apart from public debates about AI bubbles. Stepping out of The Curve, it wrote, “felt like coming back from the Moon.” Whether or not you share the existential worries of the attendees, taken together these posts offer a glimpse into the closed-door conversations shaping how a small pool of influential thinkers believe AI should — or shouldn’t — evolve.
Goldman Sachs, Now Hiring… for OpenAI, from The Change Constant
OpenAI has begun hiring former bankers to help train its models to perform entry-level finance work — the kind of spreadsheet wrangling that junior analysts once sweated over at 3 a.m. Here, Saanya Ojha explores what that shift means. To make AI useful in complex domains like finance, law or life sciences, scraping text from the open internet will go only so far; eventually you need to capture the knowledge in the minds of those who make the institutions what they are. But once you do that, the traditional apprenticeship model collapses. And if AI systems learn directly from practitioners, every firm could gradually start to look more homogeneous, with the sparks of creative brilliance that once distinguished star performers drowned in the professional-services equivalent of slop.
Who Builds the Internet’s Infrastructure (And Why $1 Trillion Is Shifting to Them), from Global Data Center Hub
AI’s Grand Entanglement: The Subprime Dynamics of AI Compute, from Pramodh’s Substack
If you’ve ever wondered who actually owns and operates the servers behind your digital life, this post from Global Data Center Hub is an excellent primer. The hyperscalers — Amazon, Microsoft, Google, and Meta — may dominate, but they don’t build or control everything. A vast network of specialist developers, leasing companies and private-equity-backed infrastructure funds finances and maintains the rest. Yet the financial plumbing of this ecosystem is getting even stranger. As Pramodh Mallipatna explains in the second post, the infrastructure used to train and run AI systems is starting to exhibit problematic “circular” investments: a troubling financial feedback loop in which customers end up holding stakes in their own suppliers. Take a recent deal between OpenAI and AMD: “OpenAI’s cost center (chip purchases) [now] becomes an asset (AMD equity). The more OpenAI spends on AMD chips, the larger its potential return in AMD shares.” In other words, the same money that fuels AI’s growth is now also propping up its suppliers’ valuations. Also entangled in these sorts of dynamics are some of the biggest companies in the world: Nvidia, Google, Microsoft, Oracle, Broadcom and Samsung. Which might be fine, until a shock somewhere in the chain ripples through the entire ecosystem.
How Claude Code is built, from The Pragmatic Engineer
Claude Code, Anthropic’s AI coding assistant, has quickly become one of the most talked-about developer tools in the world. In this post, The Pragmatic Engineer’s Gergely Orosz sits down with three of its creators to unpack how it’s built, and how the system now does much of the work itself. Engineers at Anthropic, it turns out, are heavy users of their own AI: A pretty remarkable 90 percent of code in Claude Code is written by Claude Code. It’s a fascinating look at how AI is starting to design and maintain the very tools that produce the next generation of software.
What Is America’s Infrastructure Cost Problem? from Statecraft
We all know that infrastructure is incredibly expensive in the US. But in this interview for Statecraft, policy writer Santi Ruiz speaks with Zach Liscow, the former chief economist at the Office of Management and Budget, to unpack the structural explanations. Liscow points to three main culprits: burdensome permitting, procedural red tape (often around procurement) and a shortage of qualified public-sector staff. But his most striking observation is about data, or, more specifically, the lack of it. Even basic details, like a project’s proposed timeline or final cost breakdown, are often impossible to obtain. “It would take a tiny amount of funding to have much better data — to learn huge amounts about the hundreds of billions of dollars a year that we spend on this stuff,” Liscow says.
Are you high-agency or an NPC? from Jasmine Sun
Jasmine Sun’s latest post is a look at the linguistic tropes shaping Silicon Valley’s self-image, a sort of new micro-dialect for the AI age that could make you cringe or completely baffle you. Perhaps both. There’s “agency,” meaning initiative, resourcefulness and a “high internal locus of control.” There’s “NPC,” borrowed from the gaming term non-player character, to describe those who “go about their quiet lives, playing LinkedIn Games and watching Marvel movies, blissfully blind to the technological tsunami mounting behind them.” And there’s the “permanent underclass” — that is, the people who fail to capitalize on the AI boom and are destined to be the automated rather than the automaters. It’s a discomfiting look at how Silicon Valley elites are thinking about the way that technology is reshaping society.