Newsletter / Issue No. 64

Image by Ian Lyman/Midjourney.


Thu 26 Mar, 2026

Dear Aventine Readers, 

We wrote about the hopes and fears surrounding AI’s impact on education last spring, at a time when it seemed like a relatively fresh development. Now, less than a year later, it feels like a long-embedded challenge: how to prevent AI from stifling students’ intellectual development while also teaching them to be fluent masters of the technology? 

A recent string of Substack entries refreshes the fears, thanks in some part to posts by a neuroscientist claiming to have discovered links between the use of technology in schools and declining test performance. The science behind his assertions has not been evaluated, but his argument struck a nerve and his voice joins a chorus of others asking whether we need to fundamentally rethink the role technology plays in how students learn. 

More from Substack: 

  • China is more fragile than it seems.
  • How iPhones, not ATMs, killed bank teller jobs. 
  • Reframing energy for the age of electricity. 
  • And how to cultivate employees who make things happen.

Until next week!

    Danielle Mattoon 
    Executive Director, Aventine


    Views from Substack

    Should Technology Be Banned in Classrooms?

    “It doesn't matter what the size of the screen is — if it's a phone, a laptop, a desktop. It doesn't matter who bought it. Is it school-sanctioned? ... It doesn't matter,” Jared Cooney Horvath, a neuroscientist, told the U.S. Senate Committee on Commerce, Science, and Transportation earlier this year. “These things will hurt learning which will in turn hurt our kids' cognitive development.”

    Horvath is the author of three books, the latest of which, “The Digital Delusion: How Classroom Technology Harms Our Kids’ Learning — And How To Help Them Thrive Again,” was published last year. In January he was invited to make his case on Capitol Hill.

    Horvath, whose first book was titled, “Stop Talking, Start Influencing: 12 Insights From Brain Science to Make Your Message Stick,” has also been taking his case to Substack, successfully sparking a fairly one-sided debate on the platform about whether technology broadly — and AI more narrowly — belongs in schools. His contention is that the introduction of technology into classrooms around 2010 has created a generation of students who will enter the workforce fundamentally less equipped than their predecessors across “basically every cognitive measure, from basic attention, memory, literacy, numeracy, executive function [and] general IQ.” 

    This is a heavy-handed argument that glosses over important questions: which types of technology are at issue, when they were introduced, and how students can master the dominant technology of their time without access to it. Nevertheless, the mostly positive responses to his posts suggest that many educators are deeply skeptical of classroom technology, and of AI in particular. 

    A fight over edtech

    A chunk of Horvath’s argument rests on a decline in IQ scores. Throughout the 20th century IQ scores rose steadily by about three points per decade, a phenomenon known as the Flynn effect. Then, around the turn of the millennium, the trend reversed. IQ isn't a perfect measure of intelligence, and there's debate about what's driving the decline. But Horvath cites it as a signal that "despite [Gen Z] spending more time in school than any generation before," the "values, habits, and cognitive skills that education once reliably supported are no longer being cultivated in the same way." Technology, he argues, is the reason why. 

    In a series of Substack posts Horvath also claims to have identified a relationship between digital technology adoption in US states and changes in National Assessment of Educational Progress (NAEP) scores. "Across state after state, scores in both 4th and 8th grade rose steadily for many years prior to large-scale digital adoption,” he writes. “After adoption, however, the trajectory shifts — often sharply — toward decline." Horvath claims the same pattern appears in international assessments.

    Not surprisingly, advocates for tech-free learning have run with Horvath’s findings. The Child First Policy Center, a parent-led policy organization “focused on protecting children and the environments where they learn,” cites the research in a Substack post titled “What the Research Is Finally Revealing About Classroom Technology.” Alison Yeung, a doctor “educating and empowering parents to raise resilient kids in a tech heavy world,” cited the work in another post titled “As Classroom Tech Goes Up, Learning is Going Down.”

    Not everyone buys it. Adam Sparks, a former teacher now building an edtech writing tool, dismantled Horvath's work on his Substack, Edtech Confidential, writing that Horvath "makes some valid points," but that "they're buried beneath cherry-picked statistics, correlational evidence, logical fallacies, straw men, and moral panic." A key issue: Horvath doesn't establish causality and there are plenty of factors aside from technology that could explain declining test scores. Another: Horvath oversimplifies how technology is used in schools, treating "edtech" as a monolithic category when the reality is far messier. 

    When machines do the thinking

    Whatever the merits of Horvath’s specific claims, he has prompted a broader discussion about the damage that may be done when intellectual effort is outsourced to technology, eroding the transformative process of learning. 

    Auron MacIntyre, a right-wing commentator who once taught history in Florida public schools, recalls on his Substack, The Total State, how Google Chromebooks changed his classroom. "Cheating became routine. Students search answers in seconds," he wrote. “The larger problem went beyond quizzes. Googling replaced thinking. Kids refused to read because they assumed a quick search and a copy-paste counted as 'learning.'" AI, he fears, could turbocharge that dynamic.

    Emmarae Stein, a PhD student in history at the University of Rochester, contributed an essay to Cracks in Postmodernity about the moment in her first semester of college when a professor told her she should pursue writing. That feedback, she argues, was meaningful because she’d struggled to produce the essay. "If ChatGPT were to complete or partially write my personal essay," she wrote, "my professor's comment would not have held the same significance: I would have known that I did not complete the work on my own, and his comment would have reinforced the idea that I needed to use AI to create writing of quality. Most significantly, I would have never experienced the important and even life-changing sense of satisfaction at having created something deemed to be good."

    To some extent, the direction of education might depend on how its purpose is defined. We often think of it as a process of betterment for the individual. But Thor Hogan, a politics professor at Earlham College, argues on Thor's Forge that American education has long been distinguished by its ability to produce workers capable of "innovation, adaptation, and complex problem solving." Those capabilities, he writes, are what allowed the US to move beyond resource extraction and low-wage manufacturing into more productive sectors. Education, in this view, wasn’t so much a tool for making individuals smarter as an economic lever that the government got to pull.

    Then there’s the argument that technology itself requires an education divorced from technology. In Wisdom in the Machine Age, Lily Abadal, a professor in the philosophy department at the University of South Florida St. Petersburg, argues that we may need to create AI-free classrooms in order to teach skills that can stand up in an age of artificial intelligence. “An AI-free classroom is not a rejection of technology. [Its] existence shouldn’t be reduced to obsessions about policing students or being in control,” she writes. “We want to form people who can question the designs that are sold as necessary, the motives that are sold as genuine, and the structures that are sold as inevitable and beyond our control. … AI-free classrooms [would] help our students step back and see such questions more clearly.” In other words, we may need to deliberately design friction back into classrooms to make sure people can still think for themselves.

    There's a version of learning in which AI acts as a universal yet bespoke tutor, providing feedback and personalized education across entire populations, vastly increasing the reach of high-level academic support. Doan Winkel, an educator, entrepreneur and technologist, makes the case for this repeatedly on his Substack, How to Teach With AI. He is also an advocate of using AI to make teaching more specific and individualized, for example by encouraging teachers to use Claude Cowork to build customized activities for every student in their class in order to increase engagement and personalization. 

    But recently, at least on Substack, the tone has been firmly against using screens and AI in the classroom. Without decisive evidence on how technology affects learning at scale, one thing seems clear: Decisions about what does and does not benefit students will remain almost entirely subjective. 

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    Substacks in Brief

    Notable Thoughts from Life Online

    China is quietly looking weaker, from Noahpinion

    For years, it’s felt almost inevitable that China would overtake the US as the world’s dominant power. Noah Smith is starting to question that assumption. The country’s push into high-tech manufacturing — electric vehicles, batteries, robotics — has delivered huge gains. But that zealous approach has also created serious problems: overproduction, price wars, thin margins and growing financial strain in the banking system. What looks like industrial strength may, in part, be built on fragile economics. At the same time, China’s strength in manufacturing has long depended on huge pools of engineering talent and accumulated know-how. Advances in AI could erode that advantage, Smith argues, making it easier for other countries to replicate capabilities that once took decades to build. There are also political risks: Smith argues that Xi Jinping, as he gets older, may become more paranoid in the way that other dictators have in the past, potentially introducing new forms of instability at the top of the system. None of this suggests China is in decline, or close to collapse. But it does complicate a once-straightforward narrative. 

    Anthropic employees say they’ll give away billions. Where will it go? from Transformer

    A wave of wealth is about to hit the AI world, and much of it may be given away. As Anthropic moves toward a potential IPO, its employees stand to make billions. Many of them — including all seven co-founders — have pledged to donate large portions of that wealth, often in line with the Effective Altruism movement’s focus on doing the most good. The question is where that money will end up. This post argues that it’s likely to flow toward organizations that already share Anthropic’s worldview: AI safety nonprofits aligned with its research agenda, donor vehicles like Coefficient Giving (formerly Open Philanthropy), and political efforts aimed at shaping AI regulation that the company supports. When both capital and ideology are concentrated in the same networks, the attention of a like-minded group of philanthropists is likely to have an outsize effect. 

    Why ATMs didn’t kill bank teller jobs, but the iPhone did, from David Oks

    Economists often point to ATMs as a reassuring example of how technology affects jobs. When they spread across the US, bank teller numbers didn’t fall — they rose. Automating cash handling made branches cheaper to run, which meant banks opened more of them. Tellers were repurposed into customer-facing roles, and total employment increased. But that’s only the first half of the story. As David Oks points out, smartphones then came along and changed everything. Our phones didn’t just automate more parts of a teller’s job, they made the bank branch itself less necessary. Once customers could manage their finances through apps, the need for in-person service collapsed. Accordingly, teller employment has plummeted, from roughly 332,000 in 2010 to around 164,000 by 2022. What’s the lesson for the moment we’re living through? Technologies that slot into existing workflows to replace specific tasks may reshape jobs without destroying them. But technologies that fundamentally change the structure of an organization and eliminate the need for those workflows altogether can decimate occupations.

    Ten Thoughts on Government Data, from Statecraft

    It’s easy to assume governments know more than they do. This post, drawing on insights from Violet Buxton-Walsh, a fellow at the Institute for Progress think tank who has been working with government datasets, explains why that assumption is unwarranted. For one, the data itself is incomplete: “Gaps occur on every level,” she writes, as government officials “decline to write down valuable information, neglect to write down everything we’re supposed to, and fail to hold on to everything we once wrote down.” For another, government systems that store data aren’t built for analysis. Many function less like databases and more like audit trails: good at answering predefined questions, but hard to use when asked anything new. Extracting insights often requires reworking the data in ways the system was never designed to support. And even when the data can be understood, communicating it is another challenge entirely. “Trying to elucidate statistical subtleties in a policy context is usually a losing battle,” she writes. Anybody who’s trying to share data with policymakers, she writes, should “assume [they]’re talking to an audience of 5th graders.”

    Reframing Energy for the Age of Electricity, from The Electrotech Revolution

    Most energy analysis starts from the supply side: take a fuel like coal and trace where the energy ends up. But that approach often fails to capture something that increasingly matters in an electrifying world: efficiency. This post argues for flipping the perspective by starting with demand. What is energy actually being used for: heating things up, moving machinery, pushing electrons around computer chips? And in what form does it arrive: as molecules burned for fuel, or as electrons flowing through a grid? Thinking about energy supply in these terms, the authors argue, can make it easier to identify where inefficiencies creep in, especially when energy is converted multiple times between heat and electricity. That matters more as renewables, which generate electricity directly and efficiently, become a larger share of the mix. In other words, this framing helps identify instances in which energy systems are unnecessarily wasteful, and when switching technologies could deliver the biggest gains.

    Increasing the supply of people who just do things, from Stephen Kinsella's Newsletter

    In the Bay Area, “agency” — the ability to take initiative and get things done — has reached Holy Grail status. This post asks: Can you produce more of it? Stephen Kinsella, an economist at the University of Limerick, argues that what sets highly effective people apart is not talent so much as accumulated confidence. That confidence is built through experience and is unevenly distributed. Some people never develop it because it’s suppressed culturally, because they lack access to tools and training, or because they’re excluded from opportunities due to a lack of credentials. If that’s true, then agency isn’t fixed, but something institutions can cultivate. Kinsella suggests a few ways to do that: Give people real responsibility early, so they feel accountable; provide feedback quickly, so they can learn and adapt; expose them to large, diverse networks, so they can test ideas and find collaborators; and push them into ambiguous situations in which they’re forced to figure things out for themselves. The main message is that if you want more people who “just do things,” you can build the conditions that create them.

    contact

    380 Lafayette St.
    New York, NY 10003
    info@aventine.org


    © Aventine 2021
    Privacy Policy.