Newsletter / Issue No. 33

Image by Ian Lyman/Midjourney.

16 May 2025

Dear Aventine Readers, 

Perhaps no sector has been more roiled by the arrival of ChatGPT than education, given the technology’s unnerving ability to do students’ work. Now, two years in, schools are beginning to wrestle with the long-term implications of AI in the classroom. Anecdotally, student abuse of the technology is rampant, at least at the college level. But the technology is so new that we don’t yet have a clear national or global picture of who is using it, how it’s being used and what effect it’s having on student outcomes. At the same time, preliminary studies show that ChatGPT-like systems — if properly designed — could offer students significant benefits, especially those in classrooms with little opportunity for one-on-one attention.

How should educators — and education systems — manage this? We spoke to a range of experts who are thinking about this question, and about what it means for learning and teaching going forward. 

Also in this issue: 

  • Behavioral science is reducing carbon emissions, one hospital meal at a time.
  • AI is learning to negotiate peace deals (though it’s got a lot to learn).
  • A dramatic rethink of Parkinson’s disease offers new hope for treatment.
  • And get ready for lab-grown chocolate! 
Thanks for reading,

Danielle Mattoon
Executive Director, Aventine

    The Big Idea

    AI Is Reshaping Education. The Question Is, How Should Education Respond?

    A little over two years ago, panic gripped the education sector.

    At the end of November 2022, OpenAI launched ChatGPT. In the weeks and months that followed, students and educators alike experimented with what the powerful technology could do. Educators reeled, imagining classrooms overwhelmed by mass cheating that would stunt creativity and critical thinking, and expose students to greater levels of misinformation in the process. There was also the specter that large language models could, one day, put teachers out of work. The response from certain corners of the education sector was swift and severe: The Los Angeles Unified School District was the first major school system to block access to ChatGPT on its networks in December 2022. Seattle Public Schools followed later that month and the New York City Public Schools — the largest school district in the U.S. — blocked access in January. School districts in Maryland, New Jersey and Virginia, among others, soon followed suit. 

Tutoring companies, meanwhile, saw an opportunity: Student learning, the thinking went, would accelerate thanks to personalized, machine-enabled tutoring that could cater to any student’s learning style. As Sal Khan, the founder and CEO of the online educational platform Khan Academy, wrote in his 2024 book, Brave New Words, “the technology could enhance and enrich every learning domain … in ways no other tool can or does.” There was also a belief that AI-based tools could be used to augment the skills of teachers, helping them create more engaging lessons and lectures without adding hours to their workloads, in addition to providing useful feedback on student performance and analyzing classroom data.

    Two years on, the intensity of those early reactions has diminished on both sides. Many of the school district bans on AI have been repealed or softened, a tacit acknowledgment that the genie is effectively out of the bottle and ignorance is not an option. “It’s irresponsible to not teach it,” Stephanie Elizalde, superintendent of the Dallas Independent School District, recently told The New York Times. “We have to. We are preparing kids for their future.” And while companies like Khan Academy remain bullish on the potential of AI as a learning tool, enthusiasm has been tempered by how difficult creating bespoke tutoring actually is. The company is “not saying it can do everything,” said Kristen DiCerbo, chief learning officer at Khan Academy. “We are trying to be a little bit measured.”

Meanwhile, educators and students, particularly at the college level, have been raising alarms about the widespread use of ChatGPT-like tools to write papers, and the lack of effective means to curb it. College instructors, too, are coming under fire from students for using the technology to prepare lectures and deliver feedback. And as with all forms of AI, there is concern that large language models often present false information as fact and perpetuate bias thanks to the data they are trained on — both obvious challenges in an educational context.

So where does all this leave things? It’s important to note that not even the most viral technologies reach 100 percent saturation overnight. Despite a sense that ChatGPT is omnipresent in schools and a danger to the students who use it, at the pre-college level, at least, that does not seem to be the case. A January 2025 Pew poll found that 26 percent of teens have used ChatGPT for some sort of schoolwork, up from 13 percent in 2023. And according to a different Pew poll conducted this time last year, only a quarter of K-12 educators believed AI would do more harm than good in K-12 education. Detailed statistics on use by college students are harder to come by: Most surveys suggest that use of the technology is pervasive among these students, but it’s hard to discern what — from helping with research to flat-out writing papers — they are using it for.

To get an understanding of how this nascent technology is being adopted, experimented with and regulated in educational settings, Aventine spoke with AI experts, educators and technologists grappling with how AI can and should be used for learning. We focused on the pre-college level because that is where much of the innovation is being directed. While acknowledging the potential for abuse when it comes to AI tools, many experts believe that AI could help improve teaching. They also think that it needs to be part of curriculums so that teachers and students can learn to navigate both the dangers and upsides of the technology. What’s less clear, even among its proponents, is how deeply AI should be allowed to penetrate the fabric of education, and to what extent it will be able to, given that the sector is typically resistant to change.

    Business as usual, but with AI

Educational AI is not monolithic, and its tools vary greatly in complexity. At one end of the spectrum might be software that can, say, rewrite a text for a teacher to suit a particular student’s reading level. At the other end might be a personalized tutor that is available to students 24 hours a day and can provide help on any subject. The clearest dividing line, experts said, is between tools built for educators and those built for students.

The attraction for teachers is pretty clear. Some tools can reduce prep time, perhaps helping with lesson planning or writing a quiz. Others can equip teachers with capabilities they might not otherwise have, like data analytics or the ability to create interactive media. It’s “the jobs that teachers do, or that [school] leaders do, with AI plugged into the back of it,” said Dan Fitzpatrick, a former teacher and author of “The AI Classroom.”

    Company representatives who spoke with Aventine downplayed the time-saving impact for teachers, preferring to talk about augmenting their capabilities. “What we're fundamentally trying to do is make really, really high quality lessons,” said Jens Aarre Seip, cofounder and CEO of Curipod, which builds products aimed at helping teachers use AI to design lessons and classroom activities. One obvious reason for this emphasis is that teachers’ concern that AI might someday automate their jobs is real, even though at this stage there is no indication that machines could be more effective than a teacher in a classroom. Teacher-focused tools also sidestep some of the biggest questions around bias, misinformation and misuse of AI in education. If AI is used behind the scenes, with a teacher in the loop, many of the ethical liabilities recede.

For Eric Curts, a former eighth grade math teacher based in Ohio who now advises schools on how to work with technology, helping with the business-as-usual work of teachers is “the low-hanging fruit” of applying AI to educational settings. Helpful, yes. Transformative? Not really.

    A tutor in your pocket?

    The more ambitious — and more contentious — idea is that AI could reshape the student learning experience, especially when it comes to personalized support.

In 1984, the educational psychologist Benjamin Bloom found that students who received one-on-one in-person tutoring using mastery learning — an educational approach in which students must demonstrate a high level of proficiency in one topic before moving on to the next — improved their performance by two standard deviations, moving from the 50th percentile to as high as the 98th compared with a control group. Although the replicability of Bloom’s results has been questioned, his findings are still held up as a goal by those looking to augment classroom teaching: Curts referred to the study in conversation and Khan cites it in his book. The reality in most schools, unfortunately, is that high-intensity one-on-one tutoring at scale isn’t economically feasible and routine one-on-one attention is hard to get. “If you're in a classroom with one teacher and 25 kids, it's really hard to get that little bit of support you need,” said DiCerbo.
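
Where does that 98th percentile figure come from? If test scores are roughly normally distributed, a student starting at the mean who gains two standard deviations lands at the value of the standard normal cumulative distribution function evaluated at 2. Here is a minimal sketch in Python, illustrative only and assuming normality:

    from scipy.stats import norm

    # A student at the control-group mean sits at the 50th percentile.
    baseline = norm.cdf(0.0)   # 0.500

    # Bloom's tutored students gained roughly two standard deviations.
    tutored = norm.cdf(2.0)    # ~0.977, i.e. about the 98th percentile

    print(f"baseline: {baseline:.1%}, after +2 sigma: {tutored:.1%}")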

Over the past two years, companies building AI systems that aim to provide high-quality one-on-one support have proliferated. Brainly, Curipod and Khan Academy all offer products that provide some form of real-time feedback, as do many others, including Brisk, Chegg, Duolingo, Magic School, School AI and Quizlet.

    Still, the spectrum of implementation is broad. Some systems, like Curipod’s, use AI in limited ways. Teachers can, for instance, assign an in-class writing exercise that students then submit electronically; an AI assesses the work and prepares detailed feedback which — crucially — a teacher is then required to review and modify if necessary before it is shown to students. This can offer students more detailed and thorough feedback than their overworked teachers might otherwise be able to provide. Meanwhile, teachers serve as a backstop against the AI getting things wrong. “There will, at least in the foreseeable future, be that hallucination risk problem,” said Seip. “So the teacher needs to be the responsible person.”

    Others, like those built by Khan Academy and Brainly, offer more autonomous, always-on support. The specific products built by these companies vary, but they aim to be 24/7 tutors, coaching students rather than simply giving answers, and using the Socratic method to help students explore their understanding of various topics while adapting to individual needs. One benefit the companies cite is that students appear less embarrassed about asking AI for help. “The AI can re-explain and re-explain and use analogies and keep working with [a struggling] student,” said Curts. “Another [student] who gets it right away, the AI can let them go deeper into things they're interested in.” The hope is that by fine-tuning the AI coaching — making it more responsive and intuitive — it might be able to bring about the kinds of improvements highly skilled human tutors offer. 

    How good are today’s AI tutors? 

    There is early research suggesting positive effects of AI tutoring. A recent meta-analysis by researchers from Purdue University concluded that AI tools have a “significant positive effect” on students’ academic performance. But things get a little murkier if you look more closely.

    In a study from the University of Pennsylvania published in 2024, high school students were given a math lesson and then asked to solve practice problems with assistance from either class notes, ChatGPT or a version of ChatGPT that could provide only hints, not answers. The cohort using regular ChatGPT scored 48 percent better than those using class notes, while those using the “hints only” version of ChatGPT scored 127 percent higher. In a subsequent test of the material in which the students could not use notes or assistance of any kind, however, those who had used regular ChatGPT to study scored 17 percent lower than those who used class notes and those using the “hints only” version of ChatGPT scored about the same as the class notes group. The study’s authors concluded that “substantial work is required to enable generative AI to positively enhance rather than diminish education.”

None of the companies developing AI-based learning tools that Aventine spoke with has yet fully proved out its approach. Curipod pointed to trials it has conducted in schools in California and Texas showing that its technology had a positive impact on test scores, but conceded that the trials were “quasi-experimental” because they weren’t fully randomized. DiCerbo explained that Khan Academy is still trying to validate its hypothesis that Khanmigo, a personalized tutor it developed to run on top of OpenAI’s large language models, can create the kind of high-quality tutoring interactions necessary for subject mastery, and subsequently lead to higher test scores. It still hasn’t got things working perfectly: “We can see some great examples of good tutoring conversations, [but] there's a lot more than we'd like where the students are typing ‘idk,’ ‘idk,’ ‘idk,’” she said, describing the way students reply “I don’t know” in shorthand to the system’s prompts. “If kids are just typing ‘idk’ and not really engaging, we're not going to see great learning outcomes.”

The metrics these studies use to measure impact are also worth considering. Learning outcomes as measured by test scores may tell only part of the story about the impact of AI tools. One small study published last year, for example, showed that students who used large language models (LLMs) like ChatGPT to research a scientific topic appeared to demonstrate weaker critical thinking than those who used regular web searches to undertake the same task, suggesting that it will be important to study potential unintended consequences of the ways in which students use the technology.

Ultimately, DiCerbo said, running large-scale, randomized controlled trials of the technology is what's needed. Yet meaningful tests will be expensive and might require a full academic year to conduct, she added. The fact that the technology is little more than two years old is one of the main reasons more evidence doesn’t exist.

    Big questions ahead

Some of the concerns about using LLMs in educational settings — the propensity for models to hallucinate and provide false information, say, or their potential bias as a result of the data they’re trained on — are ongoing research problems. Currently, safeguarding against such problems requires a human in the loop, which most products being developed don’t include. Nevertheless, companies are attempting to dial down issues like hallucinations and bias through approaches such as careful prompt design, thorough testing and user feedback.

    But — as many experts made clear — such precautions are no substitute for being educated on the strengths and liabilities of AI systems. “There's a responsibility on education to teach the kids how to be critical users and how to account for misinformation, hallucination and bias,” said Trudi Barrow, who left her job as a teacher last year to work with schools and local authorities in the UK on how they adopt new technologies. “I think it's happening, but it's slow.”

    That speaks to a broader set of questions around how schools, districts and governments think about the adoption of AI in education. 

    Currently, adoption of AI tools in schools has been piecemeal, often driven by individual teachers or administrators rather than an overall strategy. Several of the experts who Aventine spoke with raised concerns about how unequal adoption could exacerbate existing inequalities. If companies are able to develop tools that demonstrably and consistently enhance learning without suppressing the development of critical thinking skills, students who gain access to those tools may have an academic advantage over those who don’t. “If a school has invested time and showed their students how to use AI in a beneficial way, every time that AI gets better, their capability gets better just by default,” said Fitzpatrick. 

    There are other governance issues to consider, too. Grass-roots adoption of the technology by single teachers makes it harder to establish rules around how it is used, Barrow said, which means districts will be better off deciding how they will use the technology sooner rather than later. Schools and districts that decide to incorporate AI into their curriculums will also need to think carefully about issues such as when students should be exposed to the technology, data privacy, security and staff training.

    Finally, there is a significant open question about how thoroughly mainstream education should embrace AI and how much change is even possible. Some of the experts who Aventine spoke to were full-throated in their view that education is at a crossroads, and that AI calls for a full rethinking of how we educate and what the goals of education are. “I think we're going to start to get more question marks about orthodox educational practices,” said Fitzpatrick. “What is education for? What is the school system for?” He imagines a future in which students learn more effectively and efficiently through working with AI tutors, freeing both teachers and students to explore more social and creative forms of learning in the classroom.

Others are skeptical that the kind of change Fitzpatrick describes is possible. “Education is very resistant to that kind of disruption,” said DiCerbo. Several experts suggested that the bureaucratic machinery undergirding governments, school districts and even individual schools would prevent any fundamental changes to the ways that children are educated.

    “It's difficult to imagine every school in the country doing something radically different in the next three to five years,” DiCerbo added. Then she paused, adding: “But probably every industry that's ever been transformed has said the same thing before they got transformed.” 

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    Quantum Leaps

    Advances That Matter

Lab-grown chocolate. Gracie Malley / California Cultured

Lab-grown chocolate is coming, and could beat the real thing. In a lab in Davis, California, brown sludge inside a cluster of conical flasks could be the future of chocolate. According to New Scientist, this new substance, cultivated by the food-tech startup California Cultured, may help rescue the chocolate industry, which is facing growing instability due to shortages of and price hikes for cocoa beans. Using a few cells from a cocoa plant, researchers at this startup, as well as other companies, have developed methods to grow cocoa in bioreactors. This liquid suspension of cocoa cells can ultimately — just like conventional beans — be fermented, roasted, and processed into cocoa butter and solids, the essential building blocks of chocolate. The process could supplement increasingly strained global cocoa supplies: Prices for the raw ingredient have tripled in two years due to surging demand and shrinking yields, exacerbated by climate change. Lab-grown cocoa also offers potential environmental advantages: Unlike traditional cultivation, which often contributes to deforestation, this method requires no farmland. And early tests show that cultured cocoa may contain higher levels of polyphenols, compounds associated with chocolate’s health benefits. Challenges remain: Cultured cocoa currently yields less butter — needed for chocolate — than its farm-grown counterpart, and while production costs are lower than for other lab-grown foodstuffs such as meat, they will need to fall further in order to compete with the real thing. Still, if the cost of conventional cocoa continues to climb, parity may become steadily easier to achieve. As for the taste? “It smells like dark chocolate and tastes like it, too, but better — less bitter,” writes New Scientist’s Michael Le Page about chocolate from California Cultured. “For me, there is no doubt that this is the real thing.”

    AI is learning to negotiate peace deals. Negotiating a geopolitical agreement like a peace deal is complex — a high-stakes dance involving optimization, compromise, cultural nuance and intuition. It is perhaps not an endeavor one would expect machines could help with. But according to The Economist, several global initiatives are exploring how AI could shape more effective diplomatic negotiations. One effort, spearheaded by the Center for Strategic and International Studies, a Washington think tank, is attempting to build an AI model to assist in talks around the war in Ukraine. The model is trained on data taken from the outcomes of a tabletop strategy game played by dozens of foreign policy experts, along with the texts of 374 past peace agreements and ceasefires and a corpus of media coverage related to the Russia-Ukraine conflict. Users of the model will provide preferences on issues like territorial control, sovereignty and economic conditions, and the model will then generate drafts of peace proposals, flagging the elements likely to be acceptable, negotiable or contentious for different parties. The idea is to offer diplomats immediate feedback on the viability of various proposals — feedback that traditionally requires back channels and guesswork. Yet experiments by CSIS in which large language models were tasked with solving diplomatic problems reveal their limitations: Some exhibit aggressive tendencies, frequently suggesting the use of force to resolve disputes, while others yield to the opposition too easily. Understanding these quirks could help fine-tune models to walk a more nuanced and sophisticated line, as could the inclusion of concepts like game theory, which may help models predict potential outcomes of their proposals. These AI systems aren’t yet ready to take a seat at the table, but their development suggests that even this most human of processes may one day be augmented by machine intelligence.

Behavioral science can cut carbon, one hospital meal at a time. A little over two years ago, 99 percent of patient meals in New York City’s public hospitals contained meat. Today, fewer than half do, a shift that has cut carbon emissions related to patient meals at these hospitals by 36 percent. The secret? Not fake meat or banned ingredients, but nudge theory, a behavioral science approach that subtly steers people toward decisions that are more likely to benefit them or society. Canary Media reports that patients at these hospitals are first offered a set of meals recommended by the chef, all of which are vegetarian. If none appeal, the patients are offered a second set of options, which are also all vegetarian. Only after that are meat options presented. The result: More than half of patients choose a meal from the first two offerings. Language plays a quiet but important role: Instead of flagging meals as “vegetarian,” “healthy” or “sustainable,” menus use descriptors like “hearty” to focus on enjoyment rather than virtue. Recipes have also been redesigned to reflect the tastes of New York’s diverse population. In addition to reducing emissions caused by raising animals for meat, vegetarian meals are, on average, 59 cents cheaper than their meat-based counterparts, which has helped save the city $1 million over the two years that the project has been running. While lab-grown meat and other innovations have promised a lot and delivered very little in terms of carbon reduction, the New York City hospital approach shows that subtle behavioral nudges could, if applied in other institutions — such as schools, colleges, staff cafeterias or prisons — have a notable impact on the carbon footprint of the food industry.
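
As a rough sanity check on the scale those numbers imply, here is a back-of-envelope sketch in Python; the derived figures are ours, not from the report, and assume all of the savings come from the per-meal price gap:

    # Back-of-envelope check, assuming every dollar saved comes from the
    # 59-cent difference between vegetarian and meat-based meals.
    total_savings = 1_000_000        # dollars saved over two years
    saving_per_meal = 0.59           # dollars saved per vegetarian meal

    meals = total_savings / saving_per_meal   # ~1.7 million meals
    per_day = meals / (2 * 365)               # ~2,300 meals per day

    print(f"{meals:,.0f} meals over two years, roughly {per_day:,.0f} per day")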

    Long Reads

    Magazine and Journal Articles Worthy of Your Time

    Science, Promise and Peril in the Age of AI, from Quanta
    About 24,000 words, or about 1.5 hours, across nine stories

    Artificial intelligence was born out of science, inspired by biology and made real by math and physics. Then it became a powerful research aid, helping identify patterns in data that were impossible for humans alone to spot. Now its role in science is becoming even more complex. It has become, as this package of stories from Quanta explains, “a junior colleague, a partner in creativity, an impressive if unreliable wish-granting genie.” Across nine stories, drawing on almost 100 interviews, this issue describes how artificial intelligence is changing the way researchers and institutions think about and do science, for better and worse. A couple of Aventine favorites among the collection are an exploration of how AI is changing the work of mathematicians — “instead of spending most of their time proving theorems, mathematicians will play the role of critic, translator, conductor, experimentalist” — and a close look at how the technology is also helping researchers to develop new hypotheses and experiments. And if there’s an equivalent of a back-of-the-magazine interview to this collection of pieces, it’s a delightful roundup of perspectives from researchers on the impact AI will have on their fields over the next five to ten years. Scientists, it turns out, are just as worried as anyone else about the impact AI might have on their jobs.

    A dramatic rethink of Parkinson’s offers new hope for treatment, from New Scientist
    2,400 words, or about 10 minutes

A new way to think about Parkinson’s disease is emerging: Rather than a single disorder, it may actually be two biologically distinct subtypes — “brain-first” and “body-first” — with different implications for how it is diagnosed and treated. In the brain-first variant, Parkinson’s appears to begin in the brain, with early symptoms like tremors, stiffness and other classic motor impairments. The body-first form, by contrast, may start in the peripheral nervous system, often marked by subtle and easily overlooked symptoms such as constipation or incontinence. As New Scientist reports, this growing recognition of Parkinson’s diversity could shift the way researchers and clinicians approach the disease, potentially facilitating earlier diagnoses for those with body-first Parkinson’s and more personalized treatments. The problem is that very little research has so far studied the disease through this lens, which means it’s currently unclear what those new approaches for diagnosis and treatment might be. That’s beginning to slowly change: Some researchers are now investigating links between the gut microbiome and Parkinson’s symptoms, for instance. But clinical breakthroughs are likely still years away.

Suddenly Miners Are Tearing Up the Seafloor for Critical Metals, from Scientific American
    4,700 words, or about 19 minutes

In a remote part of Papua New Guinea's Bismarck Sea, a vessel named MV Coco is quietly testing a new form of resource extraction: deep-sea mining. Operated by a company called Magellan, headquartered in Guernsey, one of the Channel Islands, the ship is equipped with a 12-ton hydraulic claw designed to grab metal-rich deposits from the seafloor. The goal is to assess the ocean floor for concentrations of copper, gold and other critical minerals — minerals that are limited in quantity on land and enormously important to modern technologies. These deep-sea reserves are so far untapped. The article explains that operations are technically permitted under a 2011 license from Papua New Guinea’s mining authority, but that local officials seem unaware of the scope of activity. The article also reports that Magellan is excavating more than three times the approved volume of seafloor material for its assessment. Aventine has written about the unresolved tension between the urgent need for rare seafloor minerals to power clean energy and the damage that mining them could do to ocean ecosystems. This article illustrates some of the Wild West practices that will likely proliferate without more research and rulemaking around this potential new industry.
