Dear Aventine Readers,
Last year, a murder victim appeared in an Arizona courtroom: the judge allowed an avatar of the victim, created by his family using generative AI, to address the court and his killer. That’s just one example of how generative AI is transforming the legal system. Meanwhile, new AI tools are competing to streamline legal work, and everyday citizens are using AI to try their own cases in court. Read on to learn what this could mean for the future of the law.
Also in this issue:
Sincerely,
Danielle Mattoon
Executive Director, Aventine
When LLMs Show Up in Court
Practicing law involves a heavy load of paperwork, so it makes sense that generative AI has already made inroads here. A recent survey by a legal software company estimated that 7 out of 10 lawyers now use some form of generative AI in their work, typically for summarizing research, writing documents and even for some legal strategy. The two largest legal research platforms, Westlaw and LexisNexis, have both incorporated AI tools for research, analysis and drafting. For criminal defense attorneys, there are tools like JusticeText and ReductVideo that can quickly make searchable transcriptions of video and audio evidence like surveillance footage, body-cam footage and jailhouse calls, but they have yet to be widely adopted. And the market for AI tools is expected to keep growing: In 2025, a record $5.99 billion was invested in legal tech startups.
Could generative AI someday replace lawyers? Apparently, some lawyers think so. A bill working its way through the New York State Senate would declare generative AI an unauthorized practitioner of law, prohibit chatbots from dispensing legal advice and impose liability on AI companies that violate the law. New York is following the lead of California, which has already imposed restrictions on the use of generative AI and other algorithmic systems in the courts.
We reached out to some experts and practitioners to assess the current state of generative AI and the law: how AI is being used in law offices and courts, the problems AI presents for judges and how it might affect the legal process in the future.
How often are AI hallucinations finding their way into court?
A lot. Across the country, judges have sanctioned and even fined lawyers who submit AI-written briefs containing fake citations. Damien Charlotin, a legal consultant in Paris who writes a Substack called Artificial Authority, has compiled a list of more than 1,030 court cases (as of this writing) that have included some form of AI hallucination, typically made-up citations, though in some instances the citations are legitimate but other elements of the argument are unreliable. His list has become a compulsive, “there but for the grace of God” must-read for attorneys and judges.
Victoria Kolakowski, a superior court judge in Alameda County, California, has seen many examples of AI in her courtroom, including fake citations, especially from self-represented litigants. “They come in, cite a case and when you look at it, the case isn’t real,” she said. “They say, ‘I believe the AI.’” The good news, both for litigants and judges, is that newer versions of LLMs seem to produce fewer hallucinations.
Then there’s fake video evidence. Last May, Kolakowski was in her office reviewing submissions in a tenant-property manager dispute, including videotaped testimony from a neighbor of the tenant supporting the tenant’s claims against the landlord. The video was jittery, the neighbor’s movements seemed unnatural, and the audio and video didn’t match.
“My research attorney and I looked and went, Is this right? Is this what it looks like?” she remembered. What it looked like was a particularly poorly executed AI video. “What do we do about this?” She ended up dismissing the case.
How can courts verify AI evidence?
Edward Cheng, a professor at the Vanderbilt University Law School and co-author of the five-volume guide “Modern Scientific Evidence,” told Aventine that the deepfake problem has a parallel in the introduction of photography in courts.
“Historically, the legal system had a really hard time dealing with photographic evidence [because] photographs are witnesses but they're not cross-examinable,” he said. The fix in those days was to require “a human witness to say that the photograph [was] a fair and accurate representation of whatever it purported to be.”
Cheng said that AI-generated evidence is going to require similar backstopping. “Pretty soon — or maybe it's already — we should not believe any photograph in court, unless you have a human witness backing it or you have some kind of proof,” he said. “You can't trust technology anymore. You have to go back to the human witness. So we come full circle on this issue.”
Can AI help address the justice gap?
The term “justice gap” (sometimes called “the access to justice gap”) refers to the vast disparity in legal resources available to low-income and high-income litigants, especially in civil courts like housing, bankruptcy and family. According to a 2022 report by the Legal Services Corporation (which helps fund Legal Aid clinics), 92 percent of the civil legal problems faced by low-income Americans received little or no help from lawyers.
The great hope is that AI will ease the caseload for Legal Aid lawyers, public defenders and advocacy lawyers. Dyane O’Leary, the director of the Legal Innovation and Technology Center at the Suffolk University Law School in Boston, told Aventine that this hasn’t happened yet. Resources are scarce for those lawyers and records for state courts, unlike those of federal courts, are often fragmentary or unavailable online. “AI is not yet in those workflows, as I understand it,” she said. “Some are maybe getting updated Lexis and Westlaw features but again, the AI features in these tools are really expensive and courthouses and public entities move slower in terms of budgeting and adoption.”
Which doesn’t mean individuals aren’t taking advantage of the technology. There are no hard numbers, but legal observers say there is an increase in AI-written documents showing up in court, both from lawyers and from self-represented litigants (called pro se litigants) operating without an attorney. Lynn White, who faced an eviction notice from her trailer park in Los Angeles, told NBC News that she used the free version of ChatGPT and a paid version of Perplexity to ask for a deferral under a pandemic-era law. She won the deferral and told NBC that ChatGPT even identified potential errors in a judge’s ruling.
“I can’t overemphasize the usefulness of AI in my case,” she said. “I never, ever, ever, ever could have won this appeal without AI.”
Housing issues come up so frequently in civil courts that one lawyer is using AI to streamline queries about tenant rights. Sateesh Nori, a senior research fellow at the Center on Civil Justice at New York University School of Law who represented tenants for 23 years, said that there just aren’t enough lawyers to help clients, so in 2025 — working with an Australian company called Joseph Legal — he released an AI chatbot called Roxanne.AI to help tenants get repairs.
“How do I get my heat on? How do I get my elevator fixed? What happens if there's roaches?” asked Nori rhetorically, adding that many of his clients don’t even realize that these are legal questions. “Many people don't know how to get the answers and if they don't have the information, they can't act to remedy those problems.”
Roxanne.AI uses what is called retrieval-augmented generation (RAG), a technique that grounds a chatbot’s answers in a specific set of documents — in this case, the jumble of housing regulations and advice that can be found on New York City government websites, Legal Aid sites and state statutes.
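For technically minded readers, the core of the RAG pattern can be sketched in a few lines of Python. Everything below is invented for illustration — the documents, the word-overlap scoring and the prompt format are toys, not how Roxanne.AI actually works; real systems use far more sophisticated retrieval over much larger corpora.

```python
from collections import Counter

# A stand-in corpus: invented snippets of housing guidance, not real rules.
CORPUS = [
    "Landlords must provide heat between October 1 and May 31.",
    "Tenants may request repairs in writing; landlords must respond within 30 days.",
    "Security deposits must be returned within 14 days of move-out.",
]

def tokenize(text):
    # Lowercase and strip basic punctuation so words match across documents.
    return [w.strip(".,;?").lower() for w in text.split()]

def score(query, doc):
    # Crude relevance: count how many distinct query words appear in the doc.
    query_words, doc_counts = set(tokenize(query)), Counter(tokenize(doc))
    return sum(doc_counts[w] for w in query_words)

def retrieve(query, k=1):
    # Rank the corpus by overlap with the query; return the top k passages.
    ranked = sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # The retrieved passages, not the model's training data, become the
    # context the language model is told to answer from.
    context = "\n".join(retrieve(query))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I get my heat turned on?"))
```

The key design point is the restriction in the prompt: by narrowing the model to retrieved passages, a RAG system reduces the room for the kind of hallucinated citations described earlier.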
Roxanne.AI (which does not collect personal data) has been active for a year and Nori estimates there have been more than 1,000 users. Nori also worked on another RAG called Depositron, which helps tenants get security deposits back from landlords. Both Roxanne.AI and Depositron, he said, are first steps toward a much bigger vision.
“There is this bigger picture moonshot idea, which is to give people legal answers to any question they may have anywhere they might be. Think about it like Google for legal,” he said. “A clean, open-search bar that understands context and geography and can help people navigate simple or even complex legal problems. And in the back end, it would be lawyers and legal professionals, law students, law firms, that would make sure that the right information is built into it.”
Could generative AI affect courtroom decisions?
Generative AI can create a compelling argument almost instantly, convincingly edit images to include only some people or actions (the clumsy fake video described earlier notwithstanding), clean up scratchy audio evidence and transform grainy surveillance video into something resembling a scene from a movie. What should be allowed in court?
Last May, an unusual victim impact statement was introduced in the sentencing phase of a manslaughter trial. Christopher Pelkey, a 37-year-old Army veteran, had been shot and killed during a road-rage incident. The family submitted an AI-created video statement from the deceased Mr. Pelkey, addressed to the man who had killed him. The judge allowed it.
“It is a shame we encountered each other that day in those circumstances,” said the image of Mr. Pelkey, appearing in a baseball cap and combat-green hoodie. “In another life, we probably could have been friends.” Mr. Pelkey’s avatar also thanked the judge, who said he “loved” the video and sentenced the attacker to the maximum sentence of 10 and a half years. This was the sentence Mr. Pelkey’s family had requested, more than the nine years recommended by the prosecution.
Some lawyers and observers disagreed with allowing the video. The use of persuasive AI in court could open the door to, for example, a detailed but very one-sided AI recreation of a car accident based on surveillance tape. During investigations, police could use AI-generated audio to convince suspects that someone had accused them of the crime.
“There are evidentiary rules governing what you can and can't say and claims that you can or can't make. And if something is not in evidence, there are limits to how much you can hyperbolize it,” said Mitha Nandagopalan, a lawyer with the Innocence Project who oversees the organization’s strategy on emerging surveillance technologies. “That's because our legal system is, at least on paper, designed to try to ensure that outcomes have a close relationship to ground truth.
“We risk eroding that if things like generative AI videos of a hypothetical situation that never actually occurred or a statement that was never actually given are used. Not just in front of juries or judges necessarily, but in interrogations or in interviews with witnesses.”
Will AI mean fewer lawyers?
Dario Amodei, the chief executive of Anthropic, has said that generative AI could eliminate half of all entry-level white-collar jobs within five years, including jobs for lawyers. So far, there are no signs that lawyers are losing their jobs in large numbers. In 2025, the American Bar Association reported a record number of active lawyers in the US — just over 1.37 million, up from 1.35 million in 2024.
That said, many big firms have pulled back on hiring in the past few years, and the proportion of associates within firms is lower. Is it all due to AI? Probably not. There has been a wave of mergers in the industry, along with worries about profitability, that would make firms conservative about hiring.
Law students are openly worried about the future of the field. Dyane O’Leary said that, for the first time, she is sensing some job anxiety from her students. “You know, I want to say I don’t. But I'll be honest. This year in both the electives I've taught upper-level students, we're talking about how to use this technology and they're really experimenting and frankly being blown away by what it's doing,” she said. “They are aware [of possible job replacement], and I think for the most part, they do feel nervous.”
The shift to more AI isn’t necessarily bad news for paralegals, however, even if big firms are cutting back on hiring new lawyers. The largest technological disruption in the legal profession until now was the introduction of the database services Westlaw and LexisNexis in the 1970s, which didn’t result in unemployment for paralegals. Instead, it rescued them from the law-library stacks and allowed them to perform higher-value work like preparing for trials and interviewing witnesses.
If AI is doing the job of lawyers, should it receive the same protections?
Two recent court cases addressed the legal status of AI in the courts and reached divergent rulings. In Heppner v. US, a New York judge ruled that the defendant’s use of Claude was not protected under attorney-client privilege. In Warner v. Galbarco, a Michigan court ruled that a litigant’s use of ChatGPT was protected under the work-product doctrine, which shields trial preparation from opposing parties.
The issue of whether LLMs can dispense legal advice will certainly continue to be fought over in the courts. None of the experts we talked to believed that chatbots have the full rights of a human attorney. But Sateesh Nori came close.
“I’m not there yet,” he said. “In the beginning, I think a lot of us in this space were thinking, ‘how do we make lawyers better? Let's give tools to lawyers that make them work a little bit faster.’ But there's never going to be enough lawyers. So why not say for an autonomous chatbot that's acting as a lawyer, any communications with that entity are privileged. I don't think it's a stretch.”
Listen To Our Podcast
Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.
Advances That Matter
Drone warfare has reshaped frontlines beyond recognition. In fact the Russia-Ukraine war no longer has a traditional frontline. It has been replaced, the Financial Times reports, with what some military officials now refer to as a “kill zone” — a 12-mile strip of land where “anything that moves can be instantly targeted and destroyed” by drones from either side. Ukraine claims that its drones now destroy more than 80 percent of enemy targets, fundamentally reshaping how war is fought. (It has also honed defensive weapons and techniques — hardware and experience that is now being sought by the US and Gulf states to use against Iranian drones.) Vehicles are used sparingly in the kill zone, for fear of aerial attack. Troops move along roads only under cover of poor weather — snow, fog, heavy rain — when visibility is too limited for drone operators. Ukraine has shrouded hundreds of miles of roads in protective netting to create makeshift drone-proof corridors, and high-priority vehicles are encased in metal anti-drone cages and fitted with electronic jamming systems. But even jamming is no longer enough: A new generation of fiber-optic drones is tethered to controllers by hair-thin cables stretching more than 25 miles, creating a physical data link immune to electronic interference. Towns near the front are increasingly enlaced in these spiderweb-like strands. Ground warfare is changing too, with remotely operated vehicles deployed to resupply troops and evacuate wounded soldiers. Behind the scenes, as New Scientist reports in a dispatch from Kyiv, a rapidly evolving drone industry inside Ukraine is constantly bringing new technology to the battlefield. New six-wheeled remote-controlled all-terrain vehicles that cost around $55,000 can keep moving even after losing two wheels, for instance, while vertical-takeoff drones can pivot into horizontal flight to reach speeds of 190 miles per hour and function as low-cost precision missiles. 
One developer told the magazine that meaningful upgrades now arrive every few months — each iteration quickly rendering previous tactics obsolete.
A new geothermal plant doubles as a lithium extraction facility. On an industrial estate outside Redruth in Cornwall, UK — wedged between a crane-hire depot and an interiors showroom — an unusual clean-energy experiment is underway. Built by Geothermal Engineering Ltd, the UK’s first commercial geothermal power plant pumps water through hot granite nearly three miles beneath the surface. There, temperatures reach around 370°F. The heated water is brought back up to drive turbines and generate electricity. But power generation is only part of the story. As The Guardian reports, the geology of this site means that the superheated liquid becomes enriched with lithium as it circulates underground. Once back at the surface, the lithium can be extracted before the water is reinjected into the rock to repeat the cycle. The facility, which began operations at the end of February, is small for now. It generates around 3 megawatts of electricity — enough to power roughly 10,000 homes — and will produce about 100 tonnes of lithium per year, sufficient for batteries for around 2,000 electric vehicles. But the company plans to scale production to as much as 18,000 tonnes annually over the next decade. The Financial Times reports that the company says its lithium will compete on price with material produced in China. That will be important, since battery manufacturers tend to prioritize price over origin. Not all geothermal sites can produce lithium, but similar geothermal-lithium projects are being explored elsewhere, including at California’s Salton Sea and in Germany’s Rhine Valley. These remain in the development stages, but predictions suggest that the Salton Sea’s geothermal waters could produce as much as 600,000 tons of lithium per year.
Radioactive rhinos and tree microphones are transforming wildlife protection. Illegal wildlife trafficking is the world’s fourth most lucrative criminal enterprise, after drugs, weapons and human trafficking, according to Interpol. As MIT Technology Review reports, a growing range of high-tech tools is helping authorities crack down on the practice. One approach involves tagging rhinos with tiny amounts of radioactive isotopes. The doses are harmless to the animals but emit detectable signals, allowing customs officials to identify smuggled horns inside cargo containers or vehicles using radiation scanners. Another is low-cost, solar-powered microphones dotted throughout rainforests. These devices can identify which species are present by analyzing animal calls, as well as flagging threats such as chainsaws or gunshots in real time. The data feeds into AI systems trained to recognize patterns across vast areas that would otherwise be impossible to monitor. There are other approaches, too: Rapid DNA testing to determine almost instantly if a dung sample belongs to a protected species; AI-enabled satellite imagery that spots the telltale signs of illegal fishing fleets; handheld scanners that can detect subtle physiological differences revealing whether an animal is wild or farmed. Together, these innovations are reshaping wildlife enforcement: Interpol says such techniques helped authorities seize a record 30,000 live animals across 134 countries in 2025.
Magazine and Journal Articles Worth Your Time
Beyond AlphaFold, from The Institute for Progress
4,800 words, or about 20 minutes
Google DeepMind’s AlphaFold was so transformative in predicting protein structures that it earned its creators a Nobel Prize. How do we build more breakthroughs like it? This piece from the Institute for Progress, a think tank focused on innovation policy, argues that replicating AlphaFold’s success will require deliberate structural changes to how scientists work with AI. First: building the right datasets. AlphaFold succeeded in part because it was trained on vast, carefully curated protein databases. Many other scientific fields lack comparable datasets. Building large, uniformly formatted repositories — potentially gathered using automated laboratory systems that use machines to perform experiments or take readings — could create the raw material for similarly powerful AI models. Second: talent. The most successful scientific AI systems have been developed by researchers with deep expertise in both machine learning and the relevant scientific domain, a rare combination. Expanding that overlap through targeted training programs and interdisciplinary institutions could increase the odds of future breakthroughs. Third: more resources. Training frontier AI models is enormously expensive and access to high-performance computing remains a bottleneck for academic researchers. The essay argues that government investment in shareable computing infrastructure could expand the scope of who is able to build models. None of these measures guarantees the next AlphaFold. But if we want to leverage AI for more scientific breakthroughs, the conditions that made AlphaFold possible will need to be built deliberately across other domains.
Saving The Life We Cannot See, from Noema
3,700 words, or about 15 minutes
Conserving plants and animals has long been central to environmentalism. Conserving microbes? Not so much. This story explores how some scientists want to change that. Microbes are easy to ignore precisely because they’re invisible, yet they produce roughly half of Earth’s oxygen, drive carbon and nutrient cycles in soils and oceans and make up the overwhelming majority of living biomass outside the plant kingdom. At the same time, they are increasingly disrupted by industrial agriculture, processed diets, pollution and climate change’s effects on glaciers, seas and permafrost. A small but growing movement is pushing for microbial conservation. The Microbiota Vault Initiative, launched in 2025, aims to preserve microbial samples in secure storage, much like seed banks do for plants. Meanwhile, the International Union for Conservation of Nature has created a Microbial Conservation Specialist Group, with ambitions to identify microbial conservation hotspots and eventually incorporate microbes into the Red List of threatened species. But the idea is controversial. Some scientists argue that what matters is not preserving specific microbial species, but preserving their functions — and that if certain microbes disappear, others will proliferate to fill their roles. Others worry that expanding conservation efforts to the microscopic realm risks diluting already stretched resources for protecting plants and animals. It is not yet clear where the balance lies, but the essay makes a compelling argument for including microbes in such discussions.
The NIMBY problem, from Works in Progress
4,100 words, or about 17 minutes
Across the US and much of Europe, local land-use rules have dramatically restricted housing supply, pushing prices higher. Reformers have tried to solve this by lobbying upward, persuading state or national governments to override local zoning restrictions. But those efforts have produced surprisingly little new housing. This essay argues that this is because people living near new developments bear costs — noise, congestion, construction disruption and changes to their neighborhood — but receive none of the financial benefits. Given that imbalance, opposition is rational. The article suggests a different approach: Allow residents to negotiate directly with developers. If locals share in the economic upside, they may be more willing to accept new housing. Versions of this idea already exist in Israel, the UK, Japan, South Korea and Taiwan. The trick is finding the right scale: involving enough residents to account for the costs of development, while keeping the group small enough that individuals can meaningfully share in the benefits. The broader point is that NIMBYism may not be a fixed attitude, but one that could shift dramatically if the rules of the game were changed.
Labor market impacts of AI, from Anthropic, and Building Pro-Worker Artificial Intelligence, from the National Bureau of Economic Research
3,000 words, or about 13 minutes, and 15,500 words, or about an hour
AI’s impact on the labor market is only just getting started. At least that’s one of the key conclusions of a new analysis from Anthropic, which compares the amount of work large language models could theoretically perform with how widely they are being used in practice. So far, adoption looks limited across most occupations, with software development and administration standing out as the major, and unsurprising, exceptions. The study also finds little clear evidence of major labor-market disruption attributable to AI, though there are early signs that hiring for entry-level roles may be slowing. The likely explanation is that regulations, worker resistance and the difficulty of integrating new tools into existing workflows all create friction that slows adoption. That delay may create an opportunity. In a new paper from the National Bureau of Economic Research, economists Daron Acemoglu, David Autor and Simon Johnson outline a vision for “pro-worker AI.” Instead of replacing labor, they argue, AI systems could be designed to amplify human expertise, helping workers perform more complex tasks and acquire new skills faster. One example: aircraft technicians equipped with AI tools that guide them through sophisticated repairs or train them in emerging specialties such as commercial spaceflight maintenance. The reason such tools remain rare today, the authors suggest, is that companies often find it more profitable to automate jobs outright than to invest in technologies that make workers more productive. Changing that, they argue, would require policy intervention. The paper proposes nine ideas for that, including tax incentives for labor-augmenting technologies and deploying pro-worker AI in public-sector domains in which governments have greater influence. But it may be a race against the clock. As the gap between what AI could theoretically automate and what it actually does narrows, the window for shaping how AI affects work will narrow with it.