Newsletter / Issue No. 10

Image by Ian Lyman/Midjourney.

March 2024

Dear Aventine Readers, 

As increasingly sophisticated AI models become embedded in aspects of our everyday lives, like web searching and image generation, the energy required to run these systems is soaring, putting pressure on already taxed power grids. There’s no turning the clock back; the race is on to make the most of these powerful new technologies. But can that be done with energy efficiency in mind? This month we decided to find out, and the answer is a conditional yes. Read on to find out how artificial intelligence could be far more energy efficient than it is, and what companies are doing about it. 

Also in this issue: Five experts weigh in on what it will take for lab-grown meat to succeed; a new AI-powered throat patch that restores people’s voices; and the gains that could be made by tearing down and rebuilding old wind farms. 

Thanks for reading! 

Danielle Mattoon 
Executive Director, Aventine


The Big Idea

What Can We Do About AI’s Insatiable Thirst for Energy?

The headlines are startling. “A.I. Could Soon Need as Much Electricity as an Entire Country,” read one in The New York Times. The MIT Technology Review maintained that “Making an image with generative A.I. uses as much energy as charging your phone.” A recent New Yorker comment piece was titled simply: “The Obscene Energy Demands of A.I.”

Have no doubt: These articles are onto something. We’ve known for years that artificial intelligence uses substantial amounts of energy. But the widespread adoption of new generative AI systems has led to soaring energy use by the data centers powering them. The effect has been so dramatic that the use of artificial intelligence is driving up electricity use in the U.S. after a decades-long period of flat demand. That, in turn, is raising concerns about what the increased demand will mean for the climate. Senior figures in the industry admit that AI’s energy demands are an issue: At the World Economic Forum meeting in Davos earlier this year, OpenAI CEO Sam Altman said that the world would need a “breakthrough” in energy production, such as nuclear fusion, to meet the power needs of AI in the future.

Yet the situation is more complex than the headlines suggest. Experts explained to Aventine that it’s hard to get a reliable handle on the energy demands of AI because today’s best estimates of AI energy consumption are just that: estimates based on assumptions and predictions, due to a lack of transparency among the big tech companies. They pointed out that there are already numerous ways to drive down the power consumption of AI that aren’t being used and raised concerns about the lack of incentives to encourage companies to adopt such techniques. Finally, they all agreed that regulation will be critical to putting the brakes on the sector’s energy use.

“People are just pretending this isn't an issue, that we will techno-solve our way out of the problem, that no matter what the costs of AI are, they're worth it because it will be such a valuable technology,” wrote Sasha Luccioni, an artificial intelligence researcher and climate lead at the AI community Hugging Face, in emails to Aventine. “But if it accelerates the climate crisis even further, we have to make informed choices in terms of AI training and deployment while keeping resource usage in mind.”

What drives AI power consumption anyway?

The sophistication of generative AI software like OpenAI’s GPT-4, Google’s Gemini and Anthropic’s Claude is made possible by underlying models that are exponentially more complex than earlier generations of AI, both in terms of their internal structure — measured by the number of parameters (the variables that can be adjusted to optimize a model’s responses) — and the amount of data used to train them. Some context: OpenAI’s GPT-1 model, released in 2018, had 117 million parameters and was trained on 4.5 GB of data, or about what’s in 7,000 individual books. GPT-3, released in 2020, had 175 billion parameters and used 570 GB of data. (When GPT-4 came along the company stopped disclosing these metrics, though it reportedly has 1.7 trillion parameters.)

Each of the billions or trillions of parameters in these models has to be adjusted during training; the more data a model is supplied with, the more times those parameters must be adjusted, an energy-intensive process. “If you have a bigger model, it performs better, but that means more computational resources and more energy consumption in the background,” said Alex de Vries, a PhD candidate at Vrije Universiteit Amsterdam and founder of Digiconomist, a research and consulting company and a digital-sustainability blog. (OpenAI used to reveal the computing resources its models required during training, but stopped disclosing figures for its large language models in 2020. At the time of writing, it had not responded to a request for comment on why it does not currently publish such details.) 
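To get a feel for how model size and data volume compound, a widely used rule of thumb (an assumption here, not a figure disclosed by any AI company) is that training a large model takes roughly six floating-point operations per parameter per training token. The sketch below, using illustrative hardware figures, turns that into a back-of-envelope energy estimate.

```python
# Back-of-envelope sketch, not a disclosed figure: training compute is often
# approximated as 6 * parameters * training tokens floating-point operations.
# The throughput, power draw and token count below are illustrative assumptions.

def training_energy_kwh(params, tokens,
                        sustained_flops=1e14,  # assumed effective throughput per accelerator
                        watts=1000):           # assumed power per accelerator, incl. server overhead
    flops = 6 * params * tokens                # approximate total training compute
    accelerator_seconds = flops / sustained_flops
    joules = accelerator_seconds * watts
    return joules / 3.6e6                      # joules -> kilowatt-hours

# GPT-3-scale inputs: 175 billion parameters, roughly 300 billion training tokens (reported).
print(f"{training_energy_kwh(175e9, 300e9):,.0f} kWh")  # about 875,000 kWh with these assumptions
```

With those assumed inputs the answer lands in the high hundreds of megawatt-hours, the same order of magnitude as independent estimates for training GPT-3-scale models.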

But energy use isn’t confined to the training stage. Energy use related to inference — the technical name for using an artificial intelligence model — is also far greater for generative AI systems than earlier ones because they’re widely used by the public. “Training is a one-time thing,” said Vijay Gadepally, a senior scientist at the Massachusetts Institute of Technology Lincoln Laboratory, who specializes in environmentally friendly computing and is also the chief technology officer of a startup called Radium that focuses on cloud computing for AI workloads. “It's the inference that is a persistent service that keeps running.” What this means is that every time a user prompts ChatGPT or Midjourney, they are making use of the trained algorithm, and if a platform has hundreds of millions of users, or if a company adds generative AI capabilities to, say, every web search it processes, the energy requirements rapidly escalate.

So how much energy do these models actually use?

While we know that AI is definitely using plenty of energy, it’s harder to know exactly how much. “Companies are not at all transparent about the energy consumption of the most popular models,” said Luccioni. “There's a lot of wild rumours but not a lot of evidence.” At the time of writing, OpenAI, Microsoft, Google, DeepMind and Anthropic had not responded to a request for comment on their sharing of energy use data. Because OpenAI and Google, the companies behind arguably the most widely used large language models deployed right now, keep their models locked away and do not publish details about energy consumption, researchers have been attempting to estimate that consumption in several ways. 

Some researchers, including Luccioni, have turned to open-source generative AI models, whose performance can be measured directly. One study published in late 2022 and not yet peer reviewed, on which Luccioni is a lead author, looked at BLOOM, a 176-billion-parameter language model akin to GPT-3. It showed that the 118-day training phase required about 433 megawatt-hours of electricity, or about the same amount used by 40 U.S. homes annually. Luccioni was also part of a team that evaluated how much energy inference requires. The team looked at 88 different models and ran each task 1,000 times to measure typical energy use. The demands varied widely, from around 0.047 kilowatt-hours per 1,000 text-generation queries to as much as 2.907 kWh per 1,000 image generations. They found that generating a single image with the most energy-intensive models took about as much energy as fully charging a typical cell phone. (This finding was the source of the MIT Technology Review headline.)
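Those per-query figures look tiny until they are multiplied by consumer-scale traffic. The sketch below does that multiplication; the daily query volume is a purely hypothetical assumption, not a number reported by any company or by the study.

```python
# Minimal sketch: scaling the study's per-1,000-query energy figures to a
# hypothetical service. The traffic figure is an assumption for illustration.

KWH_PER_1000_TEXT_QUERIES = 0.047    # mean reported for text generation
KWH_PER_1000_IMAGES = 2.907          # mean reported for image generation

def daily_energy_kwh(queries_per_day, kwh_per_1000):
    return queries_per_day / 1000 * kwh_per_1000

# Suppose, purely hypothetically, a service handles 100 million text prompts a day:
print(f"{daily_energy_kwh(100e6, KWH_PER_1000_TEXT_QUERIES):,.0f} kWh/day")  # ~4,700 kWh/day
```

At that assumed volume, text generation alone would draw several megawatt-hours a day, before counting image generation or the cost of training the model in the first place.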

It’s also possible to take a top-down view. De Vries, for instance, has projected energy use by Nvidia GPUs, the chips used to carry out the majority of AI calculations (the company holds 95 percent market share in AI chips). Based on Nvidia’s projections for how many chips it will sell in the coming years, de Vries estimated that the AI industry could consume up to 134 terawatt-hours of electricity each year by 2027 — about the same amount that countries such as Argentina, the Netherlands or Sweden use annually. (This is the finding that the New York Times headline was based on.) Estimates by Boston Consulting Group, meanwhile, suggest that generative AI alone could lead to increased energy demand in the U.S. of between 15 and 70 terawatt-hours per year by 2030, or somewhere in the region of New York City’s annual energy consumption.
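The shape of such a top-down estimate is simple: an assumed number of AI servers, multiplied by an assumed power draw, multiplied by the hours in a year. The inputs below are placeholders chosen only to show the arithmetic, not de Vries's published figures.

```python
# Shape of a top-down estimate: fleet size x power draw x hours per year.
# Both inputs are illustrative assumptions, not de Vries's published inputs.

def fleet_twh_per_year(num_servers, kw_per_server, utilization=1.0):
    hours_per_year = 8760
    return num_servers * kw_per_server * hours_per_year * utilization / 1e9  # kWh -> TWh

# e.g. 1.5 million AI servers drawing 10 kW each, running around the clock:
print(f"{fleet_twh_per_year(1_500_000, 10):.0f} TWh/year")  # ~131 TWh/year
```

With those placeholder inputs the total lands near 130 terawatt-hours a year, which is the order of magnitude behind the country-sized comparisons in the headlines.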

None of these approaches are perfect. Some are based on huge assumptions, others deal with AI models that aren’t used widely. And none of them help us understand the full energy impact of now-common AI tools like ChatGPT because we have so little insight into how much they are being used. 

Yet there is utility in the efforts. “It has started the [public] conversation,” said Gadepally.

Are there solutions?

While artificial intelligence is always going to require energy, there’s a growing field of so-called green AI that is offering some possible pathways for AI to operate with a lighter energy footprint. One recent paper, co-authored by Roberto Verdecchia, an assistant professor at the University of Florence, surveyed a number of new approaches and found that they offer potential energy savings of between 13 percent and 115 percent, with more than half reporting energy savings of at least 50 percent. 

For the most part, says Gadepally, these approaches fall into three categories: making adjustments to software so that it runs more efficiently; taking steps to ensure that the hardware is being used most effectively; and ensuring that data centers are built with sustainability in mind.

From a software perspective, two important adjustments can increase efficiency in today’s AI systems: the complexity of the models can be dialed back and the amount of data used to train them can be reduced. Right now, both complexity and data levels are increasing with little end in sight; DeepMind CEO Demis Hassabis recently told Wired there would be “no let up in the scaling” of the most complex models. But according to Verdecchia, there may be ways to dial back the power use. Verdecchia described what’s known as a data-centric approach, where “you have a curated, smaller data set of high quality” that could allow companies to “achieve the same results by training a lot less.” But such datasets typically require much more human intervention — which in turn requires great time and expense — to ensure that they’re of sufficient quality. 
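As an illustration of the data-centric idea, the sketch below filters a corpus down to its highest-quality documents before training. The scoring heuristic is a toy stand-in (real curation leans heavily on human review), but it shows where the energy saving comes from: fewer examples mean fewer parameter updates.

```python
# Illustrative sketch of a data-centric approach: keep only high-quality
# documents so a model can be trained on far less data. The quality heuristic
# is a placeholder; real curation pipelines rely heavily on human review.

def curate(corpus, quality_score, threshold=0.5):
    """Return only the documents whose quality score clears the threshold."""
    return [doc for doc in corpus if quality_score(doc) >= threshold]

def crude_quality(doc):
    """Toy heuristic: penalize very short or highly repetitive documents."""
    words = doc.split()
    if len(words) < 5:
        return 0.0
    return len(set(words)) / len(words)   # lexical diversity in [0, 1]

corpus = [
    "The committee reviewed the evidence and published a detailed report.",
    "buy now buy now buy now buy now buy now",
    "ok",
]
print(curate(corpus, crude_quality))  # keeps only the first document
```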

Another approach, which might require a larger shift in mindset, is simply to allow models to be a little less accurate, requiring either fewer parameters or less training data. The longer an algorithm is trained, the smaller the performance gains become, explained Verdecchia. Some studies, including one co-authored by Gadepally, have shown that by measuring the rate at which a model is learning, AI practitioners can determine the optimal time to stop the training process — one that maximizes efficiency gains and minimizes losses in accuracy. This, according to Gadepally’s research, can lead to efficiency gains of up to 90 percent. 
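A minimal version of that idea, sketched below under the assumption of a generic training loop (this is not the researchers' actual implementation), is to stop training once each additional epoch stops buying a meaningful drop in validation loss.

```python
# Minimal early-stopping sketch in the spirit of the approach described:
# halt training once the marginal improvement per epoch falls below a cutoff.
# The loop structure and thresholds are illustrative assumptions.

def train_with_early_stop(train_one_epoch, evaluate, max_epochs=100,
                          min_improvement=0.001):
    best_loss = float("inf")
    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = evaluate()
        if best_loss - val_loss < min_improvement:
            print(f"Stopping at epoch {epoch}: diminishing returns.")
            break
        best_loss = val_loss
    return best_loss

# Toy usage with a synthetic validation-loss curve that flattens out:
losses = iter([1.0, 0.6, 0.4, 0.3, 0.29, 0.285, 0.2849])
train_with_early_stop(train_one_epoch=lambda: None, evaluate=lambda: next(losses))
```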

On the hardware side, it’s possible to adjust how computational power is used to drive efficiency gains. Gadepally has been involved in a number of projects at MIT’s Lincoln Laboratory Supercomputing Center that have investigated the potential impact of these kinds of changes. One approach is to limit the amount of power that the chips performing computation can use. In experiments training Google’s BERT language model, Lincoln Laboratory researchers showed that by cutting the maximum power drawn by a chip from 250 to 150 watts, the training process used almost 13 percent less energy — the trade-off being that it took about 8.5 percent longer. (The impact was significant enough that Lincoln Laboratory is now implementing the approach in its AI work.) It’s also possible to think about when and where power-hungry tasks are scheduled: AI models could, for instance, be trained at night or over the winter, when conditions are colder and data centers run more efficiently. Inference tasks, meanwhile, could be routed to less precise, more energy-efficient models when conditions make computation particularly carbon-intensive — say, during a heat wave — while the full model is used the rest of the time.
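Here is a minimal sketch of that last idea: route requests to a lighter model whenever grid carbon intensity crosses a threshold. The threshold, the model names and the idea of a single cutoff are illustrative assumptions rather than anything described by the researchers.

```python
# Illustrative sketch of carbon-aware inference routing: serve requests with a
# smaller, cheaper model when grid power is unusually carbon-intensive (say,
# during a heat wave) and use the full model the rest of the time. The cutoff
# and model names are assumptions for illustration.
# (On the hardware side, a GPU power cap of the kind described above can
# typically be set with a command such as `nvidia-smi -pl 150`.)

CARBON_THRESHOLD_G_PER_KWH = 400   # illustrative cutoff for "dirty" grid power

def pick_model(grid_carbon_intensity, full_model, light_model):
    """Route to the energy-saving model when grid carbon intensity is high."""
    if grid_carbon_intensity > CARBON_THRESHOLD_G_PER_KWH:
        return light_model
    return full_model

# Hypothetical usage during a heat wave, with carbon-heavy peaker plants online:
model = pick_model(grid_carbon_intensity=520,
                   full_model="full-precision-model",
                   light_model="distilled-low-power-model")
print(model)  # -> distilled-low-power-model
```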

At the brick-and-mortar level, AI-based companies could insist that the data centers they use are built with sustainability in mind and powered by renewable energy. But, said Gadepally, “if we thought the visibility on the models was limited, the visibility on the hardware is far more limited.”

“There are a lot of solutions” proposed to save energy, said Verdecchia, “and they all work quite well.” But, he added, the biggest surprise from his review is that “Nobody is using them.” 

What are companies doing?

We don’t have an accurate understanding of how much computing power companies are using, let alone whether the processes they deploy are optimized to save energy. Through Microsoft, OpenAI has access to huge quantities of computing power, as does Google through the data centers it operates. While it may be cost-effective for these companies to implement energy-saving approaches, the likely reality is that such approaches are a secondary concern to building the most advanced algorithms possible. “If you're not in a place where your margins are important, or you're just worried about growing revenues, I don't think your incentives necessarily line up with [energy saving],” said Gadepally.

All of the experts Aventine spoke with agreed that some form of regulation seemed almost inevitable as a means of clamping down on energy use by AI companies. But there was disagreement among Aventine’s sources on how successful any legislation might be in promoting action among the makers of AI, given the complexity of the field being regulated, the savviness of the companies that would be regulated and the time it takes for legislation to work its way through government. 

One bill, endorsed by Hugging Face, Luccioni’s employer, has already been introduced in the U.S. It would direct the National Institute of Standards and Technology to develop methods to measure and report the climate impact of AI, but would only call on companies to report voluntarily. Until companies report the energy used to develop and deploy AI, it will be impossible to accurately assess the impact of the industry, or for individuals and organizations to make informed decisions about how they use AI, given its energy requirements.

Without more rigorous oversight, it’s unclear how the status quo will change. “I just don't know what else we can do,” said Verdecchia.

Quantum Leaps

Advances That Matter

Brain light / Alamy Stock Photo

CRISPR-edited pork is en route to your plate. Farmers may soon benefit from gene-edited pigs that are immune to a disease called porcine reproductive and respiratory syndrome (PRRS), a condition that costs the farming industry as much as $2.7 billion globally each year. The advance could make Genus, the U.K.-based company behind the work, the first large-scale producer of CRISPR-edited animals for meat production. PRRS typically kills all suckling piglets that it infects. Genus is using CRISPR to delete a small section of DNA that allows the virus to bind to a protein and cause infection. That edit has been known and understood for eight years, but Genus has shown in a paper published in The CRISPR Journal that it has been able to expand the technique to create PRRS-resistant pigs at “industrial scale.” The company’s first CRISPR-edited pigs were not all successful; some didn’t have the desired edit, some had the edit present in only some parts of their body and some had so-called off-target effects, meaning that the gene edits gave rise to unwanted changes. But by mating healthy edited animals together, the company was able to create entire breeds with complete immunity to PRRS. New Scientist reports that the company is several years into an approval process with the FDA, which it hopes will come through before the end of 2025.

Quantum computing makes a practical leap. The quantum computing company D-Wave claims to have solved a real-world, practical problem on a quantum computer far faster than any classical supercomputer ever could. The research, published on the preprint server arXiv and not yet peer reviewed, describes how the company used quantum chips to solve equations describing how matter transitions from one state, such as a solid, to another, such as a liquid. The company claims it would take the world’s fastest supercomputer, Frontier, “millions of years” to perform the same calculation. This is a step beyond Google’s 2019 achievement of using a quantum computer to beat a classical computer at a task with no real practical use. Even if the result holds up to scrutiny, there are still some caveats. D-Wave is often criticized because its computers are not built to be general tools that can address a variety of problems, but are instead designed for highly specific calculations. Additionally, the task performed in this research is fairly esoteric, and while the calculations bear similarities to those used in optimization problems encountered in finance and logistics, it’s not clear how widely transferable the result will be. Finally, in the past, researchers have sometimes gone on to develop algorithms for classical computers that can match the achievements of quantum computers. Still, the work reinforces the notion that there are practical uses for quantum computers — and suggests commercial reality may be closer than previously assumed.

A flexible throat patch could restore people’s voices. Researchers from UCLA have developed a device that can convert throat movements into speech. The black patch, which measures about an inch square and is described in greater detail in Nature Communications, makes use of an effect known as magnetoelasticity. Thin layers of a flexible silicone material with magnetic nanoparticles sandwiched between them are stretched across the throat, so that the throat’s movements can be captured as electrical currents and translated into speech by a machine-learning algorithm. The algorithm, which was trained on five simple phrases corresponding to the signals created by those currents, was able to identify the phrases with 95 percent accuracy when the patch was worn by new individuals. The big caveat is that the approach, however impressive, has a very limited repertoire of phrases at its disposal and will require far more extensive data collection and validation before it can be widely used. Still, The Economist reports that with extra work and some technological tweaks to make it commercially manufacturable, the system could go on to restore speech in people with voice disorders without the need for invasive procedures.

Five Ways to Think About

Lab-Grown Meat

Via Good Food

Meat’s impact on the climate is indisputable. 

The livestock industry accounts for about one-fifth of global greenhouse gas emissions and more than one-third of the world’s methane emissions. Methane has about 28 times the warming effect of carbon dioxide in Earth’s atmosphere, making its reduction disproportionately important for slowing global warming. 

Given that U.S. meat consumption has held steady or risen slightly over the last twenty years, what can be done to address the climate impact of this dietary staple? Some researchers are working to maximize the efficiency of livestock production by lowering the amount of resources needed to raise animals; others are creating feed additives to reduce how much gas animals produce. Animal waste could be turned into new types of fuel with lower greenhouse gas effects than fossil fuels; animals could perhaps be gene-edited to require less food and space. 

But the most radical proposal does not involve rearing animals at all: It is the cultivation of meat cells in a laboratory or factory setting. Lab-grown meat (or cultivated meat, as industry insiders call it) is meat that should be, in theory, identical to animal meat, with the same protein, fat, nutrients and other characteristics, except that it’s a chemical concoction grown inside a container called a bioreactor, which is filled with nutrients and other substances that cells need to grow and differentiate. Meat is the collection of cells that make up animal tissue (often mostly muscle), regardless of whether those cells actually grew in an animal.

About a decade ago, before the lab-grown meat industry technically existed as an industry, researchers began adapting tissue-engineering techniques used by biopharmaceutical companies to grow meat, resulting in the first cultivated burger, consumed in 2013. (This is not to be confused with vegetable- and grain-based burgers, which try to imitate the texture and flavor of meat with other ingredients.) Since then, progress in the field has been slow. But today a wide range of startups is trying to create lab-grown beef, fish, pork and even lamb that tastes and feels like animal meat, costs the same and doesn’t add so significantly to carbon and methane emissions. Two of them — Upside Foods and Eat Just’s GOOD Meat, both cultivating chicken and both interviewed for this newsletter — received regulatory approval in June 2023 to sell lab-grown meat in the U.S.

At the moment, however, these products are not for sale anywhere in the U.S. Despite regulatory approval, the industry is nowhere near driving the cost of cultivated meat toward parity with meat from animals, let alone demonstrating that its emissions are lower. Aventine spoke with five experts on the industry, and while all of them have differing perspectives on how consumers will react to the meat and on the technical challenges ahead, they all had one big concern in common: Will costs ever come down enough to make cultivated meat competitive with meat from animals? No one appears confident about the answer to the price problem. 

“The single biggest challenge for the entirety of the industry — which didn’t exist like eight years ago — is building large-scale facilities that can make tens of millions of pounds of finished product. We need to build those kinds of facilities at significantly lower costs. Today, those kinds of facilities have price tags at north of half a billion dollars to build, and it’s not viable at that price tag. There are technical challenges with vessel design, with layout, with material costs, all of which have not been solved and have to be solved in order to do it. You can have every consumer in the world want it, every regulatory body [having] approved it, but if you can’t make a lot it doesn’t make sense.” 
— Josh Tetrick, co-founder and CEO of Eat Just, the company that makes GOOD Meat

“If you want to reduce the cost, then the two greatest technical challenges are going to be cellular agriculture and production rates. Cows, pigs and chickens are remarkably efficient at producing meat because they grow well. We're still working on whether or not the top line manufacturing capability will approach the efficiency… of animals…. Then you have to go back to the drawing board and do the calculation as to whether or not you've (made) any type of environmental impact on livestock production. It's impossible to do that calculation until we understand what the scaling of the manufacturing is going to be. The industry was clearly oversold to the investors relative to the market demand. I have long advocated that the right way to do this industry is to develop the cell-based synthetic meats as medicinal foods and enter the market that way with a food product that has medicinal value.” 
— Kit Parker, professor of bioengineering at Harvard University

“Three years ago the most pressing challenges were: Number one on the list was cost, number two was cell lines, and number three was scale up.… Now that’s flipped completely in my mind, where to me we’ve made huge progress on cell lines, huge progress on cost reductions, and … the area where we have [not] — and others have not — really made a lot of progress is scale up. I’m not worried about the science.… What I don't see yet is [something to fill] this gap from small scale-up operations which companies can make today to the mega scale that you’re going to need to make this field really really propelled forward. There you need large investments to build infrastructure, and it’s hard to fundraise for that.” 
— David Kaplan, chair of the department of biomedical engineering at Tufts University and the leader of the school’s cell-based meat research program

“What do consumers think? If we have a consuming public right now that says, ‘We want to know where our food is coming from, we want to know where it’s produced’ — I wonder how they will react? Cultured meats by necessity have to be produced in large cultured operations; [consumers] don’t know the people [doing it]; it’s very different from the farmer raising the cow. I’m really curious to see what consumers will think.… I am really looking forward to being able to do the type of research that I do now [into meat quality and safety] in the future for cultivated meat.”
— Jennifer Martin, assistant professor in the department of epidemiology at Colorado State University

“I’m a meat eater. I love meat. But I had never thought that hard about what makes meat meat, what makes humans perceive that meat is meat. A lot of it has to do with texture, with mouthfeel and with a lot of the structural elements of meat. And so one of the things that we’ve been spending a lot of time working on is: How do you deliver that kind of sensory texture in a cultivated meat form? What delivers that kind of texture on a very basic level? Is it the protein, the fat, the extracellular matrices? My sense is that the answer is… that we will need the best of cell biology and bioprocess, and the best food science and process technologies to be able to deliver that same experience.”
— Amy Chen, chief operating officer of Upside Foods

Innovation on the Ground

Technology’s Impact Around the Globe

Chris Craggs / Alamy Stock Photos

1. Tarifa, Spain. Along the coast of southern Spain, where the Mediterranean Sea meets the Atlantic Ocean and the winds are strong, wind turbines have become a familiar sight. In fact, The Financial Times reports, some of them have been there so long that their owners now face a conundrum: Should they keep running and maintaining these increasingly inefficient pieces of hardware, for which spare parts are now difficult to find, or should they tear them down and replace them with modern alternatives that could be almost three times as tall and generate almost nine times the power? Standing in the way of repowering, as it’s known, are high costs, lost income from the downtime required for new development, NIMBYism and legal challenges linked to environmentalism. Yet the redevelopment of these and other projects — in prime spots for wind power that were understandably snapped up as early sites — could, The Financial Times argues, play a big role in Europe's transition to net zero greenhouse gas emissions by 2050.

2. Africa. Africa's AI ecosystem severely lags those in the West and many East Asian countries. And yet it’s taking a surprisingly proactive approach to regulating the technology: MIT Technology Review reports that the African Union, made up of 55 member countries, has already prepared a draft AI policy to shape how AI is deployed and used on the continent. According to MIT Technology Review, proponents of the regulation argue that it could erect guardrails to protect African citizens from social harms resulting from AI, such as bias and misinformation, as well as establish rules to protect low-paid workers who are employed by AI companies as data labelers. Yet some African researchers who spoke with the publication raised concerns that aggressive regulation could stifle nascent efforts to build AI in Africa. The African Union will no doubt be weighing both sides as it seeks to find a consensus on the policy by early 2025.

3. Brooklyn, New York. Picture the scene: You take a phone call late at night, hear your mother’s voice, and then a second voice tells you that she’s being held at gunpoint. You’re told you have to send the hostage-taker a large sum of money via Venmo, or they’ll pull the trigger. What would you do? It’s a nightmare scenario that came true for one Brooklyn couple, as described in The New Yorker, and increasingly the correct response is to assume it is a scam. This is a new kind of deception, made possible by increasingly sophisticated and widely available artificial intelligence that enables anyone to clone another person’s voice. Such technology was originally designed to be helpful, for instance by restoring the voices of people who have lost them to disease, but it is increasingly being used by criminals to convince people that those they love are in danger. At this point it is unclear whether authentication tools or regulation can help law enforcement and the public keep up. If you’d like to learn more, listen to or read the transcript of our podcast episode on misinformation.

Long Reads

Magazine and Journal Articles Worth Your Time

The future of AI may hinge on its ability to forget, from Digital Frontier
2,100 words, or 9 minutes

When you train an AI model, the vast swaths of data it’s exposed to bring about nuanced changes in the inner workings of the software. One way to think about these shifts in the model’s numerical underpinnings is as a set of memories, all contributing to its ability to generate compelling text, for example. But if the model was trained on data that it shouldn't have been — maybe because that data isn’t compliant with data protection regulations or because it is deemed dangerous — how do you erase its imprint? Currently, you can’t directly delete a memory, because the inner workings of the model are the result of the subtle interplay of all the data the software ingested. The only real path forward is to delete the offending information from the dataset and retrain the model from scratch, but that is time-, money- and energy-consuming. Instead, researchers are working on new ways to train models so that they can unlearn specific memories down the line. But, as this story from the brand-new publication Digital Frontier explains, there’s currently little consensus on how that should be done.

The Scientist Using Bugs to Help Solve Murders, from Smithsonian Magazine
3,800 words, or 17 minutes

A little over a decade ago, Paola Magni was a PhD student in Turin, fresh off a stint in the U.S., where she had worked on a master’s degree in forensic entomology at Michigan State University and completed a course with the FBI on crime scene investigations involving human remains. Around that time, she had also started helping police forces determine the cause of death of animals. Then — as much by chance as anything else — one police officer, confronted with a puzzling, high-profile murder case, remembered Magni’s work from a presentation she had given on forensic aquatics. Could she find subtle clues on the human body discovered on a lakefront that would help crack the case? As it turned out, her work on that murder identified plankton on the clothes and in the body of the victim and matched it to similar plankton on the clothes of a suspect, eventually helping seal a conviction. This feature tells the story of Magni’s rise to become a world-renowned forensic entomologist, as well as the powerful impact of the science and technology she has helped develop.

The Pentagon’s Silicon Valley problem, from Harper’s Magazine
4,800 words, or 21 minutes

Whether it’s killer drones, augmented reality sniper sights or battlefield strategies planned by artificial intelligence, there’s a recurring narrative that technology — and particularly technology originally developed in Silicon Valley — is rapidly and radically reshaping the defense industry. This story from Harper’s takes a different view, arguing that although the Pentagon has made significant investments in Silicon Valley companies, there is, with a few exceptions, little to show for all that investment so far. It remains to be seen exactly how the proliferation of advanced technology on the battlefield will play out, but the narrative in this story gives pause to those who assume that tech’s impact on the future of warfare is inescapable.
