Newsletter / Issue No. 65



Thu 1 Apr, 2026

Dear Aventine Readers, 

This week we're writing about a concept I originally dismissed as too far out there: data centers in space. The cost of getting hardware into orbit is astronomical. Keeping it running would be fraught with challenges. But! It turns out that the enormous energy demands of data centers might justify this out-of-the-box, into-orbit solution. Yes, lots of hurdles need to be cleared, and costs still need to come way down. But it seems to be in the realm of the possible. Read on to learn why.

Also in this issue: 

  • EV charging speed is starting to rival refueling at the gas station. 
  • Arm, the chip designer, is now making its own processors for AI. 
  • China seems to be winning the AI talent race. 
  • And what will it take to build the world’s largest data center?
Thanks for reading,

    Danielle Mattoon 
    Executive Director, Aventine


    The Big Idea

    Building Data Centers in Space Isn’t Quite as Crazy as It Sounds

    On January 30, SpaceX sent an audacious request to the Federal Communications Commission: permission to launch a million satellites into orbit around Earth, with a vision of harnessing solar power to run computer hardware that trains advanced AI models in space. The filing wasn't just about data centers. It also formed a major part of the justification for merging SpaceX, Elon Musk's highly profitable space launch company, with xAI, his highly unprofitable AI lab.

    It is tempting to dismiss the idea as a marketing stunt meant to disguise extreme financial engineering ahead of the planned SpaceX IPO. Yet Musk isn't the only person thinking this way. Blue Origin, the space company owned by Jeff Bezos, has discussed launching satellites that could host computing workloads. Google is exploring the idea through a project called Suncatcher. Eric Schmidt reportedly bought the rocket company Relativity Space to join the race. Startups are pursuing the idea, too. The vision: send one or more satellites into orbit, each with solar panels to provide power, computing systems to take on workloads and communication systems to beam data among satellites or down to Earth. This approach would let companies sidestep the terrestrial headaches that plague data center operators: paying for ever-increasing amounts of electricity, connecting to power grids, securing land, obtaining environmental permits.

    But would building data centers in space be any easier? Simply getting the hardware into orbit would be exorbitantly expensive, and keeping it running in space would be fraught with challenges. Is it achievable? And, equally important, could it ever be economically viable?

    Hardware can work up there

    Recent experiments have shown that modern hardware — at least so far — works in space. In November 2025, a startup called Starcloud launched a satellite, Starcloud-1, carrying an Nvidia H100 graphics processing unit into orbit. The company used the chip to train a large language model called NanoGPT and to run a version of Google's open-source Gemma model in real time, the first known operation of a high-powered LLM in space. Earlier the same year, Lonestar, another startup, landed a small computing system, closer to a high-end personal computer than a data center, on the surface of the Moon and tested downloading and uploading data between it and Earth.

    Historically, computer chips have been vulnerable to cosmic rays that interfere with the way they process information. For this reason, spacecraft have typically used specially designed “hardened” chips built to withstand radiation, which tend to lag behind the cutting-edge chips used on Earth. But the effects of radiation on today's most advanced chips appear to be less severe. Rick Ward, the chief technology officer of OrbitsEdge, which is developing high-performance computing systems for use aboard satellites and other spacecraft, said that radiation effects historically grew more acute as transistors shrank, but that trend appears to have reversed with the most advanced chips available today. Experiments by Google as part of its Suncatcher project found that its own Trillium TPU chips, which are used for AI applications on Earth, can survive nearly three times the total radiation expected during a five-year mission before showing permanent degradation.

    But working chips are just part of the challenge. Space is extremely cold, but it is also a vacuum, which means that the only mechanism for heat dissipation is thermal radiation. Cooling computer chips will require large, deployable radiator panels — essentially big surfaces that radiate waste heat into space — that have yet to be tested at scales suitable for large numbers of GPUs. Another problem to be solved is building sufficiently fast and reliable data connections to beam the enormous datasets used to train AI between satellites and Earth. Advances on these fronts are necessary in order to deploy large data centers in space. 
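
    To get a feel for the radiator problem, the Stefan-Boltzmann law gives the panel area needed to reject a given heat load. Below is a minimal back-of-envelope sketch; the 300 K panel temperature, 0.9 emissivity and two-sided-panel geometry are illustrative assumptions, not figures from any of the companies mentioned.

```python
# Back-of-envelope radiator sizing in vacuum, where thermal radiation
# is the only way to shed heat (Stefan-Boltzmann law). The 300 K panel
# temperature and 0.9 emissivity are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_watts, temp_k=300.0, emissivity=0.9, sides=2):
    """Panel area needed to radiate `heat_watts` at `temp_k`.

    `sides=2` assumes a flat panel radiating from both faces; absorbed
    sunlight and Earthshine are ignored, which flatters the result.
    """
    flux = emissivity * SIGMA * temp_k**4  # W per m^2, per face
    return heat_watts / (flux * sides)

print(f"{radiator_area_m2(700):.1f} m^2 for one 700 W GPU")
print(f"{radiator_area_m2(100_000):.0f} m^2 for 100 kW of compute")
```

    Under these assumptions a single GPU needs less than a square meter, but a full satellite's worth of compute needs a panel on the order of 120 square meters, which is why large deployable radiators sit on the critical path.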

    In the meantime, startups are expected to roll out small-scale systems that customers can make use of in the coming months. Lonestar, for instance, is already planning to put its first commercial system in space as part of a payload on a single satellite this October. Starcloud expects its Starcloud-2 — a single satellite designed to process raw data generated by spacecraft and space stations — to be fully operational by 2027. Starcloud-1 has just one Nvidia H100 GPU aboard; Starcloud-2 is expected to contain several. For context, a single one of those GPUs draws about 700 watts of power. SpaceX's vision of one million satellites delivering some 100 gigawatts of computing capacity would mean each satellite providing about 100 kilowatts, equivalent to roughly 142 H200 GPUs, making issues of power, cooling and data connections far more difficult than the single-satellite demonstrations.

    These smaller, single-satellite systems will be in use long before Musk's vision is even plausible. As Rick Ward put it, referring to “space data centers” is a bit like referring to “the benefits of automobiles” when you could mean “everything from a moped to a semi truck.” But scaling up from moped to semi truck is a huge challenge. A single H100 chip, like the one aboard Starcloud's proof of concept, can draw up to 700 watts of power. Musk has talked about deploying “100 gigawatts of AI compute capacity” each year — equivalent, roughly, to more than 100 million of those same chips. For context, the biggest technology companies today each own on the order of hundreds of thousands of these chips.
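
    The gap between demonstration and vision is easy to check with the article's own numbers; here is the arithmetic, with the 700-watt figure and the capacity targets taken from the piece:

```python
# Arithmetic check on the scale-up, using figures quoted in the piece:
# a ~700 W H100/H200-class GPU, ~100 kW of compute per satellite, and
# Musk's stated 100 gigawatts of AI compute deployed per year.

GPU_WATTS = 700            # one H100/H200-class GPU
SATELLITE_WATTS = 100_000  # ~100 kW of compute per satellite
TARGET_WATTS = 100e9       # 100 GW of compute per year

print(f"{SATELLITE_WATTS / GPU_WATTS:.0f} GPUs per satellite")        # ~143
print(f"{TARGET_WATTS / GPU_WATTS / 1e6:.0f} million GPUs in total")  # ~143
print(f"{TARGET_WATTS / SATELLITE_WATTS / 1e6:.0f} million satellites per year")
```

    That last line is the crux: hitting Musk's stated annual target would mean launching on the order of a million compute satellites every year.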

    The economics won't be easy

    Perhaps the biggest challenge of deploying a data center made up of as many as a million satellites will be economic. Once you've launched a satellite loaded with hardware into orbit at enormous expense, faulty chips — which turn up regularly in data centers on Earth — can't easily be replaced, meaning companies must bake in redundancy. And at the end of a space data center's life, you can't simply make a service visit to swap in new chips; the satellite must be retired and treated as a disposable asset, or upgraded using dedicated robotic space servicing.

    “Space isn’t cheap,” said Christopher Stott, founder and chair of Lonestar. “But it’s cheaper than it used to be.”

    The key question is whether it can ever be cheap enough for space data centers to compete with terrestrial ones. Andrew McCalip, head of research and development at Varda Space Industries, has modeled how variables like launch costs, power generation and satellite hardware costs contribute to the price tag. His findings suggest that launching such a system today would be ruinously expensive. But in the coming years, depending on how much progress companies make in their R&D, it could become a viable alternative to building the same capacity on Earth.

    One important factor: launch cost. Space data centers are under consideration only because the cost of launching hardware into space has fallen dramatically since SpaceX started developing reusable rockets. Through the 1990s, it cost about $10,000 to launch a kilogram of payload into space. SpaceX's Falcon 9 brought that down to $2,600 in 2010, and its Falcon Heavy reduced it further, to $1,500 in 2018. The company predicts that its next-generation rocket, Starship, could bring it down to $100 to $200 per kilogram. McCalip's model shows that reducing launch costs from $1,500 per kilogram to $200 would roughly halve the cost of building a space data center.

    Another factor: specific power, the ratio of the power a satellite generates via its solar panels to its mass. Higher specific power means a satellite is cheaper to launch for a given level of capability in space. Currently, SpaceX's Starlink satellites have a specific power of 37 watts per kilogram, but Musk has claimed that future satellites could reach 100 watts per kilogram. That jump alone would reduce the cost of building a space data center by about a third.

    You can see where this is going. Driving down any one of these numbers won't make space data centers affordable on its own. But progress on several fronts at once could lower costs enough that what began as a pipe dream starts to look reasonable. If launch costs are $500 per kilogram, specific power is 37 watts per kilogram and satellite hardware costs remain at current levels, then according to McCalip's model, the infrastructure excluding chips for a one-gigawatt space data center would cost $51 billion, versus $16 billion for the equivalent on Earth. But if launch costs fall to $200 per kilogram, specific power rises to 100 watts per kilogram and satellite hardware costs are halved, the cost of doing it in space falls to $19 billion.
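
    A stripped-down version of that sensitivity analysis can be sketched in a few lines. Only the launch term is computed from first principles; the lumped hardware cost below is a placeholder chosen so the baseline roughly reproduces the article's $51 billion figure, since McCalip's full model is not spelled out in the piece.

```python
# Minimal cost-sensitivity sketch inspired by McCalip's analysis.
# Launch cost falls out of specific power (mass) times price per kg;
# everything else is lumped into a placeholder hardware figure tuned
# to the article's baseline. Illustrative only, not the real model.

GIGAWATT = 1e9  # target compute power, watts

def space_dc_cost_usd(launch_usd_per_kg, specific_power_w_per_kg,
                      hardware_usd):
    """Infrastructure cost, excluding chips, for a 1 GW orbital data
    center: launch cost of the implied mass plus lumped hardware."""
    mass_kg = GIGAWATT / specific_power_w_per_kg
    return mass_kg * launch_usd_per_kg + hardware_usd

HARDWARE_TODAY = 37.5e9  # placeholder residual implied by the baseline

baseline = space_dc_cost_usd(500, 37, HARDWARE_TODAY)
optimistic = space_dc_cost_usd(200, 100, HARDWARE_TODAY / 2)

print(f"baseline:   ${baseline/1e9:.0f}B")    # ~$51B, matching the piece
print(f"optimistic: ${optimistic/1e9:.0f}B")  # ~$21B vs. the piece's $19B
```

    The optimistic case lands a couple of billion above the article's $19 billion, a reminder that the real model has terms this sketch lumps together; the direction and rough magnitude of the savings, though, come straight from the quoted numbers.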

    This by no means suggests that all of humanity's data centers will soon be in space. Making the idea even plausible requires a tremendous amount of engineering, which will take time. That likely explains why OpenAI CEO Sam Altman recently said that “orbital data centers are not something that's going to matter at scale this decade.” Yet in the longer term the idea is technically plausible and financially … well, not a cost saver, but not categorically ruinous either. In other words, it might be crazy, but it's not quite as crazy as it first sounds.

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    Quantum Leaps

    Advances That Matter

    EV charging speed is starting to rival refueling at the gas station. A new charger developed by the Chinese automaker BYD can charge an EV battery from roughly 10 to 70 percent in five minutes, and from 10 to 100 percent in about nine minutes. Wired reports that BYD's new “Flash Chargers” deliver up to 1,500 kilowatts of power, more than four times the 350 kilowatts of the chargers currently considered super-fast in the US. There are catches. The stated charging times currently apply to just one vehicle: BYD's Denza Z9GT, a premium EV whose battery has been engineered to handle such charging rates. And BYD does not sell cars in the US, meaning the technology will initially be confined to China and Europe, where the company is building out charging infrastructure. Still, other companies are working on super-fast charging systems, including several that also exceed 1,000 kilowatts. Elon Musk has said that similar charging could come to the Tesla Cybertruck, borrowing from technology developed for Tesla's electric semi truck. The engineering challenge is not just building a more powerful charger but making batteries capable of receiving energy that quickly. BYD has addressed this by redesigning battery components, including using thinner parts that reduce electrical resistance and allow charge to flow more quickly. Over time, others will do the same, making EV charging more like a routine stop to refuel.
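
    The claimed times imply striking average power levels. A quick sanity check, assuming a 100-kilowatt-hour pack (an assumed, premium-EV-sized capacity; the Denza Z9GT's actual pack size isn't given in the piece):

```python
# Sanity check on the charging claims. The 100 kWh pack size is an
# assumption for illustration, not a published BYD specification.

PACK_KWH = 100.0  # assumed battery capacity

def avg_power_kw(start_frac, end_frac, minutes, pack_kwh=PACK_KWH):
    """Average charging power to move the pack between the given
    states of charge in the given time."""
    energy_kwh = (end_frac - start_frac) * pack_kwh
    return energy_kwh / (minutes / 60.0)

print(f"10-70% in 5 min:  {avg_power_kw(0.10, 0.70, 5):.0f} kW average")
print(f"10-100% in 9 min: {avg_power_kw(0.10, 1.00, 9):.0f} kW average")
```

    Both averages (roughly 720 and 600 kilowatts under this assumption) sit well below the charger's 1,500-kilowatt peak, consistent with charging rates tapering as the battery fills.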

    Arm, the chip designer, is now making its own processors for AI. Arm Holdings, the British chip designer whose technology underpins almost every smartphone on the planet, is shifting strategy: It is starting to build its own processors for AI rather than just licensing designs to others. As Tom’s Hardware reports, the new chip, called the Arm AGI CPU, is aimed at helping run AI agents. Arm’s new hardware is a CPU, not a GPU, so it’s not the direct assault on Nvidia it might seem at first. In AI systems, CPUs handle orchestration: managing memory and storage, scheduling tasks, deciding how to move data and coordinating the processors that do the heavy math of AI operations. These latter calculations still largely happen on GPUs, an area where Nvidia remains dominant. In fact, Nvidia’s own AI systems already rely heavily on Arm-designed CPUs, though Nvidia has recently said it will begin developing more of those processors itself. Arm’s pitch is that its track record of energy-efficient chip design gives it an edge, and the company claimed that its AGI CPU is “the world’s most efficient agentic CPU.” Meta will be Arm’s first major customer, with others including OpenAI, SAP, Cerebras Systems and Cloudflare expected to follow. 

    China seems to be winning the AI talent race. Historically, the US had a near monopoly on AI researchers; China was always trying to catch up. That picture may be changing. According to analyses highlighted by The Economist, China is emerging as the world's dominant producer — and retainer — of AI talent. One data point comes from authorship of papers at NeurIPS, the field's top conference. In 2019, 29 percent of researchers presenting there had begun their careers in China. By last year, that figure had risen to 51 percent. Over the same period, the share of researchers starting out in the US fell from 20 percent to 12 percent. Another analysis, by the research firm Digital Science, suggests China now has more active AI researchers than the US, UK and Europe combined. And many Chinese researchers are no longer leaving the country: In 2019, around a third of NeurIPS authors who completed their undergraduate education in China were still based there; now, roughly two-thirds remain. None of this means China has overtaken the US in research quality. Researchers interviewed by The Economist still described America as the stronger research ecosystem, and noted that working conditions at Chinese firms can be grueling. But there is power in numbers. China may find it can grind out progress even if the top of the field remains competitive.

    Long Reads

    Magazine and Journal Articles Worth Your Time

    The Institute Behind Taiwan’s Chip Dominance, from Asterisk
    3,800 words, or about 15 minutes

    In the 1970s and 1980s, “Made in Taiwan” was shorthand for cheap manufacturing. Today, the country dominates one of the most advanced industries on Earth: semiconductors. The island now produces more than 90 percent of the world’s most advanced chips. This essay traces that transformation back to Taiwan’s Industrial Technology Research Institute (ITRI), founded in 1973 with what would amount to just $16 million today. ITRI helped spawn an entire chip industry in Taiwan, spinning out 10 separate companies, including TSMC, now the global leader in semiconductor fabrication. The strategy started small and took time. Early on, ITRI partnered with RCA in the US to learn the fundamentals of semiconductor manufacturing, then applied that knowledge to producing microcircuits for electronic watches. Over time, it built a pool of domestic expertise, supported in part by policies that allowed engineers to count four years of work at ITRI as an alternative to military service. And ITRI structured its spinouts as joint ventures with foreign firms, ensuring that potential competitors had a stake in the country’s success. By the 1990s, the companies ITRI helped create had outgrown the institute, and its influence waned. But by then, Taiwan had become a semiconductor superpower. The model isn’t easily replicated. But one lesson is that industrial policy can succeed through steady capability-building, starting small and scaling over time.

    What Will It Take to Build the World’s Largest Data Center?, from IEEE Spectrum
    3,200 words, or about 13 minutes

    One of the largest planned data centers in the world is Meta's Hyperion, announced in June 2025. The facility, under development in Louisiana, will consume as much as five gigawatts of power — roughly the draw of 4.2 million US homes — and occupy an area comparable to Manhattan. Its first phase, drawing two gigawatts, could be online as soon as 2030. As IEEE Spectrum explains, projects like this are changing how data centers are built. Until recently, facilities were optimized for cost and margin; the AI boom now has companies racing to build at enormous scale and speed. In doing so, they're ordering custom construction materials, buying up gas turbines to generate power before grid connections become available and hiring huge temporary workforces to meet aggressive deadlines. One of the strangest aspects of this rush is that these data centers are being designed before anyone knows exactly what hardware they will contain, because Nvidia has not yet revealed what its top-end systems will look like by 2030. What we do know is that the centers will be power-hungry. Today's cutting-edge AI server racks stand more than seven feet tall, weigh over 3,000 pounds and draw as much as 120 kilowatts — about the same as 100 US homes. Racks that Nvidia has planned for as soon as 2027 could draw as much as one megawatt each. So while the specifics remain elusive, the scale will be unprecedented.
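
    The household comparisons in the piece all hinge on one conversion factor, about 1.2 kilowatts of average draw per US home, which is implied by the article's own figures rather than stated in it:

```python
# Checking the household equivalences quoted in the piece. The ~1.2 kW
# average US household draw is backed out of the article's own numbers.

HOME_KW = 1.2  # implied average draw of one US home, kilowatts

print(f"5 GW facility ~ {5e6 / HOME_KW / 1e6:.1f} million homes")  # ~4.2
print(f"120 kW rack   ~ {120 / HOME_KW:.0f} homes")                # ~100
print(f"1 MW rack     ~ {1000 / HOME_KW:.0f} homes")               # ~833
```

    By the same conversion, a single one-megawatt rack would draw as much as a small neighborhood.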


    How DNA in Dirt Is Shaking Up the Study of Human Origins, from Nature
    2,300 words, or about 9 minutes

    Much of human history lies buried beneath our feet. Until recently, reconstructing it depended on finding fossils, but researchers now have a new source of evidence: DNA preserved in soil. The technique of extracting and identifying soil DNA was first demonstrated in the early 2000s, but as this Nature feature explains, it has recently become far more powerful. The key advance was the development of molecular tools that can selectively pick out ancient human DNA in soil samples, rather than forcing scientists to sift through huge amounts of genetic material from microbes. Some of the results are impressive. Soil DNA has helped show, for the first time, that Denisovans — an archaic human group once thought to be confined largely to Siberia — also lived far beyond that region. At another archaeological site, researchers were able to link separate layers of soil to different human groups: They found DNA associated with Neanderthals and a certain set of stone tools in one layer, and Denisovan DNA with other tools in a different layer, effectively tying different tools to different populations. The method isn't without controversy. In one case, soil DNA suggested that woolly mammoths survived much later than previously thought, raising the possibility that climate change, rather than humans, led to their extinction; many experts are skeptical of that conclusion, and it has raised broader questions about the method's reliability. Nevertheless, the technique is becoming an important tool for understanding who lived where, when.
