Imagine running a massive AI datacenter without worrying about electricity bills, water for cooling, or carbon emissions. That’s exactly what Google is exploring with Project Suncatcher, their latest moonshot that could literally take cloud computing to the clouds (or technically, just beyond them).
What Exactly Is Project Suncatcher?
Project Suncatcher is Google’s research initiative to build AI datacenters in space using networks of solar-powered satellites. Think of it as a constellation of small satellites, each equipped with Google’s Tensor Processing Units (TPUs), the chips they designed specifically for machine learning, all talking to each other through laser beams and powered by the sun.
The concept sounds like science fiction, but Google CEO Sundar Pichai announced the project in early November 2025, and they’re planning to launch two prototype test satellites by early 2027 in partnership with satellite company Planet Labs as a learning mission.
Why Space? The Energy Problem
Here’s the thing: AI is hungry. Really hungry. For electricity, that is. By 2030, AI is expected to account for nearly 12 percent of electricity consumption in the United States alone. That’s putting massive strain on power grids, driving up electricity bills for regular people, and creating huge environmental headaches.
Current datacenters need two things constantly: power and cooling. They gulp down electricity and guzzle water to keep all those processors from melting. It’s not sustainable, especially as companies keep building bigger and bigger AI models.
Space solves both problems in one shot. There’s no night or winter up there. According to Google’s projections, a solar panel in the right orbit could be up to 8 times more productive than an equivalent panel on Earth and produce power almost continuously. The sun puts out more than 100 trillion times humanity’s total electricity production, so there’s plenty to go around.
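A quick sanity check shows where a figure like "up to 8 times" could come from. The capacity factor and duty cycle below are illustrative assumptions, not numbers from Google's paper:

```python
# Back-of-envelope check of the "up to 8x" solar productivity claim.
# Assumptions (illustrative, not from Google's paper): a good terrestrial
# site averages roughly a 20% capacity factor once night, weather, and
# atmosphere are accounted for, while a panel in a dawn-dusk orbit sees
# nearly full, unattenuated sunlight almost all the time.

SOLAR_CONSTANT = 1361.0        # W/m^2 above the atmosphere
GROUND_PEAK = 1000.0           # W/m^2, standard test-condition irradiance at sea level
GROUND_CAPACITY_FACTOR = 0.20  # assumed average for a good terrestrial site
ORBIT_DUTY_CYCLE = 0.99        # dawn-dusk orbit: almost no time in eclipse

ground_avg = GROUND_PEAK * GROUND_CAPACITY_FACTOR  # average ground yield, W/m^2
orbit_avg = SOLAR_CONSTANT * ORBIT_DUTY_CYCLE      # average orbit yield, W/m^2

print(f"Average ground yield: {ground_avg:.0f} W/m^2")
print(f"Average orbit yield:  {orbit_avg:.0f} W/m^2")
print(f"Ratio: {orbit_avg / ground_avg:.1f}x")
```

Under these assumptions the ratio comes out around 6.7x; with a cloudier or higher-latitude ground site, it climbs toward the 8x Google cites.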
And cooling? In space, the design calls for radiating heat into the void using advanced radiators and thermal management systems. No massive water reservoirs needed, though managing heat in space is a significant engineering challenge.
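To see why radiators are a real engineering constraint, the Stefan-Boltzmann law gives the area needed to reject a given heat load. The power level and temperature below are made-up illustrative values, not figures from Google's design:

```python
# Illustrative radiator sizing. In vacuum, a surface can only shed heat by
# radiation: P = emissivity * sigma * A * T^4. Solving for A gives the
# radiator area needed for a given heat load. Absorbed sunlight and
# Earth-shine are ignored for simplicity.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(power_w, temp_k, emissivity=0.9):
    """Radiator area (m^2) needed to reject power_w at surface temp temp_k."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# Hypothetical example: rejecting 10 kW of TPU waste heat with a radiator
# surface held at 320 K (about 47 C).
area = radiator_area(10_000, 320)
print(f"Required radiator area: {area:.1f} m^2")
```

That works out to roughly 19 square meters for just 10 kW, which is why heat rejection, not power collection, is often the harder half of the problem.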
How Would It Actually Work?
Google’s conceptual design involves deploying roughly 81 satellites in a tight formation, flying just hundreds of meters apart in low Earth orbit (about 650 kilometers up). These satellites would work together as one big computer network in the sky.
The key innovation in their proposed design is using optical links (laser communications) instead of traditional radio waves to transfer data between satellites. Google’s lab tests have already hit 800 Gbps transmission speeds between prototype systems, with multi-terabit speeds looking feasible. That’s fast enough to potentially make the satellites work together like they’re in a regular datacenter, not floating in space. However, making these optical links work reliably in real orbital conditions remains to be proven.
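To put 800 Gbps in perspective, here is the raw transfer time for a payload crossing one inter-satellite link. The 500 GB working set is a hypothetical size chosen purely for illustration:

```python
# What 800 Gbps means in practice: time to move a payload across a single
# optical inter-satellite link. The 500 GB payload is a hypothetical
# working-set size, not a figure from Google.

def transfer_time_s(payload_bytes, link_gbps):
    """Seconds to move payload_bytes over a link running at link_gbps."""
    bits = payload_bytes * 8
    return bits / (link_gbps * 1e9)

payload = 500e9  # 500 GB, hypothetical
print(f"At 800 Gbps: {transfer_time_s(payload, 800):.1f} s")
print(f"At 10 Tbps:  {transfer_time_s(payload, 10_000):.2f} s")
```

A multi-second shuffle per hop is workable for many batch AI workloads, which is why the projected multi-terabit links matter for making the constellation behave like one datacenter.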
The satellites would orbit in a dawn-dusk sun-synchronous orbit, which means they’d be bathed in nearly constant sunlight. Solar panels would collect energy, advanced heat pipes and radiators would manage the temperature, and those laser links would keep everything connected.
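For the quoted 650-kilometer altitude, Kepler's third law pins down the basic orbital parameters (the gravitational parameter and Earth radius below are standard values):

```python
# Rough orbital mechanics for a circular orbit at the 650 km altitude
# quoted in the article, using Kepler's third law: T = 2*pi*sqrt(a^3 / mu).

import math

MU_EARTH = 3.986e14  # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6.371e6    # m, mean Earth radius

altitude = 650e3                  # m, from Google's description
a = R_EARTH + altitude            # semi-major axis of a circular orbit
period = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)
speed = math.sqrt(MU_EARTH / a)   # circular orbital velocity

print(f"Orbital period: {period / 60:.1f} min")
print(f"Orbital speed:  {speed / 1000:.2f} km/s")
```

That gives a period of about 98 minutes at roughly 7.5 km/s, which is the regime in which those hundreds-of-meters formation distances have to be held.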
But Can The Hardware Survive Space?
One big question: won’t space radiation fry the chips? Space is brutal on electronics. Cosmic rays and solar radiation can damage sensitive components.
Google tested this by blasting their Trillium v6e TPUs with a 67 MeV proton beam in a controlled lab environment to simulate what they’d face in orbit. The results? No hard failures up to the maximum tested dose of 15 krad(Si), which is way more than they’d see in a typical five-year mission. The TPUs turned out to be surprisingly radiation-resistant in these tests, though real space conditions involve more varied challenges.
The most sensitive component was the High Bandwidth Memory (HBM), which started showing some issues after 2 krad(Si), nearly three times the expected five-year mission dose. But even those errors look manageable for running AI models.
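The quoted doses imply a comfortable margin. Note that the expected five-year mission dose below is inferred from the article's "nearly three times" statement, not stated directly:

```python
# Relating the quoted radiation numbers. HBM showed issues at 2 krad(Si),
# described as nearly three times the expected five-year mission dose,
# which implies an expected dose of roughly 0.7 krad(Si). That inferred
# figure is an assumption derived from the text.

tested_max_krad = 15.0  # no hard TPU failures up to this dose
hbm_issue_krad = 2.0    # dose at which HBM anomalies began
expected_5yr_krad = hbm_issue_krad / 3  # ~0.67 krad, inferred

print(f"HBM margin over mission dose:    {hbm_issue_krad / expected_5yr_krad:.1f}x")
print(f"Tested margin over mission dose: {tested_max_krad / expected_5yr_krad:.0f}x")
```

In other words, the chips survived lab doses more than twenty times what a five-year mission is expected to deliver.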
What About The Economics?
This all sounds expensive, right? Launching stuff into space isn’t cheap. Current launch costs run somewhere between $1,500 and $2,900 per kilogram.
But here’s where it gets interesting. Google’s analysis projects that if SpaceX’s Starship succeeds and starts launching regularly (they’re projecting about 180 launches per year), costs could drop to less than $200 per kilogram by the mid-2030s. At those projected prices, a space datacenter could theoretically compete with Earth-based systems, though this depends heavily on future launch market developments.
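A simple per-satellite calculation shows what that price drop means. The 1,000 kg satellite mass is a hypothetical placeholder; Google has not published a per-satellite mass:

```python
# Back-of-envelope launch cost per satellite at current vs. projected rates.
# SAT_MASS_KG is a hypothetical placeholder mass, not a published figure.

SAT_MASS_KG = 1000  # hypothetical satellite mass, kg

current_low, current_high = 1500, 2900  # $/kg today, per the article
projected = 200                         # $/kg, mid-2030s Starship projection

print(f"Today:     ${SAT_MASS_KG * current_low:,} - ${SAT_MASS_KG * current_high:,}")
print(f"Projected: under ${SAT_MASS_KG * projected:,}")
```

Under these assumptions, launch cost per satellite falls from millions of dollars to a few hundred thousand, roughly a tenfold drop, which is what would move the economics into the same range as terrestrial datacenters.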
Travis Beals, senior director for Paradigms of Intelligence at Google, put it this way: “If things keep going down the path where we keep having more uses for AI and we keep wanting more energy to power it, this has tremendous potential to scale.”
The Modular Approach
Instead of launching one massive structure (which would be insanely expensive and risky), Google’s going modular. Think smaller, self-contained satellites that can function independently. If one fails, the rest keep working. This approach reduces both risk and cost.
It’s similar to how modern software architecture works: lots of smaller services working together instead of one giant monolithic application. Except in this case, the services are flying around Earth at roughly 27,000 kilometers per hour.
What’s Next?
Google’s partnering with Planet Labs to launch two prototype satellites in early 2027. Each will carry four TPUs to test how everything performs in real orbital conditions. They’ll validate those optical links, test the thermal management systems, and figure out all the unknowns that only real-world testing can reveal.
“Like any moonshot, it’s going to require us to solve a lot of complex engineering challenges,” Pichai said.
If the 2027 tests go well, we could see larger constellations deployed by the mid-2030s, assuming launch costs come down as projected.
The Challenges Nobody’s Solved Yet
Let’s be real: there are massive hurdles here.
- Space junk: More satellites mean more potential collisions and debris. The orbital environment is already crowded, and adding large constellations increases collision risk.
- Astronomical interference: Ground-based telescopes already struggle with satellite constellations like Starlink. More satellites could disrupt astronomical observations even further.
- No repair shop in space: On Earth, if a server dies, you swap it out. In orbit? You need redundant provisioning because there’s no quick fix. That means building in backup systems, which adds weight and cost. This inability to service or repair hardware is a major operational challenge.
- Latency: While the satellites can talk to each other quickly with optical links, some applications might need data to travel down to Earth and back up, which introduces delays. Google’s design focuses primarily on inter-satellite communication rather than ultra-low-latency ground connections.
Is This Just Google Being Google?
Google has a long history of moonshots, some of which pan out (like Waymo, their self-driving car company) and some that don’t (remember Project Loon, which used high-altitude balloons for internet? Or Calico, which was literally trying to solve death?).
But here’s the thing: the math actually checks out. Google’s research paper doesn’t show any obvious physics-based showstoppers. As Beals explained, “We’ve spent the past year or so trying to think through, what are all the ways this might not work? Can we prove it can’t work? And we’re still here because we haven’t seen any obvious showstoppers.”
The Bigger Picture
Google isn’t alone in thinking about space computing. Nvidia has announced plans to launch AI chips into orbit, and SpaceX CEO Elon Musk has said that SpaceX “will be doing” datacenters in space. Even Jeff Bezos has predicted gigawatt datacenters in space within the next 10-plus years.
This isn’t just one company’s weird idea. It’s starting to look like a genuine shift in how we might handle the explosive growth in AI computing demand.
What This Could Mean For Earth
If Project Suncatcher works, it could fundamentally change the environmental equation for AI. Instead of building massive datacenters that strain local power grids and require enormous amounts of water for cooling, we’d be tapping into an effectively unlimited energy source that’s already up there.
It wouldn’t eliminate Earth-based datacenters (you still need local infrastructure for lots of applications), but it could take the pressure off. That means less strain on electricity grids, fewer conflicts between tech companies and local communities over power usage, and potentially a more sustainable path for AI development.
The 2027 test launches will be the first real proof of concept. Until then, Project Suncatcher remains exactly what Google calls it: a moonshot. But it’s a moonshot with some serious engineering and physics backing it up.
As AI continues to grow and demand more computing power, maybe the answer really is to stop looking down at the ground for solutions and start looking up instead.