
Google is funding an AI-powered satellite constellation that will spot wildfires faster

16 September 2024 at 15:00

Early next year, Google and its partners plan to launch the first in a series of satellites that together would provide close-up, frequently refreshed images of wildfires around the world, offering data that could help firefighters battle blazes more rapidly, effectively, and safely.

The online search giant’s nonprofit and research arms have collaborated with the Moore Foundation, the Environmental Defense Fund, the satellite company Muon Space, and others to deploy 52 satellites equipped with custom-developed sensors over the coming years. 

The FireSat satellites will be able to spot fires as small as 5 by 5 meters (16 by 16 feet) on any speck of the globe. Once the full constellation is in place, the system should be capable of updating those images about every 20 minutes, the group says.

Those capabilities together would mark a significant upgrade over what’s available from the satellites that currently provide data to fire agencies. Generally, they can provide either high-resolution images that aren’t updated rapidly enough to track fires closely or frequently refreshed images that are relatively low-resolution.

The Earth Fire Alliance collaboration will also leverage Google’s AI wildfire tools, which have been trained to detect early indications of wildfires and track their progression, to draw additional insights from the data.

The images and analysis will be provided free to fire agencies around the world, helping to improve understanding of where fires are, where they’re moving, and how hot they’re burning. The information could help agencies stamp out small fires before they turn into raging infernos, place limited firefighting resources where they’ll do the most good, and evacuate people along the safest paths.

“In the satellite image of the Earth, a lot of things can be mistaken for a fire: a glint, a hot roof, smoke from another fire,” says Chris Van Arsdale, climate and energy research lead at Google Research and chairman of the Earth Fire Alliance. “Detecting fires becomes a game of looking for needles in a world of haystacks. Solving this will enable first responders to act quickly and precisely when a fire is detected.”

Some details of FireSat were unveiled earlier this year. But the organizations involved will announce additional information about their plans today, including the news that Google.org, the company’s charitable arm, has provided $13 million to the program and that the inaugural launch is scheduled to occur next year. 

Reducing the fog of war

The news comes as large fires rage across millions of acres in the western US, putting people and property at risk. The blazes include the Line Fire in Southern California, the Shoe Fly Fire in central Oregon, and the Davis Fire south of Reno, Nevada.

Wildfires have become more frequent, extreme, and dangerous in recent decades. That, in part, is a consequence of climate change: Rising temperatures suck the moisture from trees, shrubs, and grasses. But fires increasingly contribute to global warming as well. A recent study found that the fires that scorched millions of acres across Canada last year pumped out 3 billion tons of carbon dioxide, four times the annual pollution produced by the airline industry.

A treeline ablaze, the sky blotted out by smoke.
GOOGLE

Humans have also increased fire risk by suppressing natural fires for decades, which has allowed fuel to build up in forests and grasslands, and by constructing communities on the edge of wilderness boundaries without appropriate rules, materials, and safeguards.

Observers say that FireSat could play an important role in combating fires, both by enabling fire agencies to extinguish small ones before they grow into large ones and by informing effective strategies for battling them once they've crossed that point.

“What these satellites will do is reduce the fog of war,” says Michael Wara, director of the climate and energy policy program at Stanford University’s Woods Institute for the Environment, who is focused on fire policy issues. “Like when a situation is really dynamic and very dangerous for firefighters and they’re trying to make decisions very quickly about whether to move in to defend structures or try to evacuate people.” 

(Wara serves on the advisory board of the Moore Foundation’s Wildfire Resilience Initiative.)

Some areas, like California, already have greater visibility into the current state of fires or early signs of outbreaks, thanks to technology like Department of Defense satellites, remote camera networks, and planes, helicopters, and drones. But FireSat will be especially helpful for “countries that have less-well-resourced wildland fighting capability,” Wara adds.

Better images, more data, and AI will not be able to fully counter the increased fire dangers. Wara and other fire experts argue that regions need to use prescribed burns and other efforts to more aggressively reduce the buildup of fuel, rethink where and how we build communities in fire-prone areas, and do more to fund and support the work of firefighters on the ground. 

Sounding an earlier alarm for fires will only help reduce dangers when regions have, or develop, the added firefighting resources needed to combat the most dangerous ones quickly and effectively. Communities will also need to put in place better policies to determine what types of fires should be left to burn, and under what conditions.

‘A game changer’

Kate Dargan Marquis, a senior wildfire advisor to the Moore Foundation who previously served as state fire marshal for California, says she can “personally attest” to the difference that such tools will make to firefighters in the field.

“It is a game changer, especially as wildfires are becoming more extreme, more frequent, and more dangerous for everyone,” she says. “Information like this will make a lifesaving difference for firefighters and communities around the globe.”

Kate Dargan Marquis, senior advisor, Moore Foundation.
GOOGLE

Google Research developed the sensors for the satellite and tested them as well as the company’s AI fire detection models by conducting flights over controlled burns in California. Google intends to work with Earth Fire Alliance “to ensure AI can help make this data as useful as possible, and also that wildfire information is shared as widely as possible,” the company said.

Google’s Van Arsdale says that providing visual images of every incident around the world from start to finish will be enormously valuable to scientists studying wildfires and climate change. 

“We can combine this data with Google’s existing models of the Earth to help advance our understanding of fire behavior and fire dynamics across all of Earth’s ecosystems,” he says. “All this together really has the potential to help mitigate the environmental and social impact of fire while also improving people’s health and safety.”

Specifically, it could improve assessments of fire risk, as well as our understanding of the most effective means of preventing or slowing the spread of fires. For instance, it could help communities determine where it would be most cost-effective to remove trees and underbrush. 

Figuring out the best ways to conduct such interventions is another key goal of the program, given their high cost and the limited funds available for managing wildlands, says Genny Biggs, the program director for the Moore Foundation’s Wildfire Resilience Initiative.

The launch

The idea for FireSat grew out of a series of meetings that began with a 2019 workshop hosted by the Moore Foundation, which provided the first philanthropic funding for the program. 

The first satellite, scheduled to be launched aboard a SpaceX rocket early next year, will be fully functional aside from some data transmission features. The goals of the “protoflight” mission include testing the onboard systems and the data they send back. The Earth Fire Alliance will work with a handful of early-adopter agencies to prepare for the next phases. 

The group intends to launch three fully operational satellites in 2026, with additional deployments in the years that follow. Muon Space will build and operate the satellites. 

Agencies around the world should be able to receive hourly wildfire updates once about half of the constellation is operational, says Brian Collins, executive director of the Earth Fire Alliance. It hopes to launch all 52 satellites by around the end of this decade.

Each satellite is designed to last about five years, so the organization will eventually need to deploy 10 more each year to maintain the constellation.
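That replacement rate follows directly from the figures above: a 52-satellite constellation divided by a roughly five-year design life works out to about ten new satellites a year at steady state. A minimal sanity check, using only the numbers cited in this article:

```python
# Steady-state replacement rate for the FireSat constellation,
# based only on the figures cited in the article.
constellation_size = 52   # planned satellites
design_life_years = 5     # approximate lifetime of each satellite

replacements_per_year = constellation_size / design_life_years
print(replacements_per_year)  # 10.4, i.e. roughly 10 launches a year
```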

The Earth Fire Alliance has secured about two-thirds of the funding it needs for the first phase of the program, which includes the first four launches. The organization will need to raise additional money from government agencies, international organizations, philanthropies, and other groups to deploy, maintain, and operate the full constellation. It estimates the total cost will exceed $400 million, which Collins notes “is 1/1000th of the economic losses due to extreme wildfires annually in the US alone.”

Asked if commercial uses of the data could also support the program, including potentially military ones, Collins said in an email: “Adjacent applications range from land use management and agriculture to risk management and industrial impact and mitigation.” 

“At the same time, we know that as large agencies and government agencies adopt FireSat data to support a broad public safety mandate, they may develop all-hazard, emergenc[y] management, and security related uses of data,” he added. “As long as opportunities are in balance with our charter to advance a global approach to wildfire and climate resilience, we welcome new ideas and applications of our data.”

‘Living with fire’

A wide variety of startups have emerged in recent years promising to use technology to reduce the frequency and severity of wildfires—for example, by installing cameras and sensors in forests and grasslands, developing robots to carry out controlled burns, deploying autonomous helicopters that can drop suppressant, and harnessing AI to predict wildfire behavior and inform forest and fire management strategies.

So far, even with all these new tools, it’s still been difficult for communities to keep pace with the rising dangers.

Dargan Marquis—who founded her own wildfire software company, Intterra—says she is confident the incidence of disastrous fires can be meaningfully reduced with programs like FireSat, along with other improved technologies and policies. But she says it’s likely to take decades to catch up with the growing risks, as the world continues warming up.

“We’re going to struggle in places like California, these Mediterranean climates around the world, while our technology and our capabilities and our inventions, etc., catch up with that level of the problem,” she says. 

“We can turn that corner,” she adds. “If we work together on a comprehensive strategy with the right data and a convincing plan over the next 50 years, I do think that by the end of the century, we absolutely can be living with fire.”

Andrew Ng’s new model lets you play around with solar geoengineering to see what would happen

23 August 2024 at 11:00

AI pioneer Andrew Ng has released a simple online tool that allows anyone to tinker with the dials of a solar geoengineering model, exploring what might happen if nations attempt to counteract climate change by spraying reflective particles into the atmosphere.

The concept of solar geoengineering was born from the realization that the planet has cooled in the months following massive volcanic eruptions, including one that occurred in 1991, when Mt. Pinatubo blasted some 20 million tons of sulfur dioxide into the stratosphere. But critics fear that deliberately releasing such materials could harm certain regions of the world, discourage efforts to cut greenhouse-gas emissions, or spark conflicts between nations, among other counterproductive consequences.

The goal of Ng’s emulator, called Planet Parasol, is to invite more people to think about solar geoengineering, explore the potential trade-offs involved in such interventions, and use the results to discuss and debate our options for climate action. The tool, developed in partnership with researchers at Cornell, the University of California, San Diego, and other institutions, also highlights how AI could help advance our understanding of solar geoengineering. 

The current version is bare-bones. It allows users to select different emissions scenarios and various quantities of particles that would be released each year, from 25% of a Pinatubo eruption to 125%. 

Planet Parasol then displays a pair of diverging lines that represent warming levels globally through 2100. One shows the steady rise in temperatures that would occur without solar geoengineering, and the other indicates how much warming could be reduced under your selected scenario. The model can also highlight regional temperature differences on heat maps.

You can also scribble your own rising, falling, or squiggling line representing different levels of intervention across the decades to see what might happen as reflective aerosols are released.

I tried to simulate what’s known as the “termination shock” scenario, exploring how much temperatures would rise if, for some reason, the world had to suddenly halt or cut back on solar geoengineering after using it at high levels. The sudden surge of warming that could occur afterward is often cited as a risk of geoengineering. The model projects that global temperatures would quickly rise over the following years, though they might take several decades to fully rebound to the curve they would have been on if the nations in this simulation hadn’t conducted such an intervention in the first place. 

To be clear, this is an exaggerated scenario, in which I maxed out the warming and the geoengineering. No one is proposing anything like this. I was playing around to see what would happen because, well, that’s what an emulator lets you do.
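The dynamic I was poking at can be sketched with a toy model. To be clear, this is not Planet Parasol's model: the warming rate, cooling strength, and relaxation lag below are invented coefficients, chosen purely to illustrate how an abrupt halt produces a warming surge that later rejoins the no-intervention trajectory.

```python
# Toy "termination shock" sketch -- NOT Planet Parasol's actual model.
# All coefficients are made up for illustration.

def simulate(years, warming_per_year, injection, cooling_per_unit,
             relaxation=0.2):
    """Return yearly global temperature anomalies (degrees C).

    injection[i] is the aerosol level in year i (1.0 = Pinatubo-scale).
    Cooling relaxes toward -cooling_per_unit * injection[i] with a lag,
    so it fades gradually after injection stops.
    """
    temps, baseline, cooling = [], 0.0, 0.0
    for year in range(years):
        baseline += warming_per_year                 # steady background warming
        target = -cooling_per_unit * injection[year]
        cooling += relaxation * (target - cooling)   # lagged aerosol response
        temps.append(baseline + cooling)
    return temps

# Inject at 125% of a Pinatubo eruption for 40 years, then stop abruptly.
injection = [1.25] * 40 + [0.0] * 40
with_geo = simulate(80, 0.03, injection, cooling_per_unit=0.5)
no_geo = simulate(80, 0.03, [0.0] * 80, cooling_per_unit=0.5)

surge = with_geo[45] - with_geo[39]  # rapid rise just after termination
gap = no_geo[-1] - with_geo[-1]      # trajectories eventually reconverge
```

In this sketch the post-termination surge dwarfs the background warming over the same six years, while the end-of-run gap between the two trajectories shrinks to nearly zero—mirroring the decades-long rebound the emulator showed.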

You can give it a try yourself here.

Emulators are effectively stripped-down climate models. They’re not as precise, since they don’t simulate as many of the planet’s complex, interconnected processes. But they don’t require nearly as much time and computing power to run.

International negotiators and policymakers often use climate emulators, like En-ROADS, to get a quick, rough sense of the impact that potential rules or commitments on greenhouse-gas emissions could have. 

The Parasol team wanted to develop a similar tool specifically to allow people to evaluate the potential effects of various solar geoengineering scenarios, says Daniele Visioni, a climate scientist focused on solar geoengineering at Cornell, who contributed to Planet Parasol (as well as an earlier emulator).

Climate models are steadily becoming more powerful, simulating more Earth system processes at higher resolutions, and spitting out more and more information as they do. AI is well suited to help draw meaning and understanding from that data. It’s getting ever better at spotting patterns within huge data sets and predicting outcomes based on them.

Ng’s machine-learning group at Stanford has applied AI to a growing list of climate-related subjects. Among other projects, it has developed tools to identify sources of methane emissions, recognize the drivers of deforestation, and forecast the availability of solar energy. Ng also helps oversee the AI for Climate Change bootcamp at the university.

But he says he’s been spending more and more of his time exploring the potential of solar geoengineering (sometimes referred to as solar radiation management, or SRM), given the threat of climate change and the role that AI can play in advancing the research field. 

There are “many things one can do—and that society broadly should work on—to help address climate change, first and foremost decarbonization,” he wrote in an email. “And SRM is where I’m focusing most of my climate-related efforts right now, given that this is one of the places where engineers and researchers can make a big difference (in addition to decarbonization).”

In a 2022 piece, Ng noted that AI could play several important roles in geoengineering research, including “autonomously piloting high-altitude drones” that would disperse reflective particles, modeling effects of geoengineering across specific regions, and optimizing techniques. 

Planet Parasol itself is built on top of another climate emulator, developed by researchers at the University of Leeds and the University of Oxford, that relies on the rules of physics to project global average temperatures under various scenarios. Ng’s team then harnessed machine learning to estimate the local cooling effects that could result from varying levels of solar geoengineering, says Jeremy Irvin, a grad student in his research group at Stanford.

One of the clearest limits of the current version of the tool, however, is that the results look dazzling. In the scenarios I tested, solar geoengineering cleanly cuts off the predicted rise in temperatures over the coming decades, which it may well do. 

That might lead the casual user of such a tool to conclude: Cool, let’s do it!

But even if solar geoengineering does help the world on average, it could still have negative effects, such as harming the protective ozone layer, disturbing regional rainfall patterns, undermining agricultural productivity, and changing the distribution of infectious diseases. 

None of that is incorporated in the results as yet. Plus, a climate emulator isn’t equipped to address deeply complex societal concerns. For instance, does researching such possibilities ease pressure to address the root causes of climate change? Can a tool that works at the scale of the planet ever be managed in a globally equitable way? Planet Parasol won’t be able to answer either of those questions.

Holly Buck, an environmental social scientist at the University at Buffalo and author of After Geoengineering, questioned the broader value of such a tool along similar lines.

In focus groups that she has conducted on the topic of solar geoengineering, she’s found that people easily grok the concept that it can curb warming, even without seeing the results plotted out in a model.

“They want to hear about what can go wrong, the impact on precipitation and extreme weather, who will control it, what it means existentially to fail to deal with the root of the problem, and so on,” she said in an email. “So it is hard to imagine who would actually use this and how.”

Visioni explained that the group did make a point of highlighting major challenges and concerns at the top of the page. He added that they intend to improve the tool over time in ways that will provide a fuller sense of the uncertainties, trade-offs, and regional impacts.

“This is hard, and I struggled a lot with your same observation,” Visioni wrote in an email. “But at the same time … I came to the conclusion it’s worth putting something down and work[ing] to improve it with user feedback, rather than wait until we have the perfect, nuanced version.”

As to the value of the tool, Irvin added that seeing the temperature reduction laid out clearly can make a “stronger, lasting impression.” 

“We are calling for more research to push the science forward about other areas of concern prior to potential implementation, and we hope the tool helps people understand the capabilities of SAI [stratospheric aerosol injection] and support future research on it,” he said.

From Meta CTO to climate tech investor: Mike Schroepfer on his big pivot

29 July 2024 at 11:00

As the pandemic locked down cities in early 2020, Mike Schroepfer, then the chief technology officer of Meta, found himself with more free time than he’d ever had in his career. 

In quiet moments that would have been filled with work travel, social events, or his children’s school activities, he reflected on how well humanity can pull together in the face of an acute crisis—implementing public health measures, mass-producing tests, and turbocharging the development of vaccines. 

But the experience also reinforced his view that we are particularly bad at addressing slow-motion catastrophes like climate change, where the risks are grave and growing but mostly looming in the distance. 

As he learned more about global warming, Schroepfer came to believe he had a role to play: By leveraging his technical expertise and financial resources, he could accelerate essential research and help society develop the understanding and tools we may need to avoid or prepare for the escalating dangers.

As the threat of climate change consumed more and more of his time, he decided in 2021 to step down from his CTO role and dedicate himself to addressing the challenge through both philanthropic and for-profit efforts. (He remains a senior fellow at Meta.)

I’m willing to take a lot of risks that these things just don’t work and that people make fun of me for wasting my money, and I’m willing to stick it out and keep trying. 

Mike Schroepfer

In May 2023, he announced Gigascale Capital, a venture fund backing early-stage climate tech companies, including startups working to commercialize fusion, cut landfill emissions, and reduce methane pollution from cattle. That summer, he also launched Carbon to Sea, a $50 million nonprofit effort to accelerate research on ocean alkalinity enhancement (OAE), a means of drawing down more planet-warming carbon dioxide into the oceans by adding substances like olivine, basalt, or lime.

This year, as MIT Technology Review first reported, he launched Outlier Projects, which is donating grants to research groups working in three areas: removing greenhouse gas from the air, preventing glaciers from collapsing, and exploring the contentious idea of solar geoengineering, a catch-all term for a variety of ways that we might be able to cool the planet by casting more heat back into space.

Last week, Schroepfer sat down with MIT Technology Review in his offices at Gigascale Capital, in downtown Palo Alto, California, to discuss his approach to the problem, why he’s willing to spend money on controversial climate interventions, and what AI and the presidential election could mean for progress on clean energy.

This interview has been edited for length and clarity.


Is there a unifying philosophy across your climate efforts? 

The foundation is that when you get a set of people and you get them all pointed in the same direction, and they wake up every morning and say “We’re going to go solve this problem and nothing else matters,” it’s often surprising what they can get done. 

I think the other unifying theme, which also unifies my career, is: Technology is the only thing I have seen that removes constraints. 

I just saw this again and again and again at Meta, where we would reduce cost, improve efficiency, develop a new technology, and then a thing that was a hard constraint before just got removed. 

Through the proper development and deployment of technology, we can remove either-or decisions and move to the world I want to move to, which is a yes-and decision. 

How do we bring the standard of living of 8 billion people up to those of the West and have a planet that my children can live on? That’s really the question, and the only answer I can see is technology.

There are a variety of potential approaches to ocean carbon removal—everything from sinking kelp, which doesn’t seem to be working that well, to iron fertilization and other things. So why enhanced ocean alkalinity? Why was that the one where you said, let’s dive deep?

In reading about all the different approaches, it stood out as the most likely, the most scalable, the most cost effective, and the most permanent, yet the least well understood.

And so it was super high impact if it works, but we need to know more. 

I had no prior bias to this. I like kelp. I like all these things. I’m not a one-solution sort of person. I want as many things to work as possible. 

As an engineer, my reading of technological deployment is that the relatively elegant, simple solutions end up being the ones that scale. And OAE is about as simple as it gets. 

Let’s switch gears to a touchy topic: solar geoengineering. Why did you decide that was an important area where you wanted to support research?

We did a broad search for problems that are defined as high impact, high scientific uncertainty. Those are the ones that I think fit what we’re comfortable with and good at. And as we did that search, the two—besides carbon removal—that came out were solar radiation management (SRM) and glacier stabilization.

SRM felt like an orthogonal solution because it is a way to make rapid cooling if we need to—if this becomes a humanitarian crisis.

We’re already losing lives due to heat, but it’s going to get to the point where people aren’t going to tolerate it, and the question is: What do you do at that point? 

Humans are good in a crisis, but it felt like, hey, we ought to get started now. To really start doing the rigorous work to understand “Does this work? Is it effective? What are the safety concerns?” while we’re not in a crisis moment, so that we’re prepared.

You mentioned glacier restoration as well. Why was that a problem you wanted to contribute to?

Assume we solve every other problem. We remove all the carbon, we electrify everything. We’ve still got a sea-level-rise problem, mostly because of glaciers that are moving. 

One of the approaches is to simply pump water out of the bottom of a glacier to remove the lubrication layer that’s causing it to move. We have glaciers with boreholes already in them that are highly instrumented, and they’re already moving. So dropping a pump in there and pumping out water is a very, very, very low-risk activity that starts to answer some basic questions, like: Does this work at all? Would it be feasible? Would it be overwhelmingly impossible because of energy or cost needs?

Whatever approach you take to it, we’re talking about a massive infrastructure project that’s just gonna be incredibly costly. On the other hand, if the Thwaites Glacier (sometimes called the Doomsday Glacier) does slide into the sea, then every city around the world, plus every low-lying nation, has to do these massive infrastructure projects.

Can we pull together as a global society to address this thing in the most efficient way, or are we just going to leave everyone to deal with it on their own? 

This is where I think people underweight the power of the prototype or the power of the proof of concept.

We can talk theoretically. I can bring scientists over and they can say, “I’ve got a big spreadsheet which explains to you how expensive this is going to be.”

I don’t know. Maybe they’re right. Maybe they’re not. Instead, let’s get on a plane. And let me show you. It was moving this fast. We did this. It’s now moving this fast. Here’s the pump. We’re pumping water out. 

The Thwaites Glacier.
KARI SCAMBOS/NSIDC

I think a lot of what my role in the world is to do is to get us to there. I’m willing to take a lot of risks that these things just don’t work and that people make fun of me for wasting my money, and I’m willing to stick it out and keep trying. 

What I hope I do is put a bunch of proof points on the board, so that when the time comes that we need to start making decisions about these things, we’re not starting from scratch—we’re starting from a running start.

And you think that just having a greater amount of certainty and clarity—in terms of what the risks are, and how viable these solutions are, and what they will cost, and how we do it—can change the dynamics …

I think it does.

… where suddenly you could see nations pulling together in a way where it’s hard to imagine when there’s so much uncertainty? 

Yeah. Or it goes the other way, where you decide, “Hey, we’ve had all these crazy ideas, and none of them are going to work, so we got to do something else.” 

But as you say, the alternatives are moving lots of people or building big seawalls, and those are going to get pretty overwhelming pretty quickly. 

My career has been putting tools in the toolbox. My job was to stock that toolbox such that when we needed it, we were ready to go. And I’m applying that same approach here, which is just like, “Hey, what are the things that I can help push forward in some way so that if we need them, or if we need to understand them, we’re a lot further along than we are today?” Right? 

We’ve mostly talked about your philanthropic efforts so far, but you also set up Gigascale Capital, a venture fund. How does your investment strategy and approach differ from that of a traditional tech venture firm? For instance, are you investing over longer time horizons than the standard five to 10 years? 

We’re here to prove that if you pick the right climate tech companies with the right founders, that can be an amazing business. They’re disrupting trillion-dollar industries, and so you ought to be able to get good returns on that. And that’s what’s going to be required to get a bunch of people to open up their checkbooks and really spend the trillions of dollars we need a year to solve these problems.

So we look for companies with—we’ve jokingly called it at times the “green discount.”

Those trends are freight trains that are going down the hill and are pretty hard to stop.

Mike Schroepfer

Like, “Hey, this is a better product. [whispers] By the way, it’s better for the environment.” Sort of the little asterisk if you read the fine print at the bottom. 

The starting point is, the consumer wants it because it provides a lot of benefits; enterprise wants it because it’s cheaper. That is the selling point of all the products we back. And then it also happens to be a lot lower carbon, or zero carbon, compared to whatever alternative it’s displacing.

Your mentioning the green discount reminds me of Bill Gates’s green premium (the Microsoft cofounder’s thesis that it takes heavy investments in climate tech to reduce their cost premium relative to polluting products over time). There are some products, like green steel and green cement, where the alternatives are more expensive. Does that mean that you’re not investing in those areas, or is it just that you would with the hope that eventually they’ll be able to get those costs down?

Technology takes time to incubate, so no new technology out of the gate is better, faster, cheaper. But in the life cycle of the company, in five to 10 years—I have to believe, at scale, you can be cost competitive or have a cost advantage versus the alternatives. So that means that, yeah, we only invest in things that we think can either be cost competitive or have some other co-benefit that is a decision maker.

This is why I very cleanly separated philanthropic work where it’s like, “I get nothing out of this—we’re gonna send money away and hope public good, papers, knowledge gets created.” 

And the venture fund is “Nope, this is the capitalistic endeavor to prove to people that if you smartly choose the right solutions, you can make money and fund the low-carbon economy.” That is the bet we’re making.

Given your recent job leading tech and AI efforts at Meta, I’m curious about your thinking about the potential tension between AI energy consumption that’s very much in the news right now and clean energy and climate goals. What do you think companies will need to do to stay on track with their own climate commitments as data centers’ energy demands rise?

Two thoughts on this.

AI is a foundational technology that can enable a lot of benefits for us moving forward. Part of why I still have an affiliation with Meta is because a lot of the work I do there is on Llama, our open-source model, which is allowing that technology to be used by lots of different people in the industry. 

I think foundational technology being open is one of the ways in which humanity moves forward faster and gets more people into prosperity, which is what I care about. 

In terms of energy consumption, I start with let’s get AI as fast as we can, because I think it is good. 

In my time at Meta, we many, many times had multiple-orders-of-magnitude improvements in efficiency or power use. 

So I think the industry right now is trying to build the best thing they can, and that consumes a lot of power and energy. I think if we get to a point where that’s a huge problem and we need to really optimize it from an efficiency standpoint, there are a lot of levers to pull there.

And AI or no AI, if you want to electrify everything and remove all fossil fuels, we just have a tremendous amount of clean energy we need to bring on the grid, right? That problem exists whether you have AI or not. So I think it’s a little bit of an over-highlighted sideshow to the real game, which is: How do we get tens of gigawatts of clean energy onto the grid as fast as possible every year? How do we get more solar, more wind, more storage? Can we bring fusion online?

To me, these are the humanitarian game-changers; it is the sort of unlock for a lot of other things.

I hate to get political here, but in light of these recent Supreme Court decisions about federal agency powers, I am curious what you think a Trump win in November might mean for climate and clean energy progress.

The short answer is, I’m not sure. 

Okay, then maybe it’s the same answer to my next question, which is: What do you think it might mean for financial opportunities in the sector, to the degree that Trump has said he would try to roll back Inflation Reduction Act incentives for EVs and other things? Do you think it could weaken the case for private investment into some of these areas?

This goes back to when you asked, What do we believe? What do we invest in? 

Basically, it has to start with the business case: My product is better or cheaper. I think that investment case is durable regardless. I think these things like the IRA can accelerate things and make things easier, but if you remove them, I don’t think that eliminates the fundamental advantages some of these technologies have. 

The exciting thing about this world is that an electric powertrain on a vehicle is fundamentally much more efficient than a gas powertrain—like 3 to 4X more efficient. So I should be able to build a product that is very cost advantaged to these petrol-burning things. There’s a bunch of issues with the scale and customer adoption and things like that, but the fundamentals are in my favor. 
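That efficiency gap translates directly into running cost. Here is a back-of-the-envelope cost-per-mile comparison; all prices and consumption figures below are illustrative assumptions, not numbers from the interview:

```python
# Back-of-the-envelope cost per mile: EV vs. gas car.
# All figures are illustrative assumptions, not data from the interview.
ev_kwh_per_mile = 0.30        # typical EV consumption, kWh/mile
electricity_price = 0.13      # $/kWh
gas_mpg = 30                  # gas car fuel economy, miles/gallon
gas_price = 3.50              # $/gallon

ev_cost_per_mile = ev_kwh_per_mile * electricity_price   # ~$0.039/mile
gas_cost_per_mile = gas_price / gas_mpg                  # ~$0.117/mile

print(round(gas_cost_per_mile / ev_cost_per_mile, 1))    # -> 3.0 (~3x cheaper per mile)
```

Under these assumed prices the per-mile fuel cost advantage lands right in the 3–4X range the interview cites.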

And I think we see this trend happening in a lot of things. Solar is the cheapest form of energy generation we’ve ever had, and that’s going to continue as we massively increase manufacturing capacity. Batteries have gone down an unbelievable cost curve. And each year, we’re making more batteries than we’ve ever made before.

One of my favorite things is Wright’s Law: this idea that as you double the scale of your production, you generally see a decrease in cost. It varies from product to product, but for batteries, it’s about 20% or so every time we double the production.

If my product gets cheaper by about 5% to 10% a year, at some point I’m gonna win. Those trends are freight trains that are going down the hill and are pretty hard to stop.
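Wright's Law as described above can be sketched numerically. The function name and the example figures (a $100/kWh starting cost, an 8x scale-up) are hypothetical; the ~20% learning rate for batteries is the one cited in the interview:

```python
import math

def wrights_law_cost(initial_cost, initial_units, cumulative_units, learning_rate=0.20):
    """Unit cost after scaling production, per Wright's Law.

    Each doubling of cumulative production cuts unit cost by
    `learning_rate` (~20% for batteries, per the interview).
    """
    doublings = math.log2(cumulative_units / initial_units)
    return initial_cost * (1 - learning_rate) ** doublings

# Hypothetical: packs start at $100/kWh; production scales 8x (3 doublings).
print(round(wrights_law_cost(100, 1, 8), 2))  # -> 51.2
```

Three doublings at a 20% learning rate roughly halve the unit cost, which is why sustained manufacturing scale-up compounds into the "freight train" cost trend described above.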

Beer, hydrogen, and heat: Why the US is still trying to make mirror-magnified solar energy work

25 July 2024 at 17:00

The US is continuing its decades-long effort to commercialize a technology that converts sunlight into heat, funding a series of new projects using that energy to brew beer, produce low-carbon fuels, or keep grids running.

On July 25, the Department of Energy will announce it is putting $33 million into nine pilot or demonstration projects based on concentrating solar thermal power, MIT Technology Review can report exclusively. The technology uses large arrays of mirrors to concentrate sunlight onto a receiver, where it’s used to heat up molten salt, ceramic particles, or other materials that can store that energy for extended periods. 

“Under the Biden-Harris administration, DOE continues to invest in the next-generation solar technologies we need to tackle the climate crisis and ensure American scientific innovation remains the envy of the world,” Energy Secretary Jennifer Granholm said in a statement.

The DOE has been funding efforts to get concentrated solar energy off the ground since at least the 1970s. The idea was initially driven in part by the quest to develop more renewable, domestic sources of energy during the oil crisis of that era. 

But early commercial efforts to produce clean electricity based on this technology have been bedeviled by high costs, low output, and other challenges. 

Researchers continued to try to drive the field forward, in part by moving to higher-temperature systems that are more efficient and switching to new types of materials that can withstand them. The focus of the concentrating solar field has also shifted away from using the technology to produce electricity—a job that its solar photovoltaic cousin now does incredibly effectively, cheaply, and on a massive scale—and toward using it to provide the heat needed for various industrial processes or as a form of very long-duration energy storage for grids. 

Indeed, a core promise of the technology is that heat can be stored more efficiently than electricity, potentially offering an alternative to very expensive large-scale battery plants. This could be especially useful for dealing with prolonged dips in renewable generation as solar, wind, and other fluctuating sources come to produce a larger and larger share of electricity.

Among the awardees:

  • More than $7 million of the DOE funds will support a project at Firestone Walker Brewery in Paso Robles, California, which will tap into solar thermal energy to produce the steam needed for its lineup of IPAs and other beers.
  • Another $6 million will go to Premier Resource Management’s planned concentrating solar power plant in Bakersfield, California, which would store thermal energy in retired fracking sites.
  • Researchers at West Virginia University, who are working with NASA, secured $5 million to explore the use of solar thermal to produce a clean form of hydrogen, a fuel as well as a feedstock in the production of fertilizer, steel, and other industrial goods.

The DOE funds pilot and demonstration projects in the hopes of kick-starting commercialization of emerging energy technologies, helping research groups or companies to refine them, scale them up, and drive down costs.

In the case of concentrating solar thermal, costs still need to fall by about half to “really unlock broader applications,” says Becca Jones-Albertus, director of DOE’s Solar Energy Technologies Office.

But she says the department continues to invest in the development of the technology because it remains one of the most promising ways to address three big areas where the world still needs better solutions to cut climate-warming emissions: long-duration grid storage, industrial heat, and steady forms of carbon-free electricity.

Google, Amazon and the problem with Big Tech’s climate claims

17 July 2024 at 11:00

Last week, Amazon trumpeted that it had purchased enough clean electricity to cover the energy demands of all the offices, data centers, grocery stores, and warehouses across its global operations, seven years ahead of its sustainability target. 

That news closely followed Google’s acknowledgment that the soaring energy demands of its AI operations helped ratchet up its corporate emissions by 13% last year—and that it had backed away from claims that it was already carbon neutral.

If you were to take the announcements at face value, you’d be forgiven for believing that Google is stumbling while Amazon is speeding ahead in the race to clean up climate pollution. 

But while both companies are coming up short in their own ways, Google’s approach to driving down greenhouse-gas emissions is now arguably more defensible. 

In fact, there’s a growing consensus that how a company gets to net zero is more important than how fast it does so. And a new school of thought is emerging that moves beyond the net-zero model of corporate climate action, arguing that companies should focus on achieving broader climate impacts rather than trying to balance out every ton of carbon dioxide they emit. 

But to understand why, let’s first examine how the two tech giants’ approaches stack up, and where company climate strategies often go wrong.

Perverse incentives

The core problem is that the costs and complexity of net-zero emissions plans, which require companies to cut or cancel out every ton of climate pollution across their supply chains, can create perverse incentives. Corporate sustainability officers often end up pursuing the quickest, cheapest ways of cleaning up a company’s pollution on paper, rather than the most reliable ways of reducing its emissions in the real world. 

That may mean buying inexpensive carbon credits to offset ongoing pollution from their direct operations or that of their suppliers, rather than undertaking the tougher task of slashing those emissions at the source. Those programs can involve paying other parties to plant trees, restore coastal ecosystems, or alter agriculture practices in ways that purport to reduce emissions or pull carbon dioxide out of the air. The snag is, numerous studies and investigative stories have shown that such efforts often overstate the climate benefits, sometimes wildly.  

Net-zero goals can also compel companies to buy what are known as renewable energy credits (RECs), which ostensibly support additional generation of renewable electricity but raise similar concerns that the climate gains are overstated.

The argument for RECs is that companies often can’t purchase a pure stream of clean electricity to power their operations, since grid operators rely on a mix of natural gas, coal, solar, wind, and other sources. But if those businesses provide money or an indication of demand that spurs developers to build new renewables projects and generate more clean electricity than they would have otherwise, the companies can then claim this cancels out ongoing pollution from the electricity they use.

Experts, however, are less and less convinced of the value of RECs at this stage.

The claim that clean-energy projects wouldn’t have been built without that added support is increasingly unconvincing in a world where those facilities can easily compete in the marketplace on their own, Emily Grubert, an associate professor at Notre Dame, previously told me. And if a company’s purchase of such credits doesn’t bring about changes that reduce the emissions in the atmosphere, it can’t balance out the company’s ongoing pollution. 

‘Creative accounting’

For its part, Amazon is relying on both carbon credits and RECs. 

In its sustainability report, the company says that it reached its clean-electricity targets and drove down emissions by improving energy efficiency, buying more carbon-free power, building renewables projects at its facilities, and supporting such projects around the world. It did this in part by “purchasing additional environmental attributes (such as renewable energy credits) to signal our support for renewable energy in the grids where we operate, in line with the expected generation of the projects we have contracted.”

But there’s yet another issue that can arise when a company pays for clean power that it’s not directly consuming, whether through RECs or through power purchase agreements made before a project is built: Merely paying for renewable electricity generation that occurred at some point, somewhere in the world, isn’t the same as procuring the amount of electricity that the company consumed in the specific places and times that it did so. As you may have heard, the sun stops shining and the wind stops blowing, even as Amazon workers and operations keep grinding around the world and around the clock. 

Paying a solar-farm operator some additional money for producing electricity it was already going to generate in the middle of the day doesn’t in any meaningful way reverse the emissions that an Amazon fulfillment center or server farm produces by, say, drawing electricity from a natural-gas power plant two states away in the middle of the night. 
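The gap between matching totals on paper and matching consumption as it happens can be sketched with a toy load profile. All numbers here are hypothetical:

```python
# Toy profiles (MWh) for one day, compressed to four periods:
# night, morning, midday, evening. Numbers are hypothetical.
consumption  = [10, 10, 10, 10]   # a data center draws power around the clock
solar_output = [0, 15, 25, 0]     # a contracted solar farm peaks at midday

# Volumetric (annual) matching: total generation covers total consumption.
annual_matched = sum(solar_output) >= sum(consumption)                   # True (40 >= 40)

# Hourly (24/7) matching: every period of demand is covered as it occurs.
hourly_matched = all(s >= c for s, c in zip(solar_output, consumption))  # False

print(annual_matched, hourly_matched)  # True False
```

The totals balance, so a volumetric claim holds; but at night the facility is still drawing from whatever the grid is burning, which is exactly the gap hourly matching is meant to close.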

“The reality on the ground is that its data centers are driving up demand for fossil fuels,” argued a report last week from Amazon Employees for Climate Justice, a group of workers that has been pushing the company to take more aggressive action on climate change. 

The organization said that a significant share of Amazon’s RECs aren’t driving development of new projects. It also stressed that those payments and projects often aren’t generating electricity in the same areas and at the same times that Amazon is consuming power.

The employee group estimates that 78% of Amazon’s US energy comes from nonrenewable sources and accuses the company of using “creative accounting” to claim it’s reached its clean-electricity goals.

To its credit, Amazon is investing billions of dollars in renewables, electrifying its fleet of delivery vehicles, and otherwise making real strides in reducing its waste and emissions. In addition, it’s lobbying US legislators to make it easier to permit electric transmission projects, funding more reliable forms of carbon removal, and working to diversify its mix of electricity sources. The company also insists it’s being careful and selective about the types of carbon offsets it supports, investing only in “additional, quantifiable, real, permanent, and socially beneficial” projects.

“Amazon is focused on making the grid cleaner and more reliable for everyone,” the company said in response to an inquiry from MIT Technology Review. “An emissions-first approach is the fastest, most cost-effective and scalable way to leverage corporate clean-energy procurement to help decarbonize global power grids. This includes procuring renewable energy in locations and countries that still rely heavily on fossil fuels to power their grids, and where energy projects can have the biggest impact on carbon reduction.”

The company has adopted what’s known as a “carbon matching” approach (which it lays out further here), stressing that it wants to be sure the emissions reduced through its investments in renewables equal or exceed the emissions it continues to produce. 

But a recent study led by Princeton researchers found that carbon matching had a “minimal impact” on long-term power system emissions, because it rarely helps get projects built or clean energy generated where those things wouldn’t have happened anyway.

“It’s an offsetting scheme at its core,” Wilson Ricks, an author of the study and an energy systems researcher at Princeton, said of the method, without commenting on Amazon specifically. 

(Meta, Salesforce, and General Motors have also embraced this model, the study notes.)

The problem in asserting that a company is effectively running entirely on clean electricity, when it’s not doing so directly and may not be doing so completely, is that it takes off any pressure to finish the job for real. 

Backing off claims of carbon neutrality

Google has made its own questionable climate claims over the years as well, and it faces growing challenges as the energy it uses for artificial intelligence soars. 

But it is striving to address its power consumption in arguably more defensible ways and now appears to be taking some notable course-correcting steps, according to its recent sustainability report.

Google says that it’s no longer buying carbon credits that purport to prevent emissions. With this change, it has also backed away from the claim that it had already achieved carbon neutrality across its operations years ago.

“We’re no longer procuring carbon avoidance credits year-over-year to compensate for our annual operational emissions,” the company told MIT Technology Review in a statement. “We’re instead focusing on accelerating an array of carbon solutions and partnerships that will help us work toward our net-zero goal, while simultaneously helping develop broader solutions to mitigate climate change.”

Notably, that includes funding the development of more expensive but possibly more reliable ways of pulling greenhouse gas out of the atmosphere through direct air capture machines or other methods. The company pledged $200 million to Frontier, an effort to pay in advance for one billion tons of carbon dioxide that startups will eventually draw down and store. 

Those commitments may not allow the company to make any assertions about its own emissions today, and some of the early-stage approaches it funds might not work at all. But the hope is that these sorts of investments could help stand up a carbon removal industry, which studies find may be essential for keeping warming in check over the coming decades. 

Clean power around the clock

In addition, for several years now Google has worked to purchase or otherwise support generation of clean power in the areas where it operates and across every hour that it consumes electricity—an increasingly popular approach known as 24/7 carbon-free energy.

The idea is that this will stimulate greater development of what grid operators increasingly need: forms of carbon-free energy that can run at all hours of the day (commonly called “firm generation”), matching up with the actual hour-by-hour energy demands of corporations. That can include geothermal plants, nuclear reactors, hydroelectric plants, and more.

More than 150 organizations and governments have now signed the 24/7 Carbon-Free Energy Compact, a pledge to ensure that clean-electricity purchases match up hourly with their consumption. Those include Google, Microsoft, SAP, and Rivian.

The Princeton study notes that hourly matching is more expensive than other approaches but finds that it drives “significant reductions in system-level CO2 emissions” while “incentivizing advanced clean firm generation and long-duration storage technologies that would not otherwise see market uptake.”

In Google’s case, pursuing 24/7 matching has steered the company to support more renewables projects in the areas where it operates and to invest in more energy storage projects. It has also entered into purchase agreements with power plants that can deliver carbon-free electricity around the clock. These include several deals with Fervo Energy, an enhanced-geothermal startup.

The company says its goal is to achieve net-zero emissions across its supply chains by 2030, with all its electricity use synced up, hour by hour, with clean sources across every grid it operates on.

Energy-hungry AI

Which brings us back to the growing problem of AI energy consumption.

Jonathan Koomey, an independent researcher studying the energy demands of computing, argues that the hue and cry over rising electricity use for AI is overblown. He notes that AI accounts for only a sliver of overall energy consumption from information technology, which produces about 1.4% of global emissions.

But major data center companies like Google, Amazon, and others will need to make significant changes to ensure that they stay ahead of rising AI-driven energy use while keeping on track with their climate goals.

They will have to improve overall energy efficiency, procure more clean energy, and use their clout as major employers to push utilities to increase carbon-free generation in the areas where they operate, he says. But the clear focus must be on directly cutting corporate climate pollution, not mucking around with RECs and offsets.

“Reduce your emissions; that’s it,” Koomey says. “We need actual, real, meaningful emissions reductions, not trading around credits that have, at best, an ambiguous effect.”

Google says it’s already making progress on its AI footprint, while stressing that it’s leveraging artificial intelligence to find ways to drive down climate pollution across sectors. Those include efforts like Tapestry, a project within the company’s X “moonshot factory” to create more efficient and reliable electricity grids, as well as a Google Research collaboration to determine airline flight paths that produce fewer heat-trapping cirrus clouds.

“AI holds immense promise to drive climate action,” the company said in its report.

The contribution model

The contrasting approaches of Google and Amazon call to mind an instructive hypothetical that a team of carbon market researchers sketched out in a paper this January. They noted that one company could do the hard, expensive work of directly eliminating nearly every ton of its emissions, while another could simply buy cheap offsets to purportedly address all of its own. In that case the first company would have done more actual good for the climate, but only the latter would be able to say it had reached its net-zero target.

Given these challenges and the perverse incentives driving companies toward cheap offsets, the authors have begun arguing for a different approach, known as the “contribution model.”

Like Koomey and others, they stress that companies should dedicate most of their money and energy to directly cutting their emissions as much as possible. But they assert that companies should adopt a new way of dealing with what’s left over (either because that remaining pollution is occurring outside their direct operations or because there are not yet affordable, emissions-free alternatives).

Instead of trying to cancel out every ongoing ton of emissions, a company might pick a percentage of its revenue or set a defensible carbon price on those tons, and then dedicate all that money toward achieving the maximum climate benefit the money can buy, says Libby Blanchard, a research scholar at the University of Cambridge. (She coauthored the paper on the contribution model with Barbara Haya of the University of California, Berkeley, and Bill Anderegg at the University of Utah.)
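The budget mechanics Blanchard describes come down to simple arithmetic. The figures below are hypothetical, chosen only to make the calculation concrete:

```python
# Contribution-model budget: price residual emissions instead of
# claiming to offset them ton for ton. All figures are hypothetical.
residual_tons = 120_000       # CO2e left over after direct emissions cuts
carbon_price = 50             # $/ton internal carbon price the company sets

budget = residual_tons * carbon_price
print(f"${budget:,} contributed to climate projects")  # -> $6,000,000 contributed to climate projects
```

The company then spends that fixed budget wherever it buys the most climate benefit, rather than buying exactly 120,000 tons of credits of whatever quality the market offers.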

That could mean funding well-managed forestry projects that help trap carbon dioxide, protect biodiversity, and improve air and water quality. It could mean supporting research and development on the technologies still needed to slow global warming and efforts to scale them up, as Google seems to be doing. Or it could even mean lobbying for stricter climate laws, since few things can drive change as quickly as public policy. 

But the key difference is that the company won’t be able to claim that those actions canceled out every ton of remaining emissions—only that it took real, responsible steps to “contribute” to addressing the problem of climate change. 

The hope is that this approach frees companies to focus on the quality of the projects they fund, not the quantity of cheap offsets they buy, Blanchard says.

It could “replace this race to the bottom with a race to the top,” she says.

As with any approach put before profit-motivated companies that employ ranks of savvy accountants and attorneys, there will surely be ways to abuse this method in the absence of appropriate safeguards and oversight.

And plenty of companies may refuse to adopt it, since they won’t be able to claim they’ve achieved net-zero emissions, which has become the de facto standard for corporate climate action.

But Blanchard says there’s one obvious incentive for them to move away from that goal.

“There’s way less risk that they’ll be sued or accused of greenwashing,” she says.
