
Trump’s win is a tragic loss for climate progress

7 November 2024 at 00:46

Donald Trump’s decisive victory is a stunning setback for the fight against climate change.

The Republican president-elect’s return to the White House means the US is going to squander precious momentum, unraveling hard-won policy progress that was just beginning to pay off, all for the second time in less than a decade. 

It comes at a moment when the world can’t afford to waste time, with nations far off track from any emissions trajectories that would keep our ecosystems stable and our communities safe. Under the policies in place today, the planet is already set to warm by more than 3 °C over preindustrial levels in the coming decades.

Trump could push the globe into even more dangerous terrain by defanging President Joe Biden’s signature climate laws. In fact, a second Trump administration could boost greenhouse-gas emissions by 4 billion tons through 2030 alone, according to an earlier analysis by Carbon Brief, a well-regarded climate news and data site. That would exacerbate the dangers of heat waves, floods, wildfires, droughts, and famine and increase deaths and disease from air pollution, inflicting some $900 billion in climate damages around the world, Carbon Brief found.

I started as the climate editor at MIT Technology Review just as Trump came into office the last time. Much of the early job entailed covering his systematic unraveling of the modest climate policy and progress that President Barack Obama had managed to achieve. I fear it will be far worse this time, as Trump storms into office feeling empowered and aggrieved, and ready to test the rule of law and crack down on dissent.

This time his administration will be staffed all the more by loyalists and ideologues, who have already made plans to force out civil servants with expertise and experience from federal agencies including the Environmental Protection Agency. He’ll be backed by a Supreme Court that he moved well to the right, and which has already undercut landmark environmental doctrines and weakened federal regulatory agencies.

This time the setbacks will sting more, too, because the US did finally manage to pass real, substantive climate policy, through the slimmest of congressional margins. The Inflation Reduction Act and Bipartisan Infrastructure Law allocated massive amounts of government funding to accelerating the shift to low-emissions industries and rebuilding the US manufacturing base around a clean-energy economy. 

Trump has made clear he will strive to repeal as many of these provisions as he can, tempered perhaps only by Republicans who recognize that these laws are producing revenue and jobs in their districts. Meanwhile, throughout the prolonged presidential campaign, Trump or his surrogates pledged to boost oil and gas production, eliminate federal support for electric vehicles, end pollution rules for power plants, and remove the US from the Paris climate agreement yet again. Each of those goals stands in direct opposition to the deep, rapid emissions cuts now necessary to prevent the planet from tipping past higher and higher temperature thresholds.

Project 2025, considered a blueprint for the early days of a second Trump administration despite his insistence to the contrary, calls for dismantling or downsizing federal institutions including the National Oceanic and Atmospheric Administration and the Federal Emergency Management Agency. That could cripple the nation’s ability to forecast, track, or respond to storms, floods, and fires like those that have devastated communities in recent months.

Observers I’ve spoken to fear that the Trump administration will also return the Department of Energy, which under Biden had evolved its mission toward developing low-emissions technologies, to the primary task of helping companies dig up more fossil fuels.

The US election could create global ripples as well, and very soon. US negotiators will meet with their counterparts at the annual UN climate conference that kicks off next week. With Trump set to move back into the White House in January, they will have little credibility or leverage to nudge other nations to step up their commitments to reducing emissions. 

But those are just some of the direct ways that a second Trump administration will enfeeble the nation’s ability to drive down emissions and counter the growing dangers of climate change. He also has considerable power to stall the economy and sow international chaos amid escalating conflicts in Europe and the Middle East. 

Trump’s eagerness to enact tariffs, slash government spending, and deport major portions of the workforce may stunt growth, drive up inflation, and chill investment. All that would make it far more difficult for companies to raise the capital and purchase the components needed to build anything in the US, whether that means wind turbines, solar farms, and seawalls or buildings, bridges, and data centers. 

President-elect Donald Trump speaks at an election night event in West Palm Beach, Florida. (Win McNamee/Getty Images)

His clumsy handling of the economy and international affairs may also help China extend its dominance in producing and selling the components that are crucial to the energy transition, including batteries, EVs, and solar panels, to customers around the globe.

If one job of a commentator is to find some perspective in difficult moments, I admit I’m mostly failing in this one.

The best I can do is to say that there will be some meaningful lines of defense. For now, at least, state leaders and legislatures can continue to pass and implement stronger climate rules. Other nations could step up their efforts to cut emissions and assert themselves as global leaders on climate. 

Private industry will likely continue to invest in and build businesses in climate tech and clean energy, since solar, wind, batteries, and EVs have proved themselves as competitive industries. And technological progress can occur no matter who is sitting in the Oval Office, since researchers continue striving to develop cleaner, cheaper ways of producing our energy, food, and goods.

By any measure, the job of addressing climate change is now much harder. Nothing, however, has changed about the stakes. 

Our world doesn’t end if we surpass 2 °C, 2.5 °C, or even 3 °C, but it will steadily become a more dangerous and erratic place. Every tenth of a degree remains worth fighting for—whether two, four, or a dozen years from now—because every bit of warming that nations pull together to prevent eases future suffering somewhere.

So as the shock wears off and the despair begins to lift, the core task before us remains the same: to push for progress, whenever, wherever, and however we can. 

Why artificial intelligence and clean energy need each other

We are in the early stages of a geopolitical competition for the future of artificial intelligence. The winners will dominate the global economy in the 21st century.

But what’s been too often left out of the conversation is that AI’s huge demand for concentrated and consistent amounts of power represents a chance to scale the next generation of clean energy technologies. If we ignore this opportunity, the United States will find itself disadvantaged in the race for the future of both AI and energy production, ceding global economic leadership to China.

To win the race, the US is going to need access to a lot more electric power to serve data centers. AI data centers could add the equivalent of three New York Cities’ worth of load to the grid by 2026, and they could more than double their share of US electricity consumption—to 9%—by the end of the decade. Artificial intelligence will thus contribute to a spike in power demand that the US hasn’t seen in decades; according to one recent estimate, that demand—previously flat—is growing by around 2.5% per year, with data centers driving as much as 66% of the increase.
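For a rough sense of what those estimates imply together, here is a minimal back-of-the-envelope sketch in Python. The ~2.5% annual growth rate and the 9% end-of-decade share come from the figures cited above; the 4,000 TWh baseline for current US consumption and the ~4% data-center share today are our own illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope projection built from the estimates cited above.
# The 4,000 TWh baseline and the ~4% starting share are assumptions
# for illustration; the growth rate and 9% share come from the text.

us_total_twh = 4_000          # assumed current annual US demand, TWh
growth = 0.025                # ~2.5% per year, per the cited estimate
years = 6                     # roughly through the end of the decade

total_2030 = us_total_twh * (1 + growth) ** years
dc_now = 0.04 * us_total_twh  # assume data centers near ~4% of load today
dc_2030 = 0.09 * total_2030   # more than doubling, to 9%, by 2030

print(f"Total US demand by 2030: {total_2030:,.0f} TWh")
print(f"Data-center demand:      {dc_now:,.0f} -> {dc_2030:,.0f} TWh")
print(f"New data-center load:    {dc_2030 - dc_now:,.0f} TWh")
```

Under these assumptions, data centers alone would add roughly 250 TWh of new annual demand in about six years, which is the scale of load growth the grid has not had to absorb in decades.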

Energy-hungry advanced AI chips are behind this growth. Three watt-hours of electricity are required for a ChatGPT query, compared with just 0.3 watt-hours for a simple Google search. These computational requirements make AI data centers uniquely power dense, requiring more power per server rack and orders of magnitude more power per square foot than traditional facilities. Sam Altman, CEO of OpenAI, reportedly pitched the White House on the need for AI data centers requiring five gigawatts of capacity—enough to power over 3 million homes. And AI data centers require steady and reliable power 24 hours a day, seven days a week; they are up and running 99.999% of the year.
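As a quick sanity check on those figures, here is a short Python sketch. The per-query energies and the five-gigawatt capacity are the numbers quoted above; the ~1.2 kW average household draw (roughly 10,500 kWh per year) is our own assumption, which is why the result lands somewhat above the 3 million homes cited.

```python
# Per-query energy gap and a gigawatts-to-homes conversion, using the
# figures quoted above; the average-household draw is our assumption.

chatgpt_wh = 3.0    # Wh per ChatGPT query, per the figure cited
google_wh = 0.3     # Wh per conventional Google search
print(f"A ChatGPT query uses {chatgpt_wh / google_wh:.0f}x "
      f"the energy of a simple search")

campus_gw = 5.0     # the data-center capacity reportedly pitched
avg_home_kw = 1.2   # assumed average US household draw (~10,500 kWh/yr)
homes = campus_gw * 1e6 / avg_home_kw   # GW -> kW, divided by kW per home
print(f"{campus_gw:.0f} GW covers about {homes / 1e6:.1f} million "
      f"average homes")                 # comfortably over 3 million
```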

The demands that these gigawatt-scale users are placing on the electricity grid are already accelerating far faster than we can expand the physical and political structures that support the development of clean electricity. There are over 1,500 gigawatts of capacity waiting to connect to the grid, and the time to build transmission lines to move that power now stretches into a decade. One illustration of the challenges involved in integrating new power sources: The biggest factor delaying Constellation’s recently announced restart of the Three Mile Island nuclear plant isn’t the facility itself but the time required to connect it to the grid.

The reflexive response to the challenge of scaling clean-electricity supply has been to pose a false choice: cede the United States’ advantage in AI or cede our commitment to clean energy. This logic argues that the only way to meet the growing power demands of the computing economy will involve the expansion of legacy energy resources like natural gas and the preservation of coal-fired power plants.

The dire ecological implications of relying on more fossil fuels are clear. But the economic and security implications are just as serious. Further investments in fossil fuels threaten our national competitiveness as other countries leap ahead in the clean technologies that present the next generation of economic opportunity—markets measured in the trillions.

The reality is that the unprecedented scale and density of power needed for AI require a novel set of generation solutions, able to deliver reliable power 24-7 in ever-increasing amounts. While advocates for legacy fuels have historically pointed to the variability of renewables, power sources that require massive, distributed, and disruptable fuel supplies like natural gas are also not the answer. In Texas, natural-gas plants accounted for 70% of outages after a severe winter storm in late 2022. As climate change intensifies, weather-related disruptions are only likely to increase.

Rather than seeing a choice between AI competitiveness and climate, we see AI’s urgent demand for power density as an opportunity to kick-start a slew of new technologies, taking advantage of new buyers and new market structures—positioning the US to not only seize the AI future but create the markets for the energy-dense technologies that will be needed to power it.

Data centers’ incessant demand for computing power is best matched to a set of novel sources of clean, reliable power that are currently undergoing rapid innovation. Those include advanced nuclear fission that can be rapidly deployed at small scale and next-generation geothermal power that can be deployed anywhere, anytime. One day, the arsenal could include nuclear fusion as a source of nearly limitless clean energy. These technologies can produce large amounts of energy in relatively small footprints, matching AI’s demand for concentrated power. They have the potential to provide stable, reliable baseload power matched to AI data centers’ 24-7 operations. While some of these technologies (like fusion) remain in development, others (like advanced fission and geothermal energy) are ready to deploy today.

AI’s power density requirements similarly necessitate a new set of electricity infrastructure enhancements—like advanced conductors for transmission lines that can move up to 10 times as much power through much smaller areas, cooling infrastructure that can address the heat of vast quantities of energy-hungry chips humming alongside one another, and next-generation transformers that enable the efficient use of higher-voltage power. These technologies offer significant economic benefits to AI data centers in the form of increased access to power and reduced latency, and they will enable the rapid expansion of our 20th-century electricity grid to serve 21st-century needs. 

Moreover, the convergence of AI and energy technologies will allow for faster development and scaling of both sectors. Across the clean-energy sector, AI serves as a method of invention, accelerating the pace of research and development for next-generation materials design. It is also a tool for manufacturing, reducing capital intensity and increasing the pace of scaling. Already, AI is helping us overcome barriers in next-generation power technologies. For instance, Princeton researchers are using it to predict and avoid plasma instabilities that have long been obstacles to sustained fusion reactions. In the geothermal and mining context, AI is accelerating the pace and driving down the cost of commercial-grade resource discovery and development. Other firms use AI to predict and optimize performance of power plants in the field, greatly reducing the capital intensity of projects.

Historically, deployment of novel clean energy technologies has had to rely on utilities, which are notoriously slow to adopt innovations and invest in first-of-a-kind commercial projects. Now, however, AI has brought in a new source of capital for power-generation technologies: large tech companies that are willing to pay a premium for 24-7 clean power and are eager to move quickly.

These “new buyers” can build additional clean capacity in their own backyards. Or they can deploy innovative market structures to encourage utilities to work in new ways to scale novel technologies. Already, we are seeing examples, such as the agreement between Google, the geothermal developer Fervo, and the Nevada utility NV Energy to secure clean, reliable power at a premium for use by data centers. The emergence of these price-insensitive but time-sensitive buyers can accelerate the deployment of clean energy technologies.

The geopolitical implications of this nexus between AI and climate are clear: The socioeconomic fruits of innovation will flow to the countries that win both the AI and the climate race. 

The country that is able to scale up access to reliable baseload power will attract AI infrastructure in the long run—and will benefit from access to the markets that AI will generate. And the country that makes these investments first will be ahead, and that lead will compound over time as technical progress and economic productivity reinforce each other.

Today, the clean-energy scoreboard tilts toward China. The country has commissioned 37 nuclear power plants over the last decade, while the United States has added two. It is outspending the US two to one on nuclear fusion, with crews working essentially around the clock on commercializing the technology. Given that the competition for AI supremacy boils down to scaling power density, building a new fleet of natural-gas plants while our primary competitor builds an arsenal of the most power-dense energy resources available is like bringing a knife to a gunfight.

The United States and the US-based technology companies at the forefront of the AI economy have the responsibility and opportunity to change this by leveraging AI’s power demand to scale the next generation of clean energy technologies. The question is, will they?

Michael Kearney is a general partner at Engine Ventures, a firm that invests in startups commercializing breakthrough science and engineering. Lisa Hansmann is a principal at Engine Ventures and previously served as special assistant to the president in the Biden administration, working on economic policy and implementation.

Why we need an AI safety hotline

16 September 2024 at 11:00

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of GPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

Testing AI models before they’re released is a common approach to mitigating certain risks, and it may help regulators weigh up the costs and benefits—and potentially block models from being released if they’re deemed too dangerous. But the accuracy and comprehensiveness of these tests leave a lot to be desired. AI models may “sandbag” the evaluation—hiding some of their capabilities to avoid raising any safety concerns. The evaluations also suffer from limited scope: current tests are unlikely to reliably uncover the full set of risks that warrant further investigation. There’s also the question of who conducts the evaluations and how their biases may influence testing efforts. For those reasons, evaluations need to be used alongside other governance tools.

One such tool could be internal reporting mechanisms within the labs. Ideally, employees should feel empowered to regularly and fully share their AI safety concerns with their colleagues, and they should feel those colleagues can then be counted on to act on the concerns. However, there’s growing evidence that, far from being promoted, open criticism is becoming rarer in AI labs. Just three months ago, 13 former and current workers from OpenAI and other labs penned an open letter expressing fear of retaliation if they attempt to disclose questionable corporate behaviors that fall short of breaking the law. 

How to sound the alarm

In theory, external whistleblower protections could play a valuable role in the detection of AI risks. These could protect employees who are fired for disclosing risky corporate behavior, and they could help make up for inadequate internal reporting mechanisms. Nearly every state has a public policy exception to at-will employment termination—in other words, terminated employees can seek recourse against their employers if they were retaliated against for calling out unsafe or illegal corporate practices. However, in practice this exception offers employees few assurances. Judges tend to favor employers in whistleblower cases. The likelihood of AI labs’ surviving such suits seems particularly high given that society has yet to reach any sort of consensus as to what qualifies as unsafe AI development and deployment.

These and other shortcomings explain why the aforementioned 13 AI workers, including ex-OpenAI employee William Saunders, called for a novel “right to warn.” Companies would have to offer employees an anonymous process for disclosing risk-related concerns to the lab’s board, a regulatory authority, and an independent third body made up of subject-matter experts. The ins and outs of this process have yet to be figured out, but it would presumably be a formal, bureaucratic mechanism. The board, regulator, and third party would all need to make a record of the disclosure. It’s likely that each body would then initiate some sort of investigation. Subsequent meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, what AI workers really want is something different. 

When Saunders went on the Big Technology Podcast to outline his ideal process for sharing safety concerns, his focus was not on formal avenues for reporting established risks. Instead, he indicated a desire for some intermediate, informal step. He wants a chance to receive neutral, expert feedback on whether a safety concern is substantial enough to go through a “high stakes” process such as a right-to-warn system. Current government regulators, as Saunders says, could not serve that role. 

For one thing, they likely lack the expertise to help an AI worker think through safety concerns. What’s more, few workers will pick up the phone if they know it’s a government official on the other end—that sort of call may be “very intimidating,” as Saunders himself said on the podcast. Instead, he envisages being able to call an expert to discuss his concerns. In an ideal scenario, he’d be told that the risk in question does not seem that severe or likely to materialize, freeing him up to return to whatever he was doing with more peace of mind. 

Lowering the stakes

What Saunders is asking for in this podcast isn’t a right to warn, then, as that suggests the employee is already convinced there’s unsafe or illegal activity afoot. What he’s really calling for is a gut check—an opportunity to verify whether a suspicion of unsafe or illegal behavior seems warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing up these gut checks could be a much more informal one. For example, AI PhD students, retired AI industry workers, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees via a confidential and anonymous phone conversation. Hotline volunteers would have familiarity with leading safety practices, as well as extensive knowledge of what options, such as right-to-warn mechanisms, may be available to the employee. 

As Saunders indicated, few employees will likely want to go from 0 to 100 with their safety concerns—straight from colleagues to the board or even a government body. They are much more likely to raise their issues if an intermediate, informal step is available.

Studying examples elsewhere

The details of how precisely an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for instance, it may need some way to escalate the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is another matter that needs thorough investigation. How to recruit and retain volunteers is another key question. Given leading experts’ broad concern about AI risk, some may be willing to participate simply out of a desire to lend a hand. Should too few folks step forward, other incentives may be necessary. The essential first step, though, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models to emulate in building out the first AI hotline. 

One place to start is with ombudspersons. Other industries have recognized the value of identifying these neutral, independent individuals as resources for evaluating the seriousness of employee concerns. Ombudspersons exist in academia, nonprofits, and the private sector. The distinguishing attribute of these individuals and their staffers is neutrality—they have no incentive to favor one side or the other, and thus they’re more likely to be trusted by all. A glance at the use of ombudspersons in the federal government shows that when they are available, issues may be raised and resolved sooner than they would be otherwise.

This concept is relatively new. The US Department of Commerce established the first federal ombudsman in 1971. The office was tasked with helping citizens resolve disputes with the agency and with investigating agency actions. Other agencies, including the Social Security Administration and the Internal Revenue Service, soon followed suit. A retrospective review of these early efforts concluded that effective ombudspersons can meaningfully improve citizen-government relations. On the whole, ombudspersons were associated with an uptick in voluntary compliance with regulations and cooperation with the government.

An AI ombudsperson or safety hotline would surely have different tasks and staff from an ombudsperson in a federal agency. Nevertheless, the general concept is worthy of study by those advocating safeguards in the AI industry. 

A right to warn may play a role in getting AI safety concerns aired, but we need to set up more intermediate, informal steps as well. An AI safety hotline is low-hanging regulatory fruit. A pilot made up of volunteers could be organized in relatively short order and provide an immediate outlet for those, like Saunders, who merely want a sounding board.

Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.
