
The Download: bird flu concerns, and tracking AI’s impact on elections

19 September 2024 at 14:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why virologists are getting increasingly nervous about bird flu

Bird flu has been spreading in dairy cows in the US—and the scale is likely to be far worse than it looks. In addition, 14 human cases have been reported in the US since March. Both are worrying developments, say virologists, who fear that the country’s meager response to the virus is putting the entire world at risk of another pandemic.

Infections in dairy cattle, first reported back in March, brought us a step closer to human spread. Since then, the situation has only deteriorated. The virus appears to have passed from cattle to poultry on multiple occasions, and worse, this form of bird flu that is now spreading among cattle could find its way back into migrating birds. If that’s the case, we can expect these birds to take the virus around the world.


Although the virus has mutated, it hasn’t acquired any more dangerous mutations—yet. Read the full story.

—Jessica Hamzelou

AI-generated content doesn’t seem to have swayed recent European elections 

The news: AI-generated falsehoods and deepfakes seem to have had virtually no effect on election results in Europe this year, according to new research. 

The bigger picture: Since the beginning of the generative-AI boom, there has been widespread worry that AI tools could boost bad actors’ ability to spread fake content with the potential to interfere with elections or even sway the results. Those fears now seem unwarranted. The Alan Turing Institute identified just 16 cases of AI-enabled falsehoods or deepfakes that went viral during the UK general election and only 11 cases in the EU and French elections combined, none of which appeared to definitively sway the results. 

Why it matters: These findings are in line with recent warnings from experts that the focus on AI’s role in elections is distracting us from deeper and longer-lasting threats to democracy. Read the full story.

—Melissa Heikkilä

How AI can help spot wildfires

Anything from stray fireworks to lightning strikes can start a wildfire. While it’s natural for many ecosystems to see some level of fire activity, the hotter, drier conditions brought on by climate change are fueling longer fire seasons with larger fires that burn more land.

This means that the need to spot wildfires earlier is becoming ever more crucial. Some groups are turning to technology to help, including a new effort from Google to fund an AI-powered wildfire-spotting satellite constellation.

Casey Crownhart, our senior climate reporter, has dug into how this project fits into the world of fire-detection tech and some of the challenges that lie ahead. Read what she found.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

MIT Technology Review Narrated: The entrepreneur dreaming of a factory of unlimited organs

At any given time, the US organ transplant waiting list is about 100,000 people long. Martine Rothblatt sees a day when an unlimited supply of transplantable organs—including 3D-printed ones—will be readily available, saving countless lives.

This is our latest story to be turned into an MIT Technology Review Narrated podcast. In partnership with News Over Audio, we’ll be making a selection of our stories available, each one read by a professional voice actor. You’ll be able to listen to them on the go or download them to listen to offline.

We’re publishing a new story each week on Spotify and Apple Podcasts, including some taken from our most recent print magazine. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Israel’s exploding pagers contained batteries laced with explosives 
The devices started shipping to Lebanon in the summer of 2022. (NYT $)
+ Walkie-talkies detonated across the city yesterday. (Wired $)
+ Securing electronic supply chains against threats is virtually impossible. (WP $)
+ That doesn’t mean you should fret about your smartphone, though. (The Atlantic $)

2 The European Union has a warning for Apple
Open up your operating systems or face the consequences. (Bloomberg $)

3 The US and allies have thwarted a massive Chinese spy network
The botnet managed to infiltrate sensitive organizations across the world. (WP $)
+ Elsewhere, police have broken into an encrypted criminal app. (404 Media)

4 X temporarily started working in Brazil again
Brazilian officials suspect it was a deliberate technical maneuver. (The Guardian)
+ X, which is banned in Brazil, insists the return was inadvertent. (FT $)

5 This startup wants to flog Greenland’s water to the world
But selling melting glaciers is… not a great look. (Wired $)
+ The radical intervention that might save the “doomsday” glacier. (MIT Technology Review)

6 Spare a thought for laid-off tech workers
It’s really tricky to pin down a new job these days. (WSJ $)
+ People are worried that AI will take everyone’s jobs. We’ve been here before. (MIT Technology Review)

7 How Reddit users are raising awareness of an unusual condition
Retrograde cricopharyngeus dysfunction, aka no-burp syndrome, is no joke. (Undark Magazine)

8 Netflix is combing Southeast Asia for its next viral hit
To replicate the success of Squid Game and One Piece. (Rest of World)

9 A new wave of engagement-bait videos is deliberately confusing
The more confused you are, the more likely you are to keep watching. (The Atlantic $)
+ How to fix the internet. (MIT Technology Review)

10 Mark Zuckerberg has invested in some serious bling 💎
His new watch costs as much as a Cybertruck. (Insider $)

Quote of the day

“It is natural that people will turn to this new technology to satisfy their fantasies.”

—Ana Ornelas, an erotic author and educator, tells Wired why current discussions around AI regulation exclude sex industry professionals’ perspectives.

The big story

Bringing the lofty ideas of pure math down to earth

April 2023

—Pradeep Niroula

Mathematics has long been presented as a sanctuary from confusion and doubt, a place to go in search of answers. Perhaps part of the mystique comes from the fact that biographies of mathematicians often paint them as otherworldly savants.

As a graduate student in physics, I have seen the work that goes into conducting delicate experiments, but the daily grind of mathematical discovery is a ritual altogether foreign to me. And this feeling is only reinforced by popular books on math, which often take the tone of a pastor dispensing sermons to the faithful.  

Luckily, there are ways to bring it back down to earth. Popular math books seek a fresher take on these old ideas, be it through baking recipes or hot-button political issues. My verdict: Why not? It’s worth a shot. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Take a look at the 50 New York Times recipes that have resonated with its readers to become instant classics. ($)
+ Do you see green or blue? This test is pretty interesting.
+ Why The Matrix may be closer to fact than you might have previously believed.
+ Here’s what the pop bangers of the 17th century sounded like.

How AI can help spot wildfires

19 September 2024 at 12:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

In February 2024, a broken utility pole brought down power lines near the small town of Stinnett, Texas. In the following weeks, the fire reportedly sparked by that equipment grew to burn over 1 million acres, the biggest wildfire in the state’s history.

Anything from stray fireworks to lightning strikes can start a wildfire. While it’s natural for many ecosystems to see some level of fire activity, the hotter, drier conditions brought on by climate change are fueling longer fire seasons with larger fires that burn more land.

This means that the need to spot wildfires earlier is becoming ever more crucial, and some groups are turning to technology to help. My colleague James Temple just wrote about a new effort from Google to fund an AI-powered wildfire-spotting satellite constellation. Read his full story for the details, and in the meantime, let’s dig into how this project fits into the world of fire-detection tech and some of the challenges that lie ahead.

The earliest moments in the progression of a fire can be crucial. Today, many fires are reported to authorities by bystanders who happen to spot them and call emergency services. Technologies could help officials by detecting fires earlier, well before they grow into monster blazes.

One such effort is called FireSat. It’s a project from the Earth Fire Alliance, a collaboration between Google’s nonprofit and research arms, the Environmental Defense Fund, Muon Space (a satellite company), and others. This planned system of 52 satellites should be able to spot fires as small as five by five meters (about 16 feet by 16 feet), and images will refresh every 20 minutes.

These wouldn’t be the first satellites to help with wildfire detection, but many existing efforts can either deliver high-resolution images or refresh often—not both, as the new project is aiming to do.

A startup based in Germany, called OroraTech, is also working to launch new satellites that specialize in wildfire detection. The small satellites (around the size of a shoebox) will orbit close to Earth and use sensors that detect heat. The company’s long-term goal is to launch 100 of the satellites into space and deliver images every 30 minutes.

Other companies are staying on Earth, deploying camera stations that can help officials identify, confirm, and monitor fires. Pano AI is using high-tech camera stations to try to spot fires earlier. The company mounts cameras on high vantage points, like the tops of mountains, and spins them around to get a full 360-degree view of the surrounding area. It says the tech can spot wildfire activity within a 15-mile radius. The cameras pair up with algorithms to automatically send an alert to human analysts when a potential fire is detected.

Having more tools to help detect wildfires is great. But whenever I hear about such efforts, I’m struck by a couple of major challenges for this field. 

First, prevention of any sort can often be undervalued, since a problem that never happens feels much less urgent than one that needs to be solved.

Pano AI, which has a few camera stations deployed, points to examples in which its technology detected fires earlier than bystander reports. In one case in Oregon, the company’s system issued a warning 14 minutes before the first emergency call came in, according to a report given to TechCrunch.

Intuitively, it makes sense that catching a blaze early is a good thing. And modeling can show what might have happened if a fire hadn’t been caught early. But it’s really difficult to determine the impact of something that didn’t happen. These systems will need to be deployed for a long time, and researchers will need to undertake large-scale, systematic studies, before we’ll be able to say for sure how effective they are at preventing damaging fires. 

The question of cost is also a tricky piece of this for me to wrap my head around. It’s in the public interest to prevent wildfires that will end up producing greenhouse-gas emissions, not to mention endangering human lives. But who’s going to pay for that?

Each of Pano AI’s stations costs something like $50,000 per year. The company’s customers include utilities, which have a vested interest in making sure their equipment doesn’t start fires and in watching out for blazes that could damage their infrastructure.

The electric utility Xcel, whose equipment allegedly sparked that fire in Texas earlier this year, is facing lawsuits over its role. And utilities can face huge costs after fires. Last year’s deadly blazes in Hawaii caused billions of dollars in damages, and Hawaiian Electric recently agreed to pay roughly $2 billion for its role in those fires. 

The proposed satellite system from the Earth Fire Alliance will cost more than $400 million all told. The group has secured about two-thirds of what it needs for the first phase of the program, which includes the first four launches, but it’ll need to raise a lot more money to make its AI-powered wildfire-detecting satellite constellation a reality.


Now read the rest of The Spark

Related reading

Read more about how an AI-powered satellite constellation can help spot wildfires faster here.

Other companies are aiming to use balloons that will surf on wind currents to track fires. Urban Sky is deploying balloons in Colorado this year.

Satellite images can also be used to tally up the damage and emissions caused by fires. Earlier this year I wrote about last year’s Canadian wildfires, which produced more emissions in 2023 than most countries’ fossil-fuel burning.

Another thing

We’re just two weeks away from EmTech MIT, our signature event on emerging technologies. I’ll be on stage speaking with tech leaders on topics like net-zero buildings and emissions from Big Tech. We’ll also be revealing our 2024 list of Climate Tech Companies to Watch. 

For a preview of the event, check out this conversation I had with MIT Technology Review executive editor Amy Nordrum and editor in chief Mat Honan. You can register to join us on September 30 and October 1 at the MIT campus or online—hope to see you there!

Keeping up with climate  

The US Postal Service is finally getting its long-awaited electric vehicles. They’re funny-looking, and the drivers seem to love them already. (Associated Press)

→ Check out this timeline I made in December 2022 of the multi-year saga it took for the agency to go all in on EVs. (MIT Technology Review)

Microsoft is billing itself as a leader in AI for climate innovation. At the same time, the tech giant is selling its technology to oil and gas companies. Check out this fascinating investigation from my former colleague Karen Hao. (The Atlantic)

Imagine solar panels that aren’t affected by a cloudy day … because they’re in space. Space-based solar power sounds like a dream, but advances in solar tech and falling launch costs have proponents arguing that it’s a dream closer than ever to becoming reality. Many are still skeptical. (Cipher)

Norway is the first country with more EVs on the road than gas-powered cars. Diesel vehicles are still the most common, though. (Washington Post)

The emissions cost of delivering Amazon packages keeps ticking up. A new report from Stand.earth estimates that delivery emissions have increased by 75% since just 2019. (Wired)

BYD has been dominant in China’s EV market. The company is working to expand, but to compete in the UK and Europe, it will need to win over wary drivers. (Bloomberg)

Some companies want to make air-conditioning systems in big buildings smarter to help cut emissions. Grid-interactive efficient buildings can cut energy costs and demand at peak hours. (Canary Media)

Why virologists are getting increasingly nervous about bird flu

19 September 2024 at 10:53

Bird flu has been spreading in dairy cows in the US—and the scale of the spread is likely to be far worse than it looks. In addition, 14 human cases have been reported in the US since March. Both are worrying developments, say virologists, who fear that the country’s meager response to the virus is putting the entire world at risk of another pandemic.

The form of bird flu that has been spreading over the last few years has been responsible for the deaths of millions of birds and tens of thousands of marine and land mammals. But infections in dairy cattle, first reported back in March, brought us a step closer to human spread. Since then, the situation has only deteriorated. The virus appears to have passed from cattle to poultry on multiple occasions. “If that virus sustains in dairy cattle, they will have a problem in their poultry forever,” says Thomas Peacock, a virologist at the Pirbright Institute in Woking, UK.

Worse, this form of bird flu that is now spreading among cattle could find its way back into migrating birds. It might have happened already. If that’s the case, we can expect these birds to take the virus around the world.

“It’s really troubling that we’re not doing enough right now,” says Seema Lakdawala, a virologist at the Emory University School of Medicine in Atlanta, Georgia. “I am normally very moderate in terms of my pandemic-scaredness, but the introduction of this virus into cattle is really troubling.”

Not just a flu for birds

Bird flu is so named because it spreads stably in birds. The type of H5N1 that has been decimating bird populations for the last few years was first discovered in the late 1990s. But in 2020, H5N1 began to circulate in Europe “in a big way,” says Peacock. The virus spread globally, via migrating ducks, geese, and other waterfowl. In a process that took months and years, the virus made it to the Americas, Africa, Asia, and eventually even Antarctica, where it was detected earlier this year.

And while many ducks and geese seem to be able to survive being infected with the virus, other bird species are much more vulnerable. H5N1 is especially deadly for chickens, for example—their heads swell, they struggle to breathe, and they experience extreme diarrhea. Seabirds like puffins and guillemots also seem to be especially susceptible to the virus, although it’s not clear why. Over the last few years, we’ve seen the worst ever outbreak of bird flu in birds. Millions of farmed birds have died, and an unknown number of wild birds—in the tens of thousands at the very least—have also succumbed. “We have no idea how many just fell into the sea and were never seen again,” says Peacock.

Alarmingly, animals that hunt and scavenge affected birds have also become infected with the virus. The list of affected mammals includes bears, foxes, skunks, otters, dolphins, whales, sea lions, and many more. Some of these animals appear to be able to pass the virus to other members of their species. In 2022, an outbreak of H5N1 in sea lions that started in Chile spread to Argentina and eventually to Uruguay and Brazil. At least 30,000 died. The sea lions may also have passed the virus to nearby elephant seals in Argentina, around 17,000 of which have succumbed to the virus.

This is bad news—not just for the affected animals, but for people, too. It’s not just a bird flu anymore. And when a virus can spread in other mammals, it’s a step closer to being able to spread in humans. That is even more likely when the virus spreads in an animal that people tend to spend a lot of time interacting with.

This is partly why the virus’s spread in dairy cattle is so troubling. The form of the virus that is spreading in cows is slightly different from the one that had been circulating in migrating birds, says Lakdawala. The mutations in this virus have likely enabled it to spread more easily among the animals.

Evidence suggests that the virus is spreading through the use of shared milking machinery within cattle herds. Infected milk can contaminate the equipment, allowing the virus to infect the udder of another cow. The virus is also spreading between herds, possibly by hitching a ride on people that work on multiple farms, or via other animals, or potentially via airborne droplets.

Milk from infected cows can look thickened and yogurt-like, and farmers tend to pour it down drains. This ends up irrigating farms, says Lakdawala. “Unless the virus is inactivated, it just remains infectious in the environment,” she says. Other animals could be exposed to the virus this way.

Hidden infections

So far, 14 states have reported a total of 208 infected cattle herds. Some states have reported only one or two cases among their cattle. But this is extremely unlikely to represent the full picture, given how rapidly the virus is spreading among herds in states that are doing more testing, says Peacock. In Colorado, where state-licensed dairy farms that sell pasteurized milk are required to submit milk samples for weekly testing, 64 herds have been reported to be affected. Neighboring Wyoming, which does not have the same requirements, has reported only one affected herd.

We don’t have a good idea of how many people have been infected either, says Lakdawala. The official count from the CDC is 14 people since April 2024, but testing is not routine, and because symptoms are currently fairly mild in people, we’re likely to be missing a lot of cases.

“It’s very frustrating, because there are just huge gaps in the data that’s coming out,” says Peacock. “I don’t think it’s unfair to say that a lot of outside observers don’t think this outbreak is being taken particularly seriously.”

And the virus is already spreading from cows back into wild birds and poultry, says Lakdawala: “There is definitely a concern that the virus is going to [become more widespread] in birds and cattle … but also other animals that ruminate, like goats.”

It may already be too late to rid America’s cattle herds of the bird flu virus. If it continues to circulate, it could become stable in the population. This is what has happened with flu in pigs around the world. That could also spell disaster—not only would the virus represent a constant risk to humans and other animals that come into contact with the cows, but it could also evolve over time. We can’t predict how this evolution might take shape, but there’s a chance the result could be a form of the virus that is better at spreading in people or causing fatal infections.

So far, it is clear that the virus has mutated but hasn’t yet acquired any of these more dangerous mutations, says Michael Tisza, a bioinformatics scientist at Baylor College of Medicine in Houston. That being said, Tisza and his colleagues have been looking for the virus in wastewater from 10 cities in Texas—and they have found H5N1 in all of them.

Tisza and his colleagues don’t know where this virus is coming from—whether it’s coming from birds, milk, or infected people, for example. But the team didn’t find any signal of the virus in wastewater during 2022 or 2023, when there were outbreaks in migratory birds and poultry. “In 2024, it’s been a different story,” says Tisza. “We’ve seen it a lot.”

Together, the evidence that the virus is evolving and spreading among mammals, and specifically cattle, has put virologists on high alert. “This virus is not causing a human pandemic right now, which is great,” says Tisza. “But it is a virus of pandemic potential.”

AI-generated content doesn’t seem to have swayed recent European elections 

19 September 2024 at 01:01

AI-generated falsehoods and deepfakes seem to have had no effect on election results in the UK, France, and the European Parliament this year, according to new research. 

Since the beginning of the generative-AI boom, there has been widespread fear that AI tools could boost bad actors’ ability to spread fake content with the potential to interfere with elections or even sway the results. Such worries were particularly heightened this year, when billions of people were expected to vote in over 70 countries. 

Those fears seem to have been unwarranted, says Sam Stockwell, the researcher at the Alan Turing Institute who conducted the study. He focused on three elections over a four-month period from May to August 2024, collecting data on public reports and news articles on AI misuse. Stockwell identified 16 cases of AI-enabled falsehoods or deepfakes that went viral during the UK general election and only 11 cases in the EU and French elections combined, none of which appeared to definitively sway the results. The fake AI content was created by both domestic actors and groups linked to hostile countries such as Russia. 

These findings are in line with recent warnings from experts that the focus on election interference is distracting us from deeper and longer-lasting threats to democracy.   

AI-generated content seems to have been ineffective as a disinformation tool in most European elections this year so far. This, Stockwell says, is because most of the people who were exposed to the disinformation already believed its underlying message (for example, that levels of immigration to their country are too high). Stockwell’s analysis showed that people who were actively engaging with these deepfake messages by resharing and amplifying them had some affiliation or previously expressed views that aligned with the content. So the material was more likely to strengthen preexisting views than to influence undecided voters. 

Tried-and-tested election interference tactics, such as flooding comment sections with bots and exploiting influencers to spread falsehoods, remained far more effective. Bad actors mostly used generative AI to rewrite news articles with their own spin or to create more online content for disinformation purposes. 

“AI is not really providing much of an advantage for now, as existing, simpler methods of creating false or misleading information continue to be prevalent,” says Felix Simon, a researcher at the Reuters Institute for the Study of Journalism, who was not involved in the research. 

However, it’s hard to draw firm conclusions about AI’s impact upon elections at this stage, says Samuel Woolley, a disinformation expert at the University of Pittsburgh. That’s in part because we don’t have enough data.

“There are less obvious, less trackable, downstream impacts related to uses of these tools that alter civic engagement,” he adds.

Stockwell agrees: Early evidence from these elections suggests that AI-generated content could be more effective for harassing politicians and sowing confusion than changing people’s opinions on a large scale. 

Politicians in the UK, such as former prime minister Rishi Sunak, were targeted by AI deepfakes that, for example, showed them promoting scams or admitting to financial corruption. Female candidates were also targeted with nonconsensual sexual deepfake content, intended to disparage and intimidate them. 

“There is, of course, a risk that in the long run, the more that political candidates are on the receiving end of online harassment, death threats, deepfake pornographic smears—that can have a real chilling effect on their willingness to, say, participate in future elections, but also obviously harm their well-being,” says Stockwell. 

Perhaps more worrying, Stockwell says, his research indicates that people are increasingly unable to discern the difference between authentic and AI-generated content in the election context. Politicians are also taking advantage of that. For example, political candidates in the European Parliament elections in France have shared AI-generated content amplifying anti-immigration narratives without disclosing that it had been made with AI. 

“This covert engagement, combined with a lack of transparency, presents in my view a potentially greater risk to the integrity of political processes than the use of AI by the general population or so-called ‘bad actors,’” says Simon. 


The Download: Congress’s AI bills, and Snap’s new AR spectacles

18 September 2024 at 14:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

There are more than 120 AI bills in Congress right now

More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress. This flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. 

Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at them all can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.

That’s why, with help from the Brennan Center for Justice, we’ve created a tracker with all the AI bills circulating in various committees in Congress right now, to see if there’s anything we can learn from this legislative smorgasbord. Read the full story.

—Scott J Mulligan

Here’s what I made of Snap’s new augmented-reality Spectacles

Snap has announced a new version of its Spectacles: AR glasses that could finally deliver on the promises that devices like Magic Leap, or HoloLens, or even Google Glass, made many years ago.

Our editor-in-chief Mat Honan got to try them out a couple of weeks ago. He found they packed a pretty impressive punch, layering visual information and applications directly on their see-through lenses and making objects appear as if they are in the real world—if you don’t mind looking a little goofy, that is. Read Mat’s full thoughts here.

Google is funding an AI-powered satellite constellation that will spot wildfires faster

What’s happening: Early next year, Google and its partners plan to launch the first in a series of satellites that together would provide close-up, frequently refreshed images of wildfires around the world, offering data that could help firefighters battle blazes more rapidly, effectively, and safely.

Why it matters: The images and analysis will be provided free to fire agencies around the world, helping to improve understanding of where fires are, where they’re moving, and how hot they’re burning. The information could help agencies stamp out small fires before they turn into raging infernos, place limited firefighting resources where they’ll do the most good, and evacuate people along the safest paths. Read the full story.

—James Temple

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 California has passed three election deepfake laws
But only one will take effect in time for the presidential election in November. (NYT $)
+ The bills also protect actors from being impersonated by AI without their consent. (WP $)

2 How did thousands of Hezbollah pagers explode simultaneously?
The devices were probably intercepted by hackers during shipment. (WSJ $)
+ Here’s everything we know about the attack so far. (Vox)
+ Small lithium batteries alone don’t tend to cause this much damage. (404 Media)
+ Exploding comms devices are nothing new. (FT $)

3 Instagram has introduced new accounts specifically for teens
In response to increasing pressure over Meta’s policies for protecting minors. (BBC)
+ Parents will be given greater control over their children’s activities. (The Guardian)
+ Here’s how to set up the new restricted accounts. (WP $)

4 Google has won its bid to overturn a €1.5 billion fine from the EU
But the court said it stands by the majority of the previous findings. (CNBC)
+ The ruling can still be appealed to the Court of Justice. (Bloomberg $)
+ Meanwhile, Meta’s antitrust woes are escalating. (FT $)

5 SpaceX has been accused of breaking launch rules 
And the US Federal Aviation Administration wants to slap it with a hefty fine. (WP $)

6 Electric cars now outnumber petrol cars in Norway
It’s particularly impressive given the country’s history as an oil producer. (The Guardian)
+ Why full EVs, not hybrids, are the future. (Economist $)
+ Three frequently asked questions about EVs, answered. (MIT Technology Review)

7 Our understanding of the universe is still up in the air
What looked like a breakthrough in physics actually might not be at all. (New Scientist $)
+ Why is the universe so complex and beautiful? (MIT Technology Review)

8 Tech’s middle managers are having a tough time
They’re losing their jobs left, right and center. (Insider $)

9 YouTube astrology is booming in Pakistan
Amid economic and political turmoil, Pakistanis are seeking answers in the stars. (Rest of World)

10 Not everything bad is AI-generated
But what’s AI-generated is often bad. (NY Mag $)

Quote of the day

“I’d rather go back to school than work in an office again.”

—CJ Felli, a system development engineer for Amazon Web Services, is not happy about the company’s back-to-the-office directive, Quartz reports.

The big story

What’s next for the world’s fastest supercomputers

September 2023

When the Frontier supercomputer came online last year, it marked the dawn of so-called exascale computing, with machines that can execute an exaflop—or a quintillion (10¹⁸) floating-point operations a second.
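For a feel for that scale, here is a back-of-the-envelope comparison (my own arithmetic, assuming a laptop that sustains a generous 100 billion floating-point operations per second):

```python
EXAFLOP = 1e18       # operations per second for an exascale machine
LAPTOP_FLOPS = 1e11  # ~100 GFLOP/s, a generous estimate for a laptop
seconds = EXAFLOP / LAPTOP_FLOPS  # laptop time to match one exascale-second
print(seconds / 86_400, "days")   # ~115.7 days
```

In other words, a fast laptop would need nearly four months to do what an exascale machine does every second.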

Since then, scientists have geared up to make more of these blazingly fast computers: several exascale machines are due to come online in the US and Europe in 2024.

But speed itself isn’t the endgame. Researchers hope to pursue previously unanswerable questions about nature—and to design new technologies in areas from transportation to medicine. Read the full story.

—Sophia Chen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These winning images from the Ocean Photographer of the Year awards are simply stunning 🐋 ($)
+ Here’s where you’ll have the best chance of finding a fossilized shark tooth in the US.
+ Vans are back in style, as if they ever went out.
+ Potatoes are great every which way, but here’s how long to boil them for that perfect al dente bite.

There are more than 120 AI bills in Congress right now

18 September 2024 at 11:30

More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress.

They’re pretty varied. One aims to improve knowledge of AI in public schools, while another pushes for model developers to disclose what copyrighted material they use in training. Three deal with mitigating AI robocalls, while two address biological risks from AI. There’s even a bill that prohibits AI from launching a nuke on its own.

The flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. “There is a sense of urgency. There’s a commitment to addressing this issue, because it is developing so quickly and because it is so crucial to our economy,” says Heather Vaughan, director of communications for the US House of Representatives Committee on Science, Space, and Technology.

Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at all the different bills that are in motion can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.

That’s why, with help from the Brennan Center for Justice, which created a tracker with all the AI bills circulating in various committees in Congress right now, MIT Technology Review has taken a closer look to see if there’s anything we can learn from this legislative smorgasbord. 

Looking at them all together, it can seem as if Congress is trying to do everything at once when it comes to AI. To get a better sense of what may actually pass, it’s useful to look at what bills are moving along to potentially become law. 

A bill typically needs to pass a committee, or a smaller body of Congress, before it is voted on by the whole Congress. Many will fall short at this stage, while others will simply be introduced and then never spoken of again. This happens because there are so many bills presented in each session, and not all of them are given equal consideration. If the leaders of a party don’t feel a bill from one of its members can pass, they may not even try to push it forward. And then, depending on the makeup of Congress, a bill’s sponsor usually needs to get some members of the opposite party to support it for it to pass. In the current polarized US political climate, that task can be herculean. 

Congress has passed legislation on artificial intelligence before. Back in 2020, the National AI Initiative Act was passed as part of the National Defense Authorization Act; it invested resources in AI research and provided support for public education and workforce training on AI.

And some of the current bills are making their way through the system. The Senate Commerce Committee pushed through five AI-related bills at the end of July. One bill focused on authorizing the newly formed US AI Safety Institute (AISI) to create test beds and voluntary guidelines for AI models. The others focused on expanding education on AI, establishing public computing resources for AI research, and criminalizing the publication of deepfake pornography. The next step would be to put the bills on the congressional calendar to be voted on, debated, or amended.

“The US AI Safety Institute, as a place to have consortium building and easy collaboration between corporate and civil society actors, is amazing. It’s exactly what we need,” says Yacine Jernite, an AI researcher at Hugging Face.

The progress of these bills is a positive development, says Varun Krovi, executive director of the Center for AI Safety Action Fund. “We need to codify the US AI Safety Institute into law if you want to maintain our leadership on the global stage when it comes to standards development,” he says. “And we need to make sure that we pass a bill that provides computing capacity required for startups, small businesses, and academia to pursue AI.”

Following the Senate’s lead, the House Committee on Science, Space, and Technology just passed nine more bills regarding AI on September 11. Those bills focused on improving education on AI in schools, directing the National Institute of Standards and Technology (NIST) to establish guidelines for artificial-intelligence systems, and expanding the workforce of AI experts. These bills were chosen because they have a narrower focus and thus might not get bogged down in big ideological battles on AI, says Vaughan.

“It was a day that culminated from a lot of work. We’ve had a lot of time to hear from members and stakeholders. We’ve had years of hearings and fact-finding briefings on artificial intelligence,” says Representative Haley Stevens, one of the Democratic members of the House committee.

Many of the bills specify that any guidance they propose for the industry is nonbinding and that the goal is to work with companies to ensure safe development rather than curtail innovation. 

For example, one of the bills from the House, the AI Development Practices Act, directs NIST to establish “voluntary guidance for practices and guidelines relating to the development … of AI systems” and a “voluntary risk management framework.” Another bill, the AI Advancement and Reliability Act, has similar language. It supports “the development of voluntary best practices and technical standards” for evaluating AI systems. 

“Each bill contributes to advancing AI in a safe, reliable, and trustworthy manner while fostering the technology’s growth and progress through innovation and vital R&D,” committee chairman Frank Lucas, an Oklahoma Republican, said in a press release on the bills coming out of the House.

“It’s emblematic of the approach that the US has taken when it comes to tech policy. We hope that we would move on from voluntary agreements to mandating them,” says Krovi.

Avoiding mandates is a practical matter for the House committee. “Republicans don’t go in for mandates for the most part. They generally aren’t going to go for that. So we would have a hard time getting support,” says Vaughan. “We’ve heard concerns about stifling innovation, and that’s not the approach that we want to take.” When MIT Technology Review asked about the origin of these concerns, they were attributed to unidentified “third parties.” 

And fears of slowing innovation don’t just come from the Republican side. “What’s most important to me is that the United States of America is establishing aggressive rules of the road on the international stage,” says Stevens. “It’s concerning to me that actors within the Chinese Communist Party could outpace us on these technological advancements.”

But these bills come at a time when big tech companies have ramped up lobbying efforts on AI. “Industry lobbyists are in an interesting predicament—their CEOs have said that they want more AI regulation, so it’s hard for them to visibly push to kill all AI regulation,” says David Evan Harris, who teaches courses on AI ethics at the University of California, Berkeley. “On the bills that they don’t blatantly try to kill, they instead try to make them meaningless by pushing to transform the language in the bills to make compliance optional and enforcement impossible.”

“A [voluntary commitment] is something that is also only accessible to the largest companies,” says Jernite at Hugging Face, claiming that sometimes the ambiguous nature of voluntary commitments allows big companies to set definitions for themselves. “If you have a voluntary commitment—that is, ‘We’re going to develop state-of-the-art watermarking technology’—you don’t know what state-of-the-art means. It doesn’t come with any of the concrete things that make regulation work.”

“We are in a very aggressive policy conversation about how to do this right, and how this carrot and stick is actually going to work,” says Stevens, indicating that Congress may ultimately draw red lines that AI companies must not cross.

There are other interesting insights to be gleaned from looking at the bills all together. Two-thirds of the AI bills are sponsored by Democrats. This isn’t too surprising, since some House Republicans have claimed to want no AI regulations, believing that guardrails will slow down progress.

The topics of the bills (as specified by Congress) are dominated by science, tech, and communications (28%), commerce (22%), updating government operations (18%), and national security (9%). Topics that don’t receive much attention include labor and employment (2%), environmental protection (1%), and civil rights, civil liberties, and minority issues (1%).

The lack of a focus on equity and minority issues came into view during the Senate markup session at the end of July. Senator Ted Cruz, a Republican, added an amendment that explicitly prohibits any action “to ensure inclusivity and equity in the creation, design, or development of the technology.” Cruz said regulatory action might slow US progress in AI, allowing the country to fall behind China.

On the House side, there was also a hesitation to work on bills dealing with biases in AI models. “None of our bills are addressing that. That’s one of the more ideological issues that we’re not moving forward on,” says Vaughan.

The lead Democrat on the House committee, Representative Zoe Lofgren, told MIT Technology Review, “It is surprising and disappointing if any of my Republican colleagues have made that comment about bias in AI systems. We shouldn’t tolerate discrimination that’s overt and intentional any more than we should tolerate discrimination that occurs because of bias in AI systems. I’m not really sure how anyone can argue against that.”

After publication, Vaughan clarified that “[Bias] is one of the bigger, more cross-cutting issues, unlike the narrow, practical bills we considered that week. But we do care about bias as an issue,” and she expects it to be addressed within an upcoming House Task Force report.

One issue that may rise above the partisan divide is deepfakes. The Defiance Act, one of several bills addressing them, is cosponsored by a Democratic senator, Amy Klobuchar, and a Republican senator, Josh Hawley. Deepfakes have already been abused in elections; for example, someone faked Joe Biden’s voice for a robocall to tell citizens not to vote. And the technology has been weaponized to victimize people by incorporating their images into pornography without their consent. 

“I certainly think that there is more bipartisan support for action on these issues than on many others,” says Daniel Weiner, director of the Brennan Center’s Elections & Government Program. “But it remains to be seen whether that’s going to win out against some of the more traditional ideological divisions that tend to arise around these issues.” 

Although none of the current slate of bills have resulted in laws yet, the task of regulating any new technology, and specifically advanced AI systems that no one entirely understands, is difficult. The fact that Congress is making any progress at all may be surprising in itself. 

“Congress is not sleeping on this by any stretch of the means,” says Stevens. “We are evaluating and asking the right questions and also working alongside our partners in the Biden-Harris administration to get us to the best place for the harnessing of artificial intelligence.”

Update: We added further comments from the Republican spokesperson.

Here’s what I made of Snap’s new augmented-reality Spectacles

By: Mat Honan
17 September 2024 at 20:04

Before I get to Snap’s new Spectacles, a confession: I have a long history of putting goofy new things on my face and liking it. Back in 2011, I tried on Sony’s head-mounted 3D glasses and, apparently, enjoyed them. Sort of. At the beginning of 2013, I was enamored with a Kickstarter project I saw at CES called Oculus Rift. I then spent the better part of the year with Google’s ridiculous Glass on my face and thought it was the future. Microsoft HoloLens? Loved it. Google Cardboard? Totally normal. Apple Vision Pro? A breakthrough, baby. 

Anyway. Snap announced a new version of its Spectacles today. These are AR glasses that could finally deliver on the promises devices like Magic Leap, or HoloLens, or even Google Glass, made many years ago. I got to try them out a couple of weeks ago. They are pretty great! (But also: See above)

These fifth-generation Spectacles can display visual information and applications directly on their see-through lenses, making objects appear as if they are in the real world. The interface is powered by the company’s new operating system, Snap OS. Unlike typical VR headsets or spatial computing devices, these augmented-reality (AR) lenses don’t obscure your vision and re-create it with cameras. There is no screen covering your field of view. Instead, images appear to float and exist in three dimensions in the world around you, hovering in the air or resting on tables and floors.

Snap CTO Bobby Murphy described the intended result to MIT Technology Review as “computing overlaid on the world that enhances our experience of the people in the places that are around us, rather than isolating us or taking us out of that experience.” 

In my demo, I was able to stack Lego pieces on a table, smack an AR golf ball into a hole across the room (at least a triple bogey), paint flowers and vines across the ceilings and walls using my hands, and ask questions about the objects I was looking at and receive answers from Snap’s virtual AI chatbot. There was even a little purple virtual doglike creature from Niantic, a Peridot, that followed me around the room and outside onto a balcony. 

But look up from the table and you see a normal room. The golf ball is on the floor, not a virtual golf course. The Peridot perches on a real balcony railing. Crucially, this means you can maintain contact—including eye contact—with the people around you in the room. 

To accomplish all this, Snap packed a lot of tech into the frames. There are two processors embedded inside, so all the compute happens in the glasses themselves. Cooling chambers in the sides did an effective job of dissipating heat in my demo. Four cameras capture the world around you, as well as the movement of your hands for gesture tracking. The images are displayed via micro-projectors, similar to those found in pico projectors, that do a nice job of presenting those three-dimensional images right in front of your eyes without requiring a lot of initial setup. It creates a tall, deep field of view—Snap claims it is similar to a 100-inch display at 10 feet—in a relatively small, lightweight device (226 grams). What’s more, they automatically darken when you step outside, so they work well not just in your home but out in the world.
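As a rough sanity check (my own back-of-the-envelope figure, not an official Snap spec), the 100-inch-display-at-10-feet comparison works out to a diagonal field of view of about 45 degrees:

```python
import math

# Snap's comparison: a 100-inch (diagonal) display viewed from 10 feet away.
diagonal_in = 100
distance_in = 10 * 12  # 10 feet, in inches

# The implied diagonal field of view is the angle the display subtends.
fov = 2 * math.degrees(math.atan((diagonal_in / 2) / distance_in))
print(f"{fov:.1f} degrees")  # ~45.2 degrees
```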

You control all this with a combination of voice and hand gestures, most of which came pretty naturally to me. You can pinch to select objects and drag them around, for example. The AI chatbot could respond to questions posed in natural language (“What’s that ship I see in the distance?”). Some of the interactions require a phone, but for the most part Spectacles are a standalone device. 

They don’t come cheap. Snap isn’t selling the glasses directly to consumers but requires you to agree to at least one year of paying $99 per month for a Spectacles Developer Program account that gives you access to them. I was assured that the company has a very open definition of who can develop for the platform. Snap also announced a new partnership with OpenAI that takes advantage of its multimodal capabilities, which it says will help developers create experiences with real-world context about the things people see or hear (or say). 

[Photo: the author standing outside wearing the oversize Snap Spectacles, looking a bit goofy. It me.]

Having said that, it all worked together impressively well. The three-dimensional objects maintained a sense of permanence in the spaces where you placed them—meaning you can move around and they stay put. The AI assistant correctly identified everything I asked it to. There were some glitches here and there—Lego bricks collapsing into each other, for example—but for the most part this was a solid little device. 

It is not, however, a low-profile one. No one will mistake these for a normal pair of glasses or sunglasses. A colleague described them as beefed-up 3D glasses, which seems about right. They are not the silliest computer I have put on my face, but they didn’t exactly make me feel like a cool guy, either. Here’s a photo of me trying them out. Draw your own conclusions.

The Download: OpenAI’s latest model, and 4D printing’s potential

17 September 2024 at 14:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why OpenAI’s new model is such a big deal

Last week OpenAI released a new model called o1 (previously referred to under the code name “Strawberry” and, before that, Q*) that blows GPT-4o out of the water.

Unlike previous models that are well suited for language tasks like writing and editing, OpenAI o1 is focused on multistep “reasoning,” the type of process required for advanced mathematics, coding, or other STEM-based questions. The model is also trained to answer PhD-level questions in subjects ranging from astrophysics to organic chemistry.

The bulk of LLM progress until now has been language-driven, but in addition to getting lots of facts wrong, such LLMs have failed to demonstrate the types of skills required to solve important problems in fields like drug discovery, materials science, coding, or physics. OpenAI’s o1 is one of the first signs that LLMs might soon become genuinely helpful companions to human researchers in these fields. Read the full story.

—James O’Donnell

This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.

This designer creates magic from everyday materials

Back in 2012, designer and computer scientist Skylar Tibbits started working on 3D-printed materials that could change their shape or properties after being printed—a concept that Tibbits dubbed “4D printing,” where the fourth dimension is time.

Today, 4D printing is its own field—the subject of a professional society and thousands of papers, with researchers around the world looking into potential applications from self-adjusting biomedical devices to soft robotics.

But not long after 4D printing took off, Tibbits was already looking toward a new challenge: What other capabilities can we build into materials? And can we do that without printing? Read the full story.

—Anna Gibbs

This piece is from the latest print issue of MIT Technology Review, which celebrates 125 years of the magazine! If you don’t already, subscribe now to get 25% off future copies once they land.

A special preview of EmTech MIT: AI, Climate, and the new rules of business

Artificial intelligence and climate technologies are the two greatest forces impacting business decisions today. This year at EmTech MIT, our annual flagship conference, we examine the breakthroughs, concerns, and the near-future possibilities brought on by AI, as well as the climate technologies building the green economy.

Register here to join us at 12:30pm ET today for a LinkedIn event previewing everything you can expect from this year’s event.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 TikTok had a rough day in court
Federal judges questioned its argument that Congress lacks the authority to ban it. (NYT $)
+ If TikTok doesn’t break from its parent company, it’ll be banned on January 19, 2025. (FT $)
+ There’s a good chance TikTok may have to escalate its fight to the Supreme Court. (Bloomberg $)
+ The depressing truth about TikTok’s impending ban. (MIT Technology Review)

2 Intel could receive up to $3 billion in chip grants
To manufacture chips for the US military. (Bloomberg $)
+ Intel’s contract manufacturing business has inked a deal with Amazon. (Reuters)
+ What’s next in chips. (MIT Technology Review)

3 Apple’s new iOS 18 software is here
But Apple Intelligence, its suite of AI tools, is nowhere to be seen. (WSJ $)
+ The new software is much more customizable than previous versions. (Ars Technica)
+ Here are the best features worth paying attention to. (NYT $)

4 Donald Trump has launched a new cryptocurrency business
The venture looks an awful lot like a play to the crypto faithful. (CNN)
+ Trump doesn’t seem to know a great deal about crypto. (Reuters)
+ Opportunists are already taking advantage of Trump’s fans. (The Verge)

5 More Meta smart glasses are likely to be on their way
The company signed a 10-year extension deal with glasses maker EssilorLuxottica. (Reuters)

6 Working in a data center is like firefighting
Human workers are constantly on the lookout for technical issues. (WP $)

7 Googling one of art’s most famous paintings returned AI slop
Users searching for Hieronymus Bosch’s Garden of Earthly Delights were met with AI-generated garbage. (404 Media)
+ Why artists are becoming less scared of AI. (MIT Technology Review)

8 An Nvidia GPU purse can be yours for $1,024
Hype? What hype? (Insider $)
+ The company’s stranglehold on the chip industry is being closely watched. (IEEE Spectrum)

9 Can you tell blue and green apart? 🔵 🟢
A new viral test plays with our personal color perception. (The Guardian)  

10 The latest YouTube trend? 80s weather reports
Set to dreamy soundtracks. (Wired $)

Quote of the day

“The speech on TikTok is not Chinese speech. It is American speech.”

—Jeffrey Fisher, a lawyer arguing on behalf of TikTok content creators, argues that banning the app in the US could violate the rights of Americans, the BBC reports.

The big story

Psychedelics are having a moment and women could be the ones to benefit

August 2022

Psychedelics are having a moment. After decades of prohibition and vilification, they are increasingly being employed as therapeutics. Drugs like ketamine, MDMA, and psilocybin mushrooms are being studied in clinical trials to treat depression, substance abuse, and a range of other maladies.

And as these long-taboo drugs stage a comeback in the scientific community, it’s possible they could be especially promising for women. Read the full story.

—Taylor Majewski

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The artists who created these Star Wars matte paintings for the films’ futuristic backdrops were supremely talented.
+ Madrid really loves crisps (or potato chips, to the uninitiated).
+ #Restock videos are all the rage these days.
+ How to enjoy the great outdoors without getting lost.

Why OpenAI’s new model is such a big deal

17 September 2024 at 10:59

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

Last weekend, I got married at a summer camp, and during the day our guests competed in a series of games inspired by the show Survivor that my now-wife and I orchestrated. When we were planning the games in August, we wanted one station to be a memory challenge, where our friends and family would have to memorize part of a poem and then relay it to their teammates so they could re-create it with a set of wooden tiles. 

I thought OpenAI’s GPT-4o, its leading model at the time, would be perfectly suited to help. I asked it to create a short wedding-themed poem, with the constraint that each letter could only appear a certain number of times so we could make sure teams would be able to reproduce it with the provided set of tiles. GPT-4o failed miserably. The model repeatedly insisted that its poem worked within the constraints, even though it didn’t. It would correctly count the letters only after the fact, while continuing to deliver poems that didn’t fit the prompt. Without the time to meticulously craft the verses by hand, we ditched the poem idea and instead challenged guests to memorize a series of shapes made from colored tiles. (That ended up being a total hit with our friends and family, who also competed in dodgeball, egg tosses, and capture the flag.)    
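
(For the curious: checking the constraint is only a few lines of code, even though generating text that satisfies it clearly was not. Here’s a minimal sketch, with an invented tile budget rather than our actual set.)

```python
from collections import Counter

def fits_tile_budget(poem: str, tiles: dict[str, int]) -> bool:
    """Return True if the poem can be spelled with the available letter tiles."""
    used = Counter(ch for ch in poem.lower() if ch.isalpha())
    return all(count <= tiles.get(letter, 0) for letter, count in used.items())

# Hypothetical tile budget -- not the real set from our game.
tiles = {"l": 2, "o": 3, "v": 2, "e": 4, "w": 1, "d": 2,
         "r": 2, "i": 2, "n": 2, "g": 1, "s": 2, "t": 2}
print(fits_tile_budget("love wed ring", tiles))     # True: every letter is within budget
print(fits_tile_budget("everlasting love", tiles))  # False: 'a' isn't in the budget at all
```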

However, last week OpenAI released a new model called o1 (previously referred to under the code name “Strawberry” and, before that, Q*) that blows GPT-4o out of the water on this type of task.

Unlike previous models that are well suited for language tasks like writing and editing, OpenAI o1 is focused on multistep “reasoning,” the type of process required for advanced mathematics, coding, or other STEM-based questions. It uses a “chain of thought” technique, according to OpenAI. “It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working,” the company wrote in a blog post on its website.
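
Chain-of-thought prompting itself isn’t new—researchers have long coaxed similar behavior out of ordinary models at the prompt level. What’s different with o1 is that the behavior is trained into the model. For readers who haven’t seen the technique, here’s a generic prompt-level illustration (my own toy example, not OpenAI’s implementation):

```python
# A classic prompt-level version of chain-of-thought: the scaffold below nudges
# an ordinary LLM to emit intermediate reasoning before its final answer.
# Generic illustration only; o1 builds this behavior into the model itself.
prompt = """Q: A tile set has 4 e's, 2 v's, and 1 r. Can it spell 'verve'?
A: Let's think step by step. 'verve' needs 2 v's, 2 e's, and 1 r.
We have 2 v's (enough), 4 e's (enough), and 1 r (enough). Yes.

Q: Can the same tiles spell 'revere'?
A: Let's think step by step.
"""
# 'revere' needs 3 e's, 2 r's, and 1 v -- a model reasoning stepwise should
# notice the second 'r' is missing and answer no.
```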

OpenAI’s tests point to resounding success. The model ranks in the 89th percentile on questions from the competitive coding organization Codeforces and would be among the top 500 high school students in the USA Math Olympiad, which covers geometry, number theory, and other math topics. The model is also trained to answer PhD-level questions in subjects ranging from astrophysics to organic chemistry. 

In math olympiad questions, the new model is 83.3% accurate, versus 13.4% for GPT-4o. In the PhD-level questions, it averaged 78% accuracy, compared with 69.7% from human experts and 56.1% from GPT-4o. (In light of these accomplishments, it’s unsurprising the new model was pretty good at writing a poem for our nuptial games, though still not perfect; it used more Ts and Ss than instructed to.)

So why does this matter? The bulk of LLM progress until now has been language-driven, resulting in chatbots or voice assistants that can interpret, analyze, and generate words. But in addition to getting lots of facts wrong, such LLMs have failed to demonstrate the types of skills required to solve important problems in fields like drug discovery, materials science, coding, or physics. OpenAI’s o1 is one of the first signs that LLMs might soon become genuinely helpful companions to human researchers in these fields. 

It’s a big deal because it brings “chain-of-thought” reasoning in an AI model to a mass audience, says Matt Welsh, an AI researcher and founder of the LLM startup Fixie. 

“The reasoning abilities are directly in the model, rather than one having to use separate tools to achieve similar results. My expectation is that it will raise the bar for what people expect AI models to be able to do,” Welsh says.

That said, it’s best to take OpenAI’s comparisons to “human-level skills” with a grain of salt, says Yves-Alexandre de Montjoye, an associate professor in math and computer science at Imperial College London. It’s very hard to meaningfully compare how LLMs and people go about tasks such as solving math problems from scratch.

Also, AI researchers say that measuring how well a model like o1 can “reason” is harder than it sounds. If it answers a given question correctly, is that because it successfully reasoned its way to the logical answer? Or was it aided by a sufficient starting point of knowledge built into the model? The model “still falls short when it comes to open-ended reasoning,” Google AI researcher François Chollet wrote on X.

Finally, there’s the price. This reasoning-heavy model doesn’t come cheap. Though access to some versions of the model is included in premium OpenAI subscriptions, developers using o1 through the API will pay three times as much as for GPT-4o—$15 per 1 million input tokens, versus $5. The new model also won’t be most users’ first pick for more language-heavy tasks, where GPT-4o continues to be the better option, according to OpenAI’s user surveys.
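
For a sense of scale, here’s a back-of-envelope sketch using just the input-token prices quoted above; output tokens are billed separately and typically cost more, so real bills run higher. The workload figure is hypothetical.

```python
def input_cost_usd(tokens: int, usd_per_million: float) -> float:
    """Input-token cost only, at a flat per-million-token price."""
    return tokens / 1_000_000 * usd_per_million

# Prices quoted above: $15 per 1M input tokens for o1, $5 for GPT-4o.
daily_tokens = 10_000_000  # a hypothetical daily workload
for model, price in [("o1", 15.0), ("GPT-4o", 5.0)]:
    print(f"{model}: ${input_cost_usd(daily_tokens, price):.2f}/day")
# o1: $150.00/day vs GPT-4o: $50.00/day -- the same 3x ratio at any volume
```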

What will it unlock? We won’t know until researchers and labs have the access, time, and budget to tinker with the new model and find its limits. But it’s surely a sign that the race for models that can outreason humans has begun. 

Now read the rest of The Algorithm


Deeper learning

Chatbots can persuade people to stop believing in conspiracy theories

Researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity. 

Why this matters: The findings could represent an important step forward in how we engage with and educate people who espouse such baseless theories, says Yunhao (Jerry) Zhang, a postdoctoral fellow affiliated with the Psychology of Technology Institute who studies AI’s impacts on society. “They show that with the help of large language models, we can—I wouldn’t say solve it, but we can at least mitigate this problem,” he says. “It points out a way to make society better.” Read more from Rhiannon Williams here.

Bits and bytes

Google’s new tool lets large language models fact-check their responses

Called DataGemma, it uses two methods to help LLMs check their responses against reliable data and cite their sources more transparently to users. (MIT Technology Review)

Meet the radio-obsessed civilian shaping Ukraine’s drone defense 

Since Russia’s invasion, Serhii “Flash” Beskrestnov has become an influential, if sometimes controversial, force—sharing expert advice and intel on the ever-evolving technology that’s taken over the skies. His work may determine the future of Ukraine, and wars far beyond it. (MIT Technology Review)

Tech companies have joined a White House commitment to prevent AI-generated sexual abuse imagery

The pledges, signed by firms like OpenAI, Anthropic, and Microsoft, aim to “curb the creation of image-based sexual abuse.” The companies promise to set limits on what models will generate and to remove nude images from training data sets where possible.  (Fortune)

OpenAI is now valued at $150 billion

The valuation stems from its ongoing talks to raise $6.5 billion. Given that OpenAI is becoming increasingly costly to operate, and could lose as much as $5 billion this year, it’s tricky to see how it all adds up. (The Information)

Google is funding an AI-powered satellite constellation that will spot wildfires faster

16 September 2024 at 15:00

Early next year, Google and its partners plan to launch the first in a series of satellites that together would provide close-up, frequently refreshed images of wildfires around the world, offering data that could help firefighters battle blazes more rapidly, effectively, and safely.

The online search giant’s nonprofit and research arms have collaborated with the Moore Foundation, the Environmental Defense Fund, the satellite company Muon Space, and others to deploy 52 satellites equipped with custom-developed sensors over the coming years. 

The FireSat satellites will be able to spot fires as small as 5 by 5 meters (16 by 16 feet) on any speck of the globe. Once the full constellation is in place, the system should be capable of updating those images about every 20 minutes, the group says.

Those capabilities together would mark a significant upgrade over what’s available from the satellites that currently provide data to fire agencies. Generally, they can provide either high-resolution images that aren’t updated rapidly enough to track fires closely or frequently refreshed images that are relatively low-resolution.

The Earth Fire Alliance collaboration will also leverage Google’s AI wildfire tools, which have been trained to detect early indications of wildfires and track their progression, to draw additional insights from the data.

The images and analysis will be provided free to fire agencies around the world, helping to improve understanding of where fires are, where they’re moving, and how hot they’re burning. The information could help agencies stamp out small fires before they turn into raging infernos, place limited firefighting resources where they’ll do the most good, and evacuate people along the safest paths.

“In the satellite image of the Earth, a lot of things can be mistaken for a fire: a glint, a hot roof, smoke from another fire,” says Chris Van Arsdale, climate and energy research lead at Google Research and chairman of the Earth Fire Alliance. “Detecting fires becomes a game of looking for needles in a world of haystacks. Solving this will enable first responders to act quickly and precisely when a fire is detected.”

Some details of FireSat were unveiled earlier this year. But the organizations involved will announce additional information about their plans today, including the news that Google.org, the company’s charitable arm, has provided $13 million to the program and that the inaugural launch is scheduled to occur next year. 

Reducing the fog of war

The news comes as large fires rage across millions of acres in the western US, putting people and property at risk. The blazes include the Line Fire in Southern California, the Shoe Fly Fire in central Oregon, and the Davis Fire south of Reno, Nevada.

Wildfires have become more frequent, extreme, and dangerous in recent decades. That, in part, is a consequence of climate change: Rising temperatures suck the moisture from trees, shrubs, and grasses. But fires increasingly contribute to global warming as well. A recent study found that the fires that scorched millions of acres across Canada last year pumped out 3 billion tons of carbon dioxide, four times the annual pollution produced by the airline industry.

Treeline with raging fire and sky blotted out by smoke
GOOGLE

Humans have also increased fire risk by suppressing natural fires for decades, which has allowed fuel to build up in forests and grasslands, and by constructing communities along the edge of the wilderness without appropriate rules, materials, and safeguards.

Observers say that FireSat could play an important role in combating fires, both by enabling fire agencies to extinguish small ones before they grow into large ones and by informing effective strategies for battling them once they’ve crossed that point.

“What these satellites will do is reduce the fog of war,” says Michael Wara, director of the climate and energy policy program at Stanford University’s Woods Institute for the Environment, who is focused on fire policy issues. “Like when a situation is really dynamic and very dangerous for firefighters and they’re trying to make decisions very quickly about whether to move in to defend structures or try to evacuate people.” 

(Wara serves on the advisory board of the Moore Foundation’s Wildfire Resilience Initiative.)

Some areas, like California, already have greater visibility into the current state of fires or early signs of outbreaks, thanks to technology like Department of Defense satellites, remote camera networks, and planes, helicopters, and drones. But FireSat will be especially helpful for “countries that have less-well-resourced wildland fighting capability,” Wara adds.

Better images, more data, and AI will not be able to fully counter the increased fire dangers. Wara and other fire experts argue that regions need to use prescribed burns and other efforts to more aggressively reduce the buildup of fuel, rethink where and how we build communities in fire-prone areas, and do more to fund and support the work of firefighters on the ground. 

Sounding an earlier alarm for fires will only help reduce dangers when regions have, or develop, the added firefighting resources needed to combat the most dangerous ones quickly and effectively. Communities will also need to put in place better policies to determine what types of fires should be left to burn, and under what conditions.

‘A game changer’

Kate Dargan Marquis, a senior wildfire advisor to the Moore Foundation who previously served as state fire marshal for California, says she can “personally attest” to the difference that such tools will make to firefighters in the field.

“It is a game changer, especially as wildfires are becoming more extreme, more frequent, and more dangerous for everyone,” she says. “Information like this will make a lifesaving difference for firefighters and communities around the globe.”

Kate Dargan Marquis, senior advisor, Moore Foundation.
GOOGLE

Google Research developed the satellites’ sensors and tested them—along with the company’s AI fire-detection models—by conducting flights over controlled burns in California. Google intends to work with Earth Fire Alliance “to ensure AI can help make this data as useful as possible, and also that wildfire information is shared as widely as possible,” the company said.

Google’s Van Arsdale says that providing visual images of every incident around the world from start to finish will be enormously valuable to scientists studying wildfires and climate change. 

“We can combine this data with Google’s existing models of the Earth to help advance our understanding of fire behavior and fire dynamics across all of Earth’s ecosystems,” he says. “All this together really has the potential to help mitigate the environmental and social impact of fire while also improving people’s health and safety.”

Specifically, it could improve assessments of fire risk, as well as our understanding of the most effective means of preventing or slowing the spread of fires. For instance, it could help communities determine where it would be most cost-effective to remove trees and underbrush. 

Figuring out the best ways to conduct such interventions is another key goal of the program, given their high cost and the limited funds available for managing wildlands, says Genny Biggs, the program director for the Moore Foundation’s Wildfire Resilience Initiative.

The launch

The idea for FireSat grew out of a series of meetings that began with a 2019 workshop hosted by the Moore Foundation, which provided the first philanthropic funding for the program. 

The first satellite, scheduled to be launched aboard a SpaceX rocket early next year, will be fully functional aside from some data transmission features. The goals of the “protoflight” mission include testing the onboard systems and the data they send back. The Earth Fire Alliance will work with a handful of early-adopter agencies to prepare for the next phases. 

The group intends to launch three fully operational satellites in 2026, with additional deployments in the years that follow. Muon Space will build and operate the satellites. 

Agencies around the world should be able to receive hourly wildfire updates once about half of the constellation is operational, says Brian Collins, executive director of the Earth Fire Alliance. It hopes to launch all 52 satellites by around the end of this decade.

Each satellite is designed to last about five years, so the organization will eventually need to deploy 10 more each year to maintain the constellation.

The Earth Fire Alliance has secured about two-thirds of the funding it needs for the first phase of the program, which includes the first four launches. The organization will need to raise additional money from government agencies, international organizations, philanthropies, and other groups  to deploy, maintain, and operate the full constellation. It estimates the total cost will exceed $400 million, which Collins notes “is 1/1000th of the economic losses due to extreme wildfires annually in the US alone.”

Asked if commercial uses of the data could also support the program, including potentially military ones, Collins said in an email: “Adjacent applications range from land use management and agriculture to risk management and industrial impact and mitigation.” 

“At the same time, we know that as large agencies and government agencies adopt FireSat data to support a broad public safety mandate, they may develop all-hazard, emergenc[y] management, and security related uses of data,” he added. “As long as opportunities are in balance with our charter to advance a global approach to wildfire and climate resilience, we welcome new ideas and applications of our data.”

‘Living with fire’

A wide variety of startups have emerged in recent years promising to use technology to reduce the frequency and severity of wildfires—for example, by installing cameras and sensors in forests and grasslands, developing robots to carry out controlled burns, deploying autonomous helicopters that can drop suppressant, and harnessing AI to predict wildfire behavior and inform forest and fire management strategies.

So far, even with all these new tools, it’s still been difficult for communities to keep pace with the rising dangers.

Dargan Marquis—who founded her own wildfire software company, Intterra—says she is confident the incidence of disastrous fires can be meaningfully reduced with programs like FireSat, along with other improved technologies and policies. But she says it’s likely to take decades to catch up with the growing risks, as the world continues warming up.

“We’re going to struggle in places like California, these Mediterranean climates around the world, while our technology and our capabilities and our inventions, etc., catch up with that level of the problem,” she says. 

“We can turn that corner,” she adds. “If we work together on a comprehensive strategy with the right data and a convincing plan over the next 50 years, I do think that by the end of the century, we absolutely can be living with fire.”

The Download: an AI safety hotline, and tech for farmers

16 September 2024 at 14:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why we need an AI safety hotline

—Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

Existing measures to mitigate AI risks aren’t enough to protect us, so we need new approaches. One could be a kind of AI safety hotline, staffed by expert volunteers. Read more about how the hotline could work.

African farmers are using private satellite data to improve crop yields

In many developing countries, farming is impaired by a lack of data. For centuries, farmers relied on native intelligence rooted in experience and hope.

Now, farmers in Africa are turning to technology to avoid cycles of heavy crop losses that could spell financial disaster. They’re partnering with EOS Data Analytics, a California-based provider of satellite imagery and data for precision farming, which allows them to track where and when specific spots on various farms need attention—and even to anticipate weather warnings. Read the full story.

—Orji Sunday

This piece is from the latest print issue of MIT Technology Review, which is celebrating 125 years of the magazine! If you don’t already, subscribe now to get 25% off future copies once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 TikTok is heading to court today 
An appeals court will hear whether the app should be banned in the US. (The Verge)
+ The app is fighting the potential ban, which would kick in early next year. (NYT $)
+ The depressing truth behind US attempts to ban TikTok. (MIT Technology Review)

2 China has made a chipmaking equipment breakthrough
A new machine should lessen its reliance on suppliers sanctioned by the US. (Bloomberg $)
+ What’s next in chips. (MIT Technology Review)

3 OpenAI’s newest AI models could be used to create bioweapons
The company itself has given its latest releases its highest safety warning to date. (FT $)
+ In theory, it could aid experts with reproducing a biological threat. (Vox)
+ To avoid AI doom, learn from nuclear safety. (MIT Technology Review)

4 Big Tech’s carbon footprint is likely way bigger than they say
Like, 662% bigger. (The Guardian)
+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

5 Donald Trump is launching a new crypto business
He’s set to launch it during a livestream today. (NYT $)

6 SpaceX’s private space mission has touched down safely on Earth
The first commercial mission was a big success—and opened the doors for future non-government projects. (BBC)
+ The crew reached a higher altitude than any human has traveled in 50 years. (CNN)

7 We need to stop building in the ocean
It’s severely affecting the way marine life navigates. (The Atlantic $)

8 We’re still learning about the benefits of breast milk 
Its antimicrobial properties could help to treat cancer and other conditions. (Economist $)
+ Startups are racing to reproduce breast milk in the lab. (MIT Technology Review)

9 What the future of food holds
From robot chefs to healthier potatoes. (WSJ $)
+ Robot-packed meals are coming to the frozen-food aisle. (MIT Technology Review)

10 Meet Silicon Valley’s seriously pampered pets 🐕
Pet tech doesn’t come cheap. (The Information $)

Quote of the day

“Welcome back to planet Earth.”

—The host of the live SpaceX broadcast tracking the return of the company’s Polaris Dawn crew greets their homecoming after five days in orbit, the Washington Post reports.

The big story

The future of open source is still very much in flux

August 2023

When Xerox donated a new laser printer to MIT in 1980, the company couldn’t have known that the machine would ignite a revolution.

While the early decades of software development generally ran on a culture of open access, this new printer ran on inaccessible proprietary software, much to the horror of Richard M. Stallman, then a 27-year-old programmer at the university.

A few years later, Stallman released GNU, an operating system designed to be a free alternative to one of the dominant operating systems at the time: Unix. The free-software movement was born, with a simple premise: for the good of the world, all code should be open.

Forty years later, tech companies are making billions on proprietary software, and much of the technology around us is inscrutable. But while Stallman’s movement may look like a failed experiment, the free and open-source software movement is not only alive and well; it has become a keystone of the tech industry. Read the full story.

—Rebecca Ackermann

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The Hotel California solo series continues, this time with a very intense recorder edition (thanks Niall!).
+ Avocados should be extinct!? Say it ain’t so!
+ This quick and easy peach cobbler recipe looks like an absolute treat. 
+ Nothing but love for Moo Deng the tiny viral baby hippo. 🦛

Why we need an AI safety hotline

16 September 2024 at 11:00

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of ChatGPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

Testing AI models before they’re released is a common approach to mitigating certain risks, and it may help regulators weigh up the costs and benefits—and potentially block models from being released if they’re deemed too dangerous. But the accuracy and comprehensiveness of these tests leave a lot to be desired. AI models may “sandbag” the evaluation—hiding some of their capabilities to avoid raising any safety concerns. The evaluations may also fail to reliably uncover the full set of risks posed by any one model, and they suffer from limited scope—current tests are unlikely to surface every risk that warrants further investigation. There’s also the question of who conducts the evaluations and how their biases may influence testing efforts. For those reasons, evaluations need to be used alongside other governance tools. 

One such tool could be internal reporting mechanisms within the labs. Ideally, employees should feel empowered to regularly and fully share their AI safety concerns with their colleagues, and they should feel those colleagues can then be counted on to act on the concerns. However, there’s growing evidence that, far from being promoted, open criticism is becoming rarer in AI labs. Just three months ago, 13 former and current workers from OpenAI and other labs penned an open letter expressing fear of retaliation if they attempt to disclose questionable corporate behaviors that fall short of breaking the law. 

How to sound the alarm

In theory, external whistleblower protections could play a valuable role in the detection of AI risks. These could protect employees fired for disclosing corporate actions, and they could help make up for inadequate internal reporting mechanisms. Nearly every state has a public policy exception to at-will employment termination—in other words, terminated employees can seek recourse against their employers if they were retaliated against for calling out unsafe or illegal corporate practices. However, in practice this exception offers employees few assurances. Judges tend to favor employers in whistleblower cases. The likelihood of AI labs’ surviving such suits seems particularly high given that society has yet to reach any sort of consensus as to what qualifies as unsafe AI development and deployment. 

These and other shortcomings explain why the aforementioned 13 AI workers, including ex-OpenAI employee William Saunders, called for a novel “right to warn.” Companies would have to offer employees an anonymous process for disclosing risk-related concerns to the lab’s board, a regulatory authority, and an independent third body made up of subject-matter experts. The ins and outs of this process have yet to be figured out, but it would presumably be a formal, bureaucratic mechanism. The board, regulator, and third party would all need to make a record of the disclosure. It’s likely that each body would then initiate some sort of investigation. Subsequent meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, what AI workers really want is something different. 

When Saunders went on the Big Technology Podcast to outline his ideal process for sharing safety concerns, his focus was not on formal avenues for reporting established risks. Instead, he indicated a desire for some intermediate, informal step. He wants a chance to receive neutral, expert feedback on whether a safety concern is substantial enough to go through a “high stakes” process such as a right-to-warn system. Current government regulators, as Saunders says, could not serve that role. 

For one thing, they likely lack the expertise to help an AI worker think through safety concerns. What’s more, few workers will pick up the phone if they know it’s a government official on the other end—that sort of call may be “very intimidating,” as Saunders himself said on the podcast. Instead, he envisages being able to call an expert to discuss his concerns. In an ideal scenario, he’d be told that the risk in question does not seem that severe or likely to materialize, freeing him up to return to whatever he was doing with more peace of mind. 

Lowering the stakes

What Saunders is asking for in this podcast isn’t a right to warn, then, as that suggests the employee is already convinced there’s unsafe or illegal activity afoot. What he’s really calling for is a gut check—an opportunity to verify whether a suspicion of unsafe or illegal behavior seems warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing up these gut checks could be a much more informal one. For example, AI PhD students, retired AI industry workers, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees via a confidential and anonymous phone conversation. Hotline volunteers would have familiarity with leading safety practices, as well as extensive knowledge of what options, such as right-to-warn mechanisms, may be available to the employee. 

As Saunders indicated, few employees will likely want to go from 0 to 100 with their safety concerns—straight from colleagues to the board or even a government body. They are much more likely to raise their issues if an intermediate, informal step is available.

Studying examples elsewhere

The details of how precisely an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for instance, it may need some way to escalate the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is another matter that needs thorough investigation. How to recruit and retain volunteers is another key question. Given leading experts’ broad concern about AI risk, some may be willing to participate simply out of a desire to lend a hand. Should too few folks step forward, other incentives may be necessary. The essential first step, though, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models to emulate in building out the first AI hotline. 

One place to start is with ombudspersons. Other industries have recognized the value of identifying these neutral, independent individuals as resources for evaluating the seriousness of employee concerns. Ombudspersons exist in academia, nonprofits, and the private sector. The distinguishing attribute of these individuals and their staffers is neutrality—they have no incentive to favor one side or the other, and thus they’re more likely to be trusted by all. A glance at the use of ombudspersons in the federal government shows that when they are available, issues may be raised and resolved sooner than they would be otherwise.

This concept is relatively new. The US Department of Commerce established the first federal ombudsman in 1971. The office was tasked with helping citizens resolve disputes with the agency and with investigating agency actions. Other agencies, including the Social Security Administration and the Internal Revenue Service, soon followed suit. A retrospective review of these early efforts concluded that effective ombudspersons can meaningfully improve citizen-government relations. On the whole, ombudspersons were associated with an uptick in voluntary compliance with regulations and cooperation with the government. 

An AI ombudsperson or safety hotline would surely have different tasks and staff from an ombudsperson in a federal agency. Nevertheless, the general concept is worthy of study by those advocating safeguards in the AI industry. 

A right to warn may play a role in getting AI safety concerns aired, but we need to set up more intermediate, informal steps as well. An AI safety hotline is low-hanging regulatory fruit. A pilot made up of volunteers could be organized in relatively short order and provide an immediate outlet for those, like Saunders, who merely want a sounding board.

Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.

The Download: conspiracy-debunking chatbots, and fact-checking AI

13 September 2024 at 14:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Chatbots can persuade people to stop believing in conspiracy theories

The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and even leading to unnecessary deaths.

Now, researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity.

The findings could represent an important step forward in how we engage with and educate people who espouse baseless theories. Read the full story.

—Rhiannon Williams

Google’s new tool lets large language models fact-check their responses

The news: Google is releasing a tool called DataGemma that it hopes will help to reduce problems caused by AI ‘hallucinating,’ or making incorrect claims. It uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users. 

What next: If it works as hoped, it could be a real boon for Google’s plan to embed AI deeper into its search engine. But it comes with a host of caveats. Read the full story.

—James O’Donnell

Neuroscientists and architects are using this enormous laboratory to make buildings better

Have you ever found yourself lost in a building that felt impossible to navigate? Thoughtful building design should center on the people who will be using those buildings. But that’s no mean feat.

A design that works for some people might not work for others. People have different minds and bodies, and varying wants and needs. So how can we factor them all in?

To answer that question, neuroscientists and architects are joining forces at an enormous laboratory in East London—one that allows researchers to build simulated worlds. Read the full story.

—Jessica Hamzelou

This story is from The Checkup, our weekly biotech and health newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has released an AI model with ‘reasoning’ capabilities
It claims it’s a step toward its broader goal of human-like artificial intelligence. (The Verge)
+ It could prove particularly useful for coders and math tutors. (NYT $)
+ Why does AI being good at math matter? (MIT Technology Review)

2 Microsoft wants to lead the way in climate innovation
While simultaneously selling AI to fossil fuel companies. (The Atlantic $)
+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review)

3 The FDA has approved Apple’s AirPods as hearing aids
Just two years after the body first approved over-the-counter aids. (WP $)
+ It could fundamentally shift how people access hearing-enhancing devices. (The Verge)

4 Parents aren’t using Meta’s child safety controls 
So claims Nick Clegg, the company’s global affairs chief. (The Guardian)
+ Many tech execs restrict their own children’s exposure to technology. (The Atlantic $)

5 How AI is turbocharging legal action
Especially when it comes to mass litigation. (FT $)

6 Low-income Americans were targeted by false ads for free cash
Some victims had their health insurance plans changed without their consent. (WSJ $)

7 Inside the stratospheric rise of the ‘medical’ beverage
Promising us everything from glowier skin to increased energy. (Vox)

8 Japan’s police force is heading online
Cybercrime is booming, as criminal activity in the real world drops. (Bloomberg $)

9 AI can replicate your late loved ones’ handwriting ✍
For some, it’s a touching reminder of someone they loved. (Ars Technica)
+ Technology that lets us “speak” to our dead relatives has arrived. Are we ready? (MIT Technology Review)

10 Crypto creators are resorting to dangerous stunts for attention
Don’t try this at home. (Wired $)

Quote of the day

“You can’t have a conversation with James the AI bot. He’s not going to show up at events.”

—A former reporter for Garden Island, a local newspaper in Hawaii, dismisses the company’s decision to invest in new AI-generated presenters for its website, Wired reports.

The big story

AI hype is built on high test scores. Those tests are flawed.

August 2023

In the past few years, multiple researchers claim to have shown that large language models can pass cognitive tests designed for humans, from working through problems step by step to guessing what other people are thinking.

These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs. But there’s a problem: There’s little agreement on what those results really mean. Read the full story.
 
—William Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ It’s almost time for Chinese mooncake madness to celebrate the Moon Festival! 🥮
+ Pearl the Wonder Horse isn’t just a therapy animal—she’s also an accomplished keyboardist.
+ We love you Peter Dinklage!
+ Money for Nothing sounds even better on a lute.

Neuroscientists and architects are using this enormous laboratory to make buildings better

13 September 2024 at 11:00

Have you ever found yourself lost in a building that felt impossible to navigate? Thoughtful building design should center on the people who will be using those buildings. But that’s no mean feat.

It’s not just about navigation, either. Just think of an office that left you feeling sleepy or unproductive, or perhaps a health center that had a less-than-reviving atmosphere. A design that works for some people might not work for others. People have different minds and bodies, and varying wants and needs. So how can we factor them all in?

To answer that question, neuroscientists and architects are joining forces at an enormous laboratory in East London—one that allows researchers to build simulated worlds. In this lab, scientists can control light, temperature, and sound. They can create the illusion of a foggy night, or the tinkle of morning birdsong.

And they can study how volunteers respond to these environments, whether they be simulations of grocery stores, hospitals, pedestrian crossings, or schools. That’s how I found myself wandering around a fake art gallery, wearing a modified baseball cap with a sensor that tracked my movements.

I first visited the Person-Environment-Activity Research Lab, referred to as PEARL, back in July. I’d been chatting to Hugo Spiers, a neuroscientist based at University College London, about the use of video games to study how people navigate. Spiers had told me he was working on another project: exploring how people navigate a lifelike environment, and how they respond during evacuations (which, depending on the situation, could be a matter of life or death).

For their research, Spiers and his colleagues set up what they call a “mocked-up art gallery” within PEARL. The center in its entirety is pretty huge as labs go, measuring around 100 meters in length and 40 meters across, with 10-meter-high ceilings in places. There’s no other research center in the world like this, Spiers told me.

The gallery setup looked a little like a maze from above, with a pathway created out of hanging black sheets. The exhibits themselves were videos of dramatic artworks that had been created by UCL students.

When I visited in July, Spiers and his colleagues were running a small pilot study to trial their setup. As a volunteer participant, I was handed a numbered black cap with a square board on top, marked with a large QR code. This code would be tracked by cameras above and around the gallery. The cap also carried a sensor, transmitting radio signals to devices around the maze that could pinpoint my location within a range of 15 centimeters.

At first, all the volunteers (most of whom seemed to be students) were asked to explore the gallery as we would any other. I meandered around, watching the videos, and eavesdropping on the other volunteers, who were chatting about their research and upcoming dissertation deadlines. It all felt pretty pleasant and calm.

That feeling dissipated in the second part of the experiment, when we were each given a list of numbers, told that each one referred to a numbered screen, and informed that we had to visit all the screens in the order in which they appeared on our lists. “Good luck, everybody,” Spiers said.

Suddenly everyone seemed to be rushing around, slipping past each other and trying to move quickly while avoiding collisions. “It’s all got a bit frantic, hasn’t it?” I heard one volunteer comment as I accidentally bumped into another. I hadn’t managed to complete the task by the time Spiers told us the experiment was over. As I walked to the exit, I noticed that some people were visibly out of breath.

The full study took place on Wednesday, September 11. This time, there were around 100 volunteers (I wasn’t one of them). And while almost everyone was wearing a modified baseball cap, some had more complicated gear, including EEG caps to measure brainwaves, or caps that use near-infrared spectroscopy to measure blood flow in the brain. Some people were even wearing eye-tracking devices that monitored which direction they were looking.

“We will do something quite remarkable today,” Spiers told the volunteers, staff, and observers as the experiment started. Taking such detailed measurements from so many individuals in such a setting represented “a world first,” he said.

I have to say that being an observer was much more fun than being a participant. Gone was the stress of remembering instructions and speeding around a maze. Here in my seat, I could watch as the data collected from the cameras and sensors was projected onto a screen. The volunteers, represented as squiggly colored lines, made their way through the gallery in a way that reminded me of the game Snake.

The study itself was similar to the pilot study, although this time the volunteers were given additional tasks. At one point, they were given an envelope with the name of a town or city in it, and asked to find others in the group who had been given the same one. It was fascinating to see the groups form. Some had the names of destination cities like Bangkok, while others had been assigned fairly nondescript English towns like Slough, made famous as the setting of the British television series The Office. At another point, the volunteers were asked to evacuate the gallery from the nearest exit.

The data collected in this study represents something of a treasure trove for researchers like Spiers and his colleagues. The team is hoping to learn more about how people navigate a space, and whether they move differently if they are alone or in a group. How do friends and strangers interact, and does this depend on whether they have certain types of material to bond over? How do people respond to evacuations—will they take the nearest exit as directed, or will they run on autopilot to the exit they used to enter the space in the first place?

All this information is valuable to neuroscientists like Spiers, but it’s also useful to architects like his colleague Fiona Zisch, who is based at UCL’s Bartlett School of Architecture. “We do really care about how people feel about the places we design for them,” Zisch tells me. The findings can guide not only the construction of new buildings, but also efforts to modify and redesign existing ones.

PEARL was built in 2021 and has already been used to help engineers, scientists, and architects explore how neurodivergent people use grocery stores, and the ideal lighting to use for pedestrian crossings, for example. Zisch herself is passionate about creating equitable spaces—particularly for health and education—that everyone can make use of in the best possible way.

In the past, models used in architecture have been developed with typically built, able-bodied men in mind. “But not everyone is a 6’2″ male with a briefcase,” Zisch tells me. Age, gender, height, and a range of physical and psychological factors can all influence how a person will use a building. “We want to improve not just the space, but the experience of the space,” says Zisch. Good architecture isn’t just about creating stunning features; it’s about subtle adaptations that might not even be noticeable to most people, she says.

The art gallery study is just the first step for researchers like Zisch and Spiers, who plan to explore other aspects of neuroscience and architecture in more simulated environments at PEARL. The team won’t have results for a while yet. But it’s a fascinating start. Watch this space.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Brain-monitoring technology has come a long way, and tech designed to read our minds and probe our memories is already being used. Futurist and legal ethicist Nita Farahany explained why we need laws to protect our cognitive liberty in a previous edition of The Checkup.

Listening in on the brain can reveal surprising insights into how this mysterious organ works. One team of neuroscientists found that our brains seem to oscillate between states of order and chaos.

Last year, MIT Technology Review published our design issue of the magazine. If you’re curious, this piece on the history and future of the word “design,” by Nicholas de Monchaux, head of architecture at MIT, might be a good place to start.

Design covers much more than buildings, of course. Designers are creating new ways for users of prosthetic devices to feel more comfortable in their own skin—some of which have third thumbs, spikes, or “superhero skins.”

Achim Menges is an architect creating what he calls “self-shaping” structures with wood, which can twist and curve with changes in humidity. His approach is a low-energy way to make complex curved architectures, Menges told John Wiegand.

From around the web

Scientists are meant to destroy research samples of the poliovirus, as part of efforts to eradicate the disease it causes. But lab leaks of the virus may be more common than we’d like to think. (Science)

Neurofeedback allows people to watch their own brain activity in real time, and learn to control it. It could be a useful way to combat the impacts of stress. (Trends in Neurosciences)

Microbes, some of which cause disease in people, can travel over a thousand miles on wind, researchers have shown. Some appear to be able to survive their journey. (The Guardian)

Is the X chromosome involved in Alzheimer’s disease? A study of over a million people suggests so. (JAMA Neurology)

A growing number of men are paying thousands of dollars a year for testosterone therapies that are meant to improve their physical performance. But some are left with enlarged breasts, shrunken testicles, blood clots, and infertility. (The Wall Street Journal)

Chatbots can persuade people to stop believing in conspiracy theories

12 September 2024 at 20:00

The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and even leading to unnecessary deaths.

Now, researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity. The research is published today in the journal Science.

The findings could represent an important step forward in how we engage with and educate people who espouse such baseless theories, says Yunhao (Jerry) Zhang, a postdoctoral fellow affiliated with the Psychology of Technology Institute who studies AI’s impacts on society.

“They show that with the help of large language models, we can—I wouldn’t say solve it, but we can at least mitigate this problem,” he says. “It points out a way to make society better.” 

Few interventions have been proven to change conspiracy theorists’ minds, says Thomas Costello, a research affiliate at MIT Sloan and the lead author of the study. Part of what makes it so hard is that different people tend to latch on to different parts of a theory. This means that while presenting certain bits of factual evidence may work on one believer, there’s no guarantee that it’ll prove effective on another.

That’s where AI models come in, he says. “They have access to a ton of information across diverse topics, and they’ve been trained on the internet. Because of that, they have the ability to tailor factual counterarguments to particular conspiracy theories that people believe.”

The team tested its method by asking 2,190 crowdsourced workers to participate in text conversations with GPT-4 Turbo, OpenAI’s latest large language model.

Participants were asked to share details about a conspiracy theory they found credible, why they found it compelling, and any evidence they felt supported it. These answers were used to tailor responses from the chatbot, which the researchers had prompted to be as persuasive as possible.

Participants were also asked to indicate how confident they were that their conspiracy theory was true, on a scale from 0 (definitely false) to 100 (definitely true), and then rate how important the theory was to their understanding of the world. Afterwards, they entered into three rounds of conversation with the AI bot. The researchers chose three to make sure they could collect enough substantive dialogue.

After each conversation, participants were asked the same rating questions. The researchers followed up with all the participants 10 days after the experiment, and then two months later, to assess whether their views had changed following the conversation with the AI bot. The participants reported a 20% reduction of belief in their chosen conspiracy theory on average, suggesting that talking to the bot had fundamentally changed some people’s minds.
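
To make the headline figure concrete, here is one way such a relative reduction can be computed. The ratings below are illustrative numbers, not the study’s actual data.

```python
# Illustrative ratings only -- not the study's data.
before = [80, 65, 90, 70]   # belief ratings before chatting, on the 0-100 scale
after  = [60, 55, 75, 52]   # ratings from the same participants afterwards

# Each participant's drop, expressed as a share of their starting belief.
relative_drops = [(b - a) / b for b, a in zip(before, after)]
average_drop = sum(relative_drops) / len(relative_drops)
print(f"{average_drop:.0%}")   # ~21% average reduction for this toy sample
```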

“Even in a lab setting, 20% is a large effect on changing people’s beliefs,” says Zhang. “It might be weaker in the real world, but even 10% or 5% would still be very substantial.”

The authors sought to safeguard against AI models’ tendency to make up information—known as hallucinating—by employing a professional fact-checker to evaluate the accuracy of 128 claims the AI had made. Of these, 99.2% were found to be true, while 0.8% were deemed misleading. None were found to be completely false. 

One explanation for this high degree of accuracy is that a lot has been written about conspiracy theories on the internet, making them very well represented in the model’s training data, says David G. Rand, a professor at MIT Sloan who also worked on the project. The adaptable nature of GPT-4 Turbo means it could easily be connected to different platforms for users to interact with in the future, he adds.

“You could imagine just going to conspiracy forums and inviting people to do their own research by debating the chatbot,” he says. “Similarly, social media could be hooked up to LLMs to post corrective responses to people sharing conspiracy theories, or we could buy Google search ads against conspiracy-related search terms like ‘Deep State.’”

The research upended the authors’ preconceived notions about how receptive people were to solid evidence debunking not only conspiracy theories, but also other beliefs that are not rooted in good-quality information, says Gordon Pennycook, an associate professor at Cornell University who also worked on the project. 

“People were remarkably responsive to evidence. And that’s really important,” he says. “Evidence does matter.”

Google’s new tool lets large language models fact-check their responses

12 September 2024 at 15:00

As long as chatbots have been around, they have made things up. Such “hallucinations” are an inherent part of how AI models work. However, they’re a big problem for companies betting big on AI, like Google, because they make the responses it generates unreliable. 

Google is releasing a tool today to address the issue. Called DataGemma, it uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users. 

The first of the two methods is called Retrieval-Interleaved Generation (RIG), which acts as a sort of fact-checker. If a user prompts the model with a question—like “Has the use of renewable energy sources increased in the world?”—the model will come up with a “first draft” answer. Then RIG identifies what portions of the draft answer could be checked against Google’s Data Commons, a massive repository of data and statistics from reliable sources like the United Nations or the Centers for Disease Control and Prevention. Next, it runs those checks and replaces any incorrect original guesses with correct facts. It also cites its sources to the user.
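
Based on that description, the RIG loop might look something like the sketch below. Every helper name here (model.generate, extract_checkable_claims, data_commons.lookup) is a hypothetical stand-in, not Google’s actual API.

```python
def rig_answer(question, model, data_commons):
    """Sketch of Retrieval-Interleaved Generation (RIG) as described above:
    draft an answer first, then check its statistical claims against Data
    Commons and patch any that turn out to be wrong. All callables are
    hypothetical stand-ins."""
    draft = model.generate(question)                     # the "first draft" answer
    for claim in model.extract_checkable_claims(draft):  # e.g. "renewables grew 6% in 2023"
        fact = data_commons.lookup(claim.statistic)      # authoritative figure, if one exists
        if fact is not None and fact.value != claim.value:
            # Replace the model's guess with the checked number, citing its origin.
            draft = draft.replace(claim.text, f"{fact.value} (source: {fact.source})")
    return draft
```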

The second method, which is commonly used in other large language models, is called Retrieval-Augmented Generation (RAG). Consider a prompt like “What progress has Pakistan made against global health goals?” In response, the model examines which data in the Data Commons could help it answer the question, such as information about access to safe drinking water, hepatitis B immunizations, and life expectancies. With those figures in hand, the model then builds its answer on top of the data and cites its sources.
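
RAG reverses the order: retrieve the data first, then generate an answer on top of it. Again, a hand-wavy sketch with made-up helper names rather than the real DataGemma interface:

```python
def rag_answer(question, model, data_commons):
    """Sketch of Retrieval-Augmented Generation (RAG): fetch relevant
    statistics first, then have the model write an answer grounded in them.
    As above, every helper here is a hypothetical stand-in."""
    tables = data_commons.search(question)   # e.g. water access, immunization rates
    context = "\n".join(table.as_text() for table in tables)
    prompt = (
        "Answer the question using only the data below, citing each figure's source.\n"
        f"Data:\n{context}\n\nQuestion: {question}"
    )
    return model.generate(prompt)
```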

“Our goal here was to use Data Commons to enhance the reasoning of LLMs by grounding them in real-world statistical data that you could source back to where you got it from,” says Prem Ramaswami, head of Data Commons at Google. Doing so, he says, will “create more trustable, reliable AI.”

It is only available to researchers for now, but Ramaswami says access could widen further after more testing. If it works as hoped, it could be a real boon for Google’s plan to embed AI deeper into its search engine.  

However, it comes with a host of caveats. First, the usefulness of the methods is limited by whether the relevant data is in the Data Commons, which is more of a data repository than an encyclopedia. It can tell you the GDP of Iran, but it’s unable to confirm the date of the First Battle of Fallujah or when Taylor Swift released her most recent single. In fact, Google’s researchers found that with about 75% of the test questions, the RIG method was unable to obtain any usable data from the Data Commons. And even if helpful data is indeed housed in the Data Commons, the model doesn’t always formulate the right questions to find it. 

Second, there is the question of accuracy. When testing the RAG method, researchers found that the model gave incorrect answers 6% to 20% of the time. Meanwhile, the RIG method pulled the correct stat from Data Commons only about 58% of the time (though that’s a big improvement over the 5% to 17% accuracy rate of Google’s large language models when they’re not pinging Data Commons). 

Ramaswami says DataGemma’s accuracy will improve as it gets trained on more and more data. The initial version has been trained on only about 700 questions, and fine-tuning the model required his team to manually check each individual fact it generated. To further improve the model, the team plans to increase that data set from hundreds of questions to millions.

The Download: Ukraine’s drone defenses, and today’s climate heroes

12 September 2024 at 14:10

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet the radio-obsessed civilian shaping Ukraine’s drone defense

Drones have come to define the brutal conflict in Ukraine that has now dragged on for more than two and a half years. And most rely on radio communications—a technology that Serhii “Flash” Beskrestnov has obsessed over since childhood.

While Flash is now a civilian, the former officer has still taken it upon himself to inform his country’s defense in all matters related to radio. Once a month, he studies the skies for Russian radio transmissions and tries to learn about the problems facing troops in the fields and in the trenches.

In this race for survival—as each side constantly tries to best the other, only to start all over again when the other inevitably catches up—Ukrainian soldiers need to develop creative solutions, and fast. As Ukraine’s wartime radio guru, Flash may just be one of their best hopes for doing that. Read the full story.

—Charlie Metcalfe

Meet 2024’s climate innovators under 35

One way to know where a field is going? Take a look at what the sharpest new innovators are working on.

Good news for all of us: MIT Technology Review’s list of 35 Innovators Under 35 just dropped. A decent number of the people who made the list are working in fields that touch climate and energy in one way or another. And our senior climate reporter Casey Crownhart noticed a few trends that might provide some hints about the future. Read the full story.

This year’s list is available exclusively to MIT Technology Review subscribers. If you’re not a subscriber already, you can sign up here for a 25% discount on the usual price.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The first commercial spacewalk by private citizens is underway
And, thus far, it’s been a success. (CNN)
+ Take a look at the long and illustrious history of spacewalks. (BBC)

2 Silicon Valley is divided over California’s AI safety bill
Big Tech is waiting anxiously for the state’s governor to make a decision. (FT $)
+ What’s next for AI regulation? (MIT Technology Review)

3 Wildfires are raging across southern California
Nearly three times as much acreage has burned in the state so far this year as in the whole of 2023. (The Guardian)
+ Canada’s 2023 wildfires produced more emissions than fossil fuels in most countries. (MIT Technology Review)

4 Broken wind turbines have major repercussions
Multiple offshore wind projects have run into serious trouble. (NYT $)

5 The percentage of women in tech has hardly changed in 20 years
Women and people of color face an uphill battle to get hired. (WP $)
+ Why can’t tech fix its gender problem? (MIT Technology Review)

6 Google’s new app can turn your research into an AI podcast
Please don’t do this, though. (The Verge)

7 Human drivers keep crashing into Waymo robotaxis
The company has launched a new website to put the incidents into perspective. (Ars Technica)
+ What’s next for robotaxis in 2024. (MIT Technology Review)

8 This tiny SpaceX rival is poised to launch its first satellites
AST SpaceMobile’s star appears to be on the rise—but for how long? (Bloomberg $)

9 You’ve got a fax 📠
Pagers, fax machines, and dumbphones are all the rage these days. (WSJ $)

10 Have we reached peak emoji? 😲
The little pictograms are an illustrative language, not an ideographic one. (The Atlantic $)

Quote of the day

“A beautiful world.”

—Billionaire businessman Jared Isaacman’s reaction as he saw Earth from space during the first privately funded spacewalk today, the BBC reports.

The big story

What does GPT-3 “know” about me?

August 2022

One of the biggest stories in tech is the rise of large language models that produce text that reads like a human might have written it.

These models’ power comes from being trained on troves of publicly available human-created text hoovered up from the internet. If you’ve posted anything even remotely personal in English on the internet, chances are your data is part of some of the world’s most popular LLMs.

Melissa Heikkilä, MIT Technology Review’s senior AI reporter, wondered what data these models might have on her—and how it could be misused. So she put OpenAI’s GPT-3 to the test. Read about what she found.

In this section yesterday we stated that Amazon had acquired iRobot. This was incorrect—the acquisition never completed. We apologize for the error.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ These photos of London taken on a Casio camera watch are a snapshot of bygone times.
+ If you’ve noticed elaborate painted nails making their way into your cookbooks, it’s part of a wider trend. 💅
+ Painting Paint, now that’s meta.
+ Wow, enthusiastic skeletons are already limbering up for next month!

Meet the radio-obsessed civilian shaping Ukraine’s drone defense

12 September 2024 at 11:00

Serhii “Flash” Beskrestnov hates going to the front line. The risks terrify him. “I’m really not happy to do it at all,” he says. But to perform his particular self-appointed role in the Russia-Ukraine war, he believes it’s critical to exchange the relative safety of his suburban home north of the capital for places where the prospect of death is much more immediate. “From Kyiv,” he says, “nobody sees the real situation.”

So about once a month, he drives hundreds of kilometers east in a homemade mobile intelligence center: a black VW van in which stacks of radio hardware connect to an array of antennas on the roof that stand like porcupine quills when in use. Two small devices on the dash monitor for nearby drones. Over several days at a time, Flash studies the skies for Russian radio transmissions and tries to learn about the problems facing troops in the fields and in the trenches.

He is, at least in an unofficial capacity, a spy. But unlike other spies, Flash does not keep his work secret. In fact, he shares the results of these missions with more than 127,000 followers—including many soldiers and government officials—on several public social media channels. Earlier this year, for instance, he described how he had recorded five different Russian reconnaissance drones in a single night—one of which was flying directly above his van.

“Brothers from the Armed Forces of Ukraine, I am trying to inspire you,” he posted on his Facebook page in February, encouraging Ukrainian soldiers to learn how to recognize enemy drone signals as he does. “You will spread your wings, you will understand over time how to understand distance and, at some point, you will save the lives of dozens of your colleagues.”

Drones have come to define the brutal conflict that has now dragged on for more than two and a half years. And most rely on radio communications—a technology that Flash has obsessed over since childhood. So while Flash is now a civilian, the former officer has still taken it upon himself to inform his country’s defense in all matters related to radio.

As well as the frontline information he shares on his public channels, he runs a “support service” for almost 2,000 military communications specialists on Signal and writes guides for building anti-drone equipment on a tight budget. “He’s a celebrity,” one special forces officer recently shouted to me over the thump of music in a Kyiv techno club. He’s “like a ray of sun,” an aviation specialist in Ukraine’s army told me. Flash tells me that he gets 500 messages every day asking for help.

Despite this reputation among rank-and-file service members—and maybe because of it—Flash has also become a source of some controversy among the upper echelons of Ukraine’s military, he tells me. The Armed Forces of Ukraine declined multiple requests for comment, but Flash and his colleagues claim that some high-ranking officials perceive him as a security threat, worrying that he shares too much information and doesn’t do enough to secure sensitive intel. As a result, some refuse to support or engage with him. Others, Flash says, pretend he doesn’t exist. Either way, he believes they are simply insecure about the value of their own contributions—“because everybody knows that Serhii Flash is not sitting in Kyiv like a colonel in the Ministry of Defense,” he tells me in the abrasive fashion that I’ve come to learn is typical of his character. 

But above all else, hours of conversations with numerous people involved in Ukraine’s defense, including frontline signalmen and volunteers, have made clear that even if Flash is a complicated figure, he’s undoubtedly an influential one. His work has become greatly important to those fighting on the ground, and he recently received formal recognition from the military for his contributions to the fight, with two medals of commendation—one from the commander of Ukraine’s ground forces, the other from the Ministry of Defense. 

With a handheld directional antenna and a spectrum analyzer, Flash can scan for hostile signals.
EMRE ÇAYLAK

Despite a small number of semi-autonomous machines with a reduced reliance on radio communications, the drones that saturate the skies above the battlefield will continue to largely depend on this technology for the foreseeable future. And in this race for survival—as each side constantly tries to best the other, only to start all over again when the other inevitably catches up—Ukrainian soldiers need to develop creative solutions, and fast. As Ukraine’s wartime radio guru, Flash may just be one of their best hopes for doing that. 

“I know nothing about his background,” says “Igrok,” who works with drones in Ukraine’s 110th Mechanized Brigade and whom we are identifying by his call sign, as is standard military practice. “But I do know that most engineers and all pilots know nothing about radios and antennas. His job is definitely one of the most powerful forces keeping Ukraine’s aerial defense in good condition.”

And given the mounting evidence that both militaries and militant groups in other parts of the world are now adopting drone tactics developed in Ukraine, it’s not only his country’s fate that Flash may help to determine—but also the ways that armies wage war for years to come.

A prescient hobby

Before I can even start asking questions during our meeting in May, Flash is rummaging around in the back of the Flash-mobile, pulling out bits of gear for his own version of show-and-tell: a drone monitor with a fin-shaped antenna; a walkie-talkie labeled with a sticker from Russia’s state security service, the FSB; an approximately 1.5-meter-long foldable antenna that he says probably came from a US-made Abrams tank.

Flash has parked on a small wooded road beside the Kyiv Sea, an enormous water reservoir north of the capital. He’s wearing a khaki sweat-wicking polo shirt, combat trousers, and combat boots, with a Glock 19 pistol strapped to his hip. (“I am a threat to the enemy,” he tells me, explaining that he feels he has to watch his back.) As we talk, he moves from one side to the other, as if the electromagnetic waves that he’s studied since childhood have somehow begun to control the motion of his body.

Now 49, Flash grew up in a suburb of Kyiv in the ’80s. His father, who was a colonel in the Soviet army, recalls bringing home broken radio equipment for his preteen son to tinker with. Flash showed talent from the start. He attended an after-school radio club, and his father fixed an antenna to the roof of their apartment for him. Later, Flash began communicating with people in countries beyond the Iron Curtain. “It was like an open door to the big world for me,” he says.

Flash recalls with amusement a time when a letter from the KGB arrived at his family home, giving his father the fright of his life. His father didn’t know that his son had sent a message on a prohibited radio frequency, and someone had noticed. Following the letter, when Flash reported to the service’s office in downtown Kyiv, his teenage appearance confounded them. Boy, what are you doing here? Flash recalls an embarrassed official saying. 

Ukraine had been a hub of innovation as part of the Soviet Union. But by the time Flash graduated from military communications college in 1997, Ukraine had been independent for six years, and corruption and a lack of investment had stripped away the armed forces’ former grandeur. Flash spent just a year working in a military radio factory before he joined a private communications company developing Ukraine’s first mobile network, where he worked with technologies far more advanced than what he had used in the military. The project was called “Flash.” 

A decade and a half later, Flash had risen through the ranks of the industry to become head of department at the predecessor of the telecommunications company Vodafone Ukraine. But boredom prompted him to leave and become an entrepreneur. His many projects included a successful e-commerce site for construction services and a popular video game called Isotopium: Chernobyl, which he and a friend based on the “really neat concept,” according to a PC Gamer review, of allowing players to control real robots (fitted with radios, of course) around a physical arena. Released in 2019, it also received positive reviews from Reuters and BBC News.

But within just a few years, an unexpected attack would hurl his country into chaos—and upend Flash’s life. 

“I am here to help you with technical issues,” Flash remembers writing to his Signal group when he first started offering advice. “Ask me anything and I will try to find the answer for you.”
EMRE ÇAYLAK

By early 2022, rumors were growing of a potential attack from Russia. Though he was still working on Isotopium, Flash began to organize a radio network across the northern suburbs of Kyiv in preparation. Near his home, he set up a repeater about 65 meters above ground level that could receive and then rebroadcast transmissions from all the radios in its network across a 200-square-kilometer area. Another radio amateur programmed and distributed handheld radios.

When Russian forces did invade, on February 24, they took both fiber-optic and mobile networks offline, as Flash had anticipated. The radio network became the only means of instant communications for civilians and, critically, volunteers mobilizing to fight in the region, who used it to share information about Russian troop movements. Flash fed this intel to several professional Ukrainian army units, including a unit of special reconnaissance forces. He later received an award from the head of the district’s military administration for his part in Kyiv’s defense. The head of the district council referred to Flash as “one of the most worthy people” in the region.

Yet it was another of Flash’s projects that would earn him renown across Ukraine’s military.

Despite being more than 100 years old, radio technology is still critical in almost all aspects of modern warfare, from secure communications to satellite-guided missiles. But the decline of Ukraine’s military, coupled with the movement of many of the country’s young techies into lucrative careers in the growing software industry, created a vacuum of expertise. Flash leaped in to fill it.

Within roughly a month of Russia’s incursion, Flash had created a private group called “Military Signalmen” on the encrypted messaging platform Signal, and invited civilian radio experts from his personal network to join alongside military communications specialists. “I am here to help you with technical issues,” he remembers writing to the group. “Ask me anything and I will try to find the answer for you.”

The kinds of questions that Flash and his civilian colleagues answered in the first months were often basic. Group members wanted to know how to update the firmware on their devices, reset their radios’ passwords, or set up the internal communications networks for large vehicles. Many of the people drafted as communications specialists in the Ukrainian military had little relevant experience; Flash claims that even professional soldiers lacked appropriate training and has referred to large parts of Ukraine’s military communications courses as “either nonsense or junk.” (The Korolov Zhytomyr Military Institute, where many communications specialists train, declined a request for comment.)

After Russia’s invasion of Ukraine, Flash transformed his VW van into a mobile radio intelligence center.
EMRE ÇAYLAK

He demonstrates handheld spectrum analyzers with custom Ukrainian firmware.

News of the Signal group spread by word of mouth, and it soon became a kind of 24-hour support service that communications specialists in every sector of Ukraine’s frontline force subscribed to. “Any military engineer can ask anything and receive the answer within a couple of minutes,” Flash says. “It’s a nice way to teach people very quickly.” 

As the war progressed into its second year, Military Signalmen became, to an extent, self-sustaining. Its members had learned enough to answer one another’s questions themselves. And this is where several members tell me that Flash has contributed the most value. “The most important thing is that he brought together all these communications specialists in one team,” says Oleksandr “Moto,” a technician at an EU mission in Kyiv and an expert in Motorola equipment, who has advised members of the group. (He asked to not be identified by his surname, due to security concerns.) “It became very efficient.”

Today, Flash and his partners continue to answer occasional questions that require more advanced knowledge. But over the past year, as the group demanded less of his time, Flash has begun to focus on a rapidly proliferating weapon for which his experience had prepared him almost perfectly: the drone.  

A race without end

The Joker-10 drone, one of Russia’s latest additions to its arsenal, is equipped with a hibernation mechanism, Flash warned his Facebook followers in March. This feature allows the operator to fly it to a hidden location, leave it there undetected, and then awaken it when it’s time to attack. “It is impossible to detect the drone using radio-electronic means,” Flash wrote. “If you twist and turn it in your hands—it will explode.” 

This is just one example of the frequent developments in drone engineering that Ukrainian and Russian troops are adapting to every day. 

Larger strike drones similar to the US-made Reaper have been familiar in other recent conflicts, but sophisticated air defenses have rendered them less dominant in this war. Ukraine and Russia are developing and deploying vast numbers of other types of drones—including the now-notorious “FPV,” or first-person view, drone that pilots operate by wearing goggles that stream video of its perspective. These drones, which can carry payloads large enough to destroy tanks, are cheap (costing as little as $400), easy to produce, and difficult to shoot down. They use direct radio communications to transmit video feeds, receive commands, and navigate.

A Ukrainian soldier prepares an FPV drone equipped with dummy ammunition for a simulated flight operation.
MARCO CORDONE/SOPA IMAGES/SIPA USA VIA AP IMAGES

But their reliance on radio technology is a major vulnerability, because enemies can disrupt the signals that the drones emit—making them far less effective, if not inoperable. This form of electronic warfare—which most often involves emitting a more powerful signal at the same frequency as the operator’s—is called “jamming.”

Jamming, though, is an imperfect solution. Like drones, jammers themselves emit radio signals that can enable enemies to locate them. There are also effective countermeasures to bypass jammers. For example, a drone operator can use a tactic called “frequency hopping,” rapidly jumping between different frequencies to avoid a jammer’s signal. But even this method can be disrupted by algorithms that calculate the hopping patterns.
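
To see why hopping helps, consider a toy model in which the drone and its operator derive the same pseudo-random channel sequence from a shared secret seed. A jammer that doesn’t know the seed can’t predict the next channel, though one that records enough hops may infer the pattern, which is exactly the weakness noted above. The band plan and seed below are invented for illustration; real frequency-hopping radios do this in hardware and far faster.

```python
import random

# Toy frequency-hopping sketch: transmitter and receiver seed identical
# PRNGs, so they visit the same channels in the same order without ever
# broadcasting the schedule. The band plan and seed are invented examples.

CHANNELS_MHZ = [5725 + 5 * i for i in range(25)]  # hypothetical 5.8 GHz channels

def hop_sequence(seed: int, hops: int):
    """Yield `hops` channels drawn from a PRNG seeded with the shared secret."""
    rng = random.Random(seed)
    for _ in range(hops):
        yield rng.choice(CHANNELS_MHZ)

SHARED_SEED = 0xC0FFEE  # pre-agreed between operator and drone

tx = list(hop_sequence(SHARED_SEED, 8))  # operator's schedule
rx = list(hop_sequence(SHARED_SEED, 8))  # drone's schedule
assert tx == rx  # both sides stay in sync without transmitting the pattern
print(tx)
```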

For this reason, jamming is a frequent focus of Flash’s work. In a January post on his Telegram channel, for instance, which people viewed 48,000 times, Flash explained how jammers used by some Ukrainian tanks were actually disrupting their own communications. “The cause of the problems is not direct interference with the reception range of the radio station, but very powerful signals from several [electronic warfare] antennae,” he wrote, suggesting that other tank crews experiencing the same problem might try spreading their antennas across the body of the tank. 

It is all part of an existential race in which Russia and Ukraine are constantly hunting for new methods of drone operation, drone jamming, and counter-jamming—and there’s no end in sight. In March, for example, Flash says, a frontline contact sent him photos of a Russian drone with what looks like a 10-kilometer-long spool of fiber-optic cable attached to its rear—one particularly novel method to bypass Ukrainian jammers. “It’s really crazy,” Flash says. “It looks really strange, but Russia showed us that this was possible.”

Flash’s trips to the front line make it easier for him to track developments like this. Not only does he monitor Russian drone activity from his souped-up VW, but he can study the problems that soldiers face in situ and nurture relationships with people who may later send him useful intel—or even enemy equipment they’ve seized. “The main problem is that our generals are located in Kyiv,” Flash says. “They send some messages to the military but do not understand how these military people are fighting on the front.”

Besides the advice he provides to Ukrainian troops, Flash also publishes online his own manuals for building and operating equipment that can offer protection from drones. Building their own tools can be soldiers’ best option, since Western military technology is typically expensive and domestic production is insufficient. Flash recommends buying most of the parts on AliExpress, the Chinese e-commerce platform, to reduce costs.

While all his activity suggests a close or at least cooperative relationship between Flash and Ukraine’s military, he sometimes finds himself on the outside looking in. In a post on Telegram in May, as well as during one of our meetings, Flash shared one of his greatest disappointments of the war: the military’s refusal of his proposal to create a database of all the radio frequencies used by Ukrainian forces. But when I mentioned this to an employee of a major electronic warfare company, who requested anonymity to speak about the sensitive subject, he suggested that the only reason Flash still complains about this is that the military hasn’t told him it already exists. (Given its sensitivity, MIT Technology Review was unable to independently confirm the existence of this database.) 

Flash believes that generals in Kyiv “do not understand how these military people are fighting on the front.” So even though he doesn’t like the risks they involve, he takes trips to the front line about once a month.
EMRE ÇAYLAK

This anecdote is emblematic of Flash’s frustration with a military complex that may not always want his involvement. Ukraine’s armed forces, he has told me on several occasions, make no attempt to collaborate with him in an official manner. He claims not to receive any financial support, either. “I’m trying to help,” he says. “But nobody wants to help me.”

Both Flash and Yurii Pylypenko, another radio enthusiast who helps Flash manage his Telegram channel, say military officials have accused Flash of sharing too much information about Ukraine’s operations. Flash claims to verify every member of his closed Signal groups, which he says only discuss “technical issues” in any case. But he also admits the system is not perfect and that Russians could have gained access in the past. Several of the soldiers I interviewed for this story also claimed to have entered the groups without Flash’s verification process. 

It’s ultimately difficult to determine if some senior staff in the military hold Flash at arm’s length because of his regular, often strident criticism—or whether Flash’s criticism is the result of being held at arm’s length. But it seems unlikely either side’s grievances will subside soon; Pylypenko claims that senior officers have even tried to blackmail him over his involvement in Flash’s work. “They blame my help,” he wrote to me over Telegram, “because they think Serhii is a Russian agent reposting Russian propaganda.” 

Is the world prepared?

Flash’s greatest concern now is the prospect of Russia overwhelming Ukrainian forces with the cheap FPV drones. When they first started deploying FPVs, both sides were almost exclusively targeting expensive equipment. But as production has increased, they’re now using them to target individual soldiers, too. Because of Russia’s production superiority, this poses a serious danger—both physical and psychological—to Ukrainian soldiers. “Our army will be sitting under the ground because everybody who goes above ground will be killed,” Flash says. Some reports suggest that the prevalence of FPVs is already making it difficult for soldiers to expose themselves at all on the battlefield.

To combat this threat, Flash has a grand yet straightforward idea. He wants Ukraine to build a border “wall” of jamming systems that cover a broad range of the radio spectrum all along the front line. Russia has already done this itself with expensive vehicle-based systems, but these present easy targets for Ukrainian drones, which have destroyed several of them. Flash’s idea is to use a similar strategy, albeit with smaller, cheaper systems that are easier to replace. He claims, however, that military officials have shown no interest.

Although Flash is unwilling to divulge more details about this strategy (and who exactly he pitched it to), he believes that such a wall could provide a more sustainable means of protecting Ukrainian troops. Nevertheless, it’s difficult to say how long such a defense might last. Both sides are now in the process of developing artificial-intelligence programs that allow drones to lock on to targets while still outside enemy jamming range, rendering them jammer-proof when they come within it. Flash admits he is concerned—and he doesn’t appear to have a solution.

Flash admits he is worried about Russia overwhelming Ukrainian forces with the cheap FPV drones: “Our army will be sitting under the ground because everybody who goes above ground will be killed.”
EMRE ÇAYLAK

He’s not alone. The world is entirely unprepared for this new type of warfare, says Yaroslav Kalinin, a former Ukrainian intelligence officer and the CEO of Infozahyst, a manufacturer of equipment for electronic warfare. Kalinin recounts speaking at an electronic-warfare conference in Washington, DC, last December, where representatives from some Western defense companies weren’t able to recognize the basic radio signals emitted by different types of drones. “Governments don’t count [drones] as a threat,” he says. “I need to run through the streets like a prophet—the end is near!”

Nevertheless, Ukraine has become, in essence, a laboratory for a new era of drone warfare—and, many argue, a new era of warfare entirely. Ukraine’s and Russia’s soldiers are its technicians. And Flash, who sometimes sleeps curled up in the back of his van while on the road, is one of its most passionate researchers. “Military developers from all over the world come to us for experience and advice,” he says. Only time will tell whether their contributions will be enough to see Ukraine through to the other side of this war. 

Charlie Metcalfe is a British journalist. He writes for magazines and newspapers, including Wired, the Guardian, and MIT Technology Review.

Meet 2024’s climate innovators under 35

12 September 2024 at 11:00

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

One way to know where a field is going? Take a look at what the sharpest new innovators are working on.

Good news for all of us: MIT Technology Review’s list of 35 Innovators Under 35 just dropped. And a decent number of the people who made the list are working in fields that touch climate and energy in one way or another.

Looking through, I noticed a few trends that might provide some hints about the future of climate tech. Let’s dig into this year’s list and consider what these innovators’ work might mean for efforts to combat climate change.

Power to the people

Perhaps unsurprisingly, quite a few innovators on this list are working on energy—and many of them have an interest in making energy consistently available where and when it’s needed. Wind and solar are getting cheap, but we need solutions for when the sun isn’t shining and the wind isn’t blowing.

Tim Latimer cofounded Fervo Energy, a geothermal company hoping to provide consistently available, carbon-free energy using Earth’s heat. You may be familiar with his work, since Fervo was on our list of 15 Climate Tech Companies to Watch in 2023.

Another energy-focused innovator on the list is Andrew Ponec of Antora Energy, a company working to build thermal energy storage systems. Basically, the company’s technology heats up blocks when cheap renewables are available, and then stores that heat and delivers it to industrial processes that need constant power. (You, the readers, named thermal energy storage the readers’ choice on this year’s 10 Breakthrough Technologies list.)

Rock stars

While new ways of generating electricity and storing energy can help cut our emissions in the future, other people are focused on how to clean up the greenhouse gases already in the atmosphere. At this point, removing carbon dioxide from the atmosphere is basically required for any scenario where we limit warming to 1.5 °C over preindustrial levels. A few of the new class of innovators are turning to rocks for help soaking up and locking away atmospheric carbon. 

Noah McQueen cofounded Heirloom Carbon Technologies, a carbon removal company. The technology works by tweaking the way minerals soak up carbon dioxide from the air (before releasing it under controlled conditions, so they can do it all again). The company has plans for facilities that could remove hundreds of thousands of tons of carbon dioxide each year. 

Another major area of research focuses on how we might store captured carbon dioxide. Claire Nelson is the cofounder of Cella Mineral Storage, a company working on storage methods to better trap carbon dioxide underground once it’s been mopped up.  

Material world

Finally, some of the most interesting work on our new list of innovators is in materials. Some people are finding new ones that could help us address our toughest problems, and others are trying to reinvent old ones to clean up their climate impacts.

Julia Carpenter found a way to make a foam-like material from metal. Its high surface area makes it a stellar heat sink, meaning it can help cool things down efficiently. It could be a huge help in data centers, where 40% of energy demand goes to cooling.

And I spoke with Cody Finke, cofounder and CEO of Brimstone, a company working on cleaner ways of making cement. Cement alone is responsible for nearly 7% of global greenhouse-gas emissions, and about half of those come from chemical reactions necessary to make it. Finke and Brimstone are working to wipe out the need for these reactions by using different starting materials to make this crucial infrastructural glue.
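
That “about half” figure can be sanity-checked with the stoichiometry of calcination, the reaction at the heart of clinker-making: CaCO3 → CaO + CO2. The short script below assumes a typical clinker that is roughly 65% CaO by mass and total cement emissions of roughly 0.9 kg of CO2 per kilogram; both are standard ballpark figures, not numbers taken from the article.

```python
# Back-of-envelope check on cement's "process" emissions, using the
# calcination reaction CaCO3 -> CaO + CO2. All inputs are ballpark figures.

M_CAO, M_CO2 = 56.08, 44.01          # molar masses, g/mol
CAO_FRACTION = 0.65                  # assumed CaO mass share of clinker
TOTAL_CO2_PER_KG = 0.9               # assumed total kg CO2 per kg cement

process_co2 = (M_CO2 / M_CAO) * CAO_FRACTION   # kg CO2 per kg clinker
share = process_co2 / TOTAL_CO2_PER_KG

print(f"Calcination alone: ~{process_co2:.2f} kg CO2/kg clinker")
print(f"Share of total emissions: ~{share:.0%}")  # roughly half, as the article says
```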

Addressing climate change is a sprawling challenge, but the researchers and founders on this list are tackling a few of the biggest issues I think about every day. 

Ensuring that we can power our grid, and all the industrial processes that we rely on for the stuff in our daily lives, is one of the most substantial remaining challenges. Removing carbon dioxide from the atmosphere in an efficient, cheap process could help limit future warming and buy us time to clean up the toughest sectors. And finding new materials, and new methods of producing old ones, could be a major key to unlocking new climate solutions. 

To read more about the folks I mentioned here and other innovators working in climate change and beyond, check out the full list.


Now read the rest of The Spark

Related reading

Fervo Energy (cofounded by 2024 innovator Tim Latimer) showed last year that its wells can be used like a giant underground battery.

A growing number of companies—including Antora Energy, whose CEO Andrew Ponec is a 2024 innovator—are working to bring thermal energy storage systems to heavy industry.

Cement is one of our toughest challenges, as Brimstone CEO and 2024 innovator Cody Finke will tell you. I wrote about Brimstone and other efforts to reinvent cement earlier this year.


Another thing

We need a whole lot of metals to address climate change, from the copper in transmission lines to the nickel in lithium-ion batteries that power electric vehicles. Some researchers think plants might be able to help. 

Roughly 750 species of plants are so-called hyperaccumulators, meaning they naturally soak up and tolerate relatively high concentrations of metal. A new program is funding research into how we might use this trait to help source nickel, and potentially other metals, in the future. Read the full story here.

Keeping up with climate  

A hurricane that recently formed in the Gulf of Mexico is headed for Louisiana, ending an eerily quiet few weeks of the season. (Scientific American)

→ After forecasters predicted a particularly active season, the lull in hurricane activity was surprising. (New Scientist)

Rising sea levels are one of the symptoms of a changing climate, but nailing down exactly what “sea level” means is more complicated than you might think. We’ve gotten better at measuring sea level over the past few centuries, though. (New Yorker)

The US Department of Energy’s Loan Programs Office has nearly $400 billion in lending authority. This year’s election could shift the focus of that office drastically, making it a bellwether of how the results could affect energy priorities. (Bloomberg)

What if fusion power ends up working, but it’s too expensive to play a significant role on the grid? Some modelers think the technology will remain expensive and could come too late to make a dent in emissions. (Heatmap)

Electric-vehicle sales are up overall, but some major automakers are backing away from goals on zero-emissions vehicles. Even though sales are increasing, uptake is slower than many thought it would be, contributing to the nervous energy in the industry. (Canary Media)

It’s a tough time to be in the business of next-generation batteries. The woes of three startups reveal that difficult times are here, likely for a while. (The Information)
