The complexity of biology has long been a double-edged sword for scientific and medical progress. On one hand, the intricacy of systems (like the human immune response) offers countless opportunities for breakthroughs in medicine and healthcare. On the other hand, that very complexity has often stymied researchers, leaving some of the most significant medical challenges—like cancer or autoimmune diseases—without clear solutions.
The field needs a way to decipher this incredible complexity. Could the rise of agentic AI, artificial intelligence capable of autonomous decision-making and action, be the key to breaking through this impasse?
Agentic AI is not just another tool in the scientific toolkit but a paradigm shift: by allowing autonomous systems to not only collect and process data but also to independently hypothesize, experiment, and even make decisions, agentic AI could fundamentally change how we approach biology.
The mind-boggling complexity of biological systems
To understand why agentic AI holds so much promise, we first need to grapple with the scale of the challenge. Biological systems, particularly human ones, are incredibly complex—layered, dynamic, and interdependent. Take the immune system, for example. It operates simultaneously across multiple levels, from individual molecules to entire organs, adapting and responding to internal and external stimuli in real time.
Traditional research approaches, while powerful, struggle to account for this vast complexity. The problem lies in the sheer volume and interconnectedness of biological data. The immune system alone involves interactions between millions of cells, proteins, and signaling pathways, each influencing the others in real time. Making sense of this tangled web is a near-impossible task for human researchers.
Enter AI agents: How can they help?
This is where agentic AI steps in. Unlike traditional machine learning models, which require vast amounts of curated data and are typically designed to perform specific, narrow tasks, agentic AI systems can ingest unstructured and diverse datasets from multiple sources and can operate autonomously with a more generalist approach.
Beyond this, AI agents are unbound by conventional scientific thinking. They can connect disparate domains and test seemingly improbable hypotheses that may reveal novel insights. What might initially appear as a counterintuitive series of experiments could help uncover hidden patterns or mechanisms, generating new knowledge that can form the foundation for breakthroughs in areas like drug discovery, immunology, or precision medicine.
These experiments are executed at unprecedented speed and scale through robotic, fully automated laboratories, where AI agents conduct trials in a continuous, round-the-clock workflow. These labs, equipped with advanced automation technologies, can handle everything from ordering reagents and preparing biological samples to conducting high-throughput screenings. In particular, the use of patient-derived organoids—3D miniaturized versions of organs and tissues—enables AI-driven experiments to more closely mimic the real-world conditions of human biology. This integration of agentic AI and robotic labs allows for large-scale exploration of complex biological systems, and has the potential to rapidly accelerate the pace of discovery.
From agentic AI to AGI
As agentic AI systems become more sophisticated, some researchers believe they could pave the way for artificial general intelligence (AGI) in biology. While AGI—machines with the capacity for general intelligence equivalent to humans—remains a distant goal in the broader AI community, biology may be one of the first fields to approach this threshold.
Why? Because understanding biological systems demands exactly the kind of flexible, goal-directed thinking that defines AGI. Biology is full of uncertainty, dynamic systems, and open-ended problems. If we build AI that can autonomously navigate this space—making decisions, learning from failure, and proposing innovative solutions—we might be building AGI specifically tailored to the life sciences.
Owkin’s next frontier: Unlocking the immune system with agentic AI
Agentic AI has already begun pushing the boundaries of what’s possible in biology, but the next frontier lies in fully decoding one of the most complex and crucial systems in human health: the immune system. Owkin is building the foundations for an advanced form of intelligence—an AGI—capable of understanding the immune system in unprecedented detail. The next evolution of our AI ecosystem, called Owkin K, could redefine how we understand, detect, and treat immune-related diseases like cancer and immuno-inflammatory disorders.
Owkin K envisions a coordinated community of specialized AI agents that can autonomously access and interpret the scientific literature and large-scale biomedical data, and tap into the power of Owkin’s discovery engines. These agents are capable of planning and executing experiments in fully automated, robotized wet labs, where patient-derived organoids simulate real-world human biology. The results of these experiments feed back into the system, enabling continuous learning and refinement of the AI agents’ models.
What makes Owkin K particularly exciting is its potential to tackle the immune system—a biological network so complex that human intelligence alone has struggled to unravel it. By deploying AI agents with the ability to explore this intricate web autonomously, the project could reveal new therapeutic targets and strategies for immuno-oncology and autoimmune diseases, potentially accelerating the development of groundbreaking treatments.
Navigating challenges and ethical considerations of agentic AI
Of course, such powerful technology comes with significant challenges and ethical considerations, including trust, security, and transparency.
But we must tackle these challenges as agentic AI becomes more integrated into healthcare and research. For example, we can develop mitigation plans that include rigorous validation protocols, real-time human oversight, and regulatory frameworks designed to ensure safety, accountability, and transparency. By prioritizing ethical design and close collaboration between AI systems and human experts, we can harness the potential of agentic AI while minimizing its risks.
The future of biological research with agentic AI
Agentic AI has the potential to reshape not just healthcare, but the very foundations of biological research. By allowing autonomous systems to explore the unknown, we may unlock new levels of understanding in areas like immunology, neuroscience, and genomics—fields that are currently constrained by the limits of human comprehension.
We could soon see a world where AI-driven labs operate around the clock, pushing the boundaries of biology at speeds and scales that far exceed human capabilities. This would not only accelerate scientific discovery but also create new possibilities for personalized medicine, disease prevention, and even longevity.
In the end, agentic AI may be more than just another tool for researchers. It could be the key to understanding life itself—one autonomous decision at a time.
Davide Mantiero, PhD, Eric Durand, PhD, and Darius Meadon also contributed to this article.
This content was produced by Owkin. It was not written by MIT Technology Review’s editorial staff.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The AI lab waging a guerrilla war over exploitative AI
Back in 2022, the tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI’s DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados.
But artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work.
Ben Zhao, a computer security researcher at the University of Chicago, was listening. He and his colleagues have built arguably the most prominent weapons in an artist’s arsenal against nonconsensual AI scraping: two tools called Glaze and Nightshade that add barely perceptible perturbations to an image’s pixels so that machine-learning models cannot read them properly.
But Zhao sees the tools as part of a battle to slowly tilt the balance of power from large corporations back to individual creators. Read the full story.
—Melissa Heikkilä
Have we entered the golden age of plant engineering?
In the 1960s, biologists’ selective breeding of plants helped spark a period of transformative agricultural innovation known as the Green Revolution. By the 1990s, the yields of wheat and rice had doubled worldwide, staving off bouts of recurring famine.
The Green Revolution was so successful that dire predictions of worse famine to come—fueled by alarming population growth—no longer seemed likely. But it had its limits—only so much yield could be coaxed from plants using conventional breeding techniques.
Now, more precise gene-editing technologies could shave years off the time it takes for new plant varieties to make it from the lab to federally approved seed products. Read the full story.
—Bill Gourgey
This piece is from the latest print issue of MIT Technology Review, which is all about the weird and wonderful world of food. If you don’t already, subscribe to receive future copies once they land.
MIT Technology Review Narrated: Is robotics about to have its own ChatGPT moment?
Robots that can do many of the things humans do in the home have been a dream of robotics research since the inception of the field in the 1950s.
While engineers have made great progress in getting robots to work in tightly controlled environments like labs and factories, the home has proved difficult to design for. But now, the field is at an inflection point. A new generation of researchers believes that generative AI could give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into the mainstream.
This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Donald Trump wants Elon Musk to maximize government efficiency
Despite claiming to be a department, technically it’s more of an advisory board. (Wired $)
+ It will allegedly operate outside of the federal government. (WSJ $)
+ Expect Musk to treat the US government like his loss-making social network. (Bloomberg $)

2 The crypto industry has already started lobbying Trump
Executives are wasting no time in presenting the President-elect with their wish lists. (NYT $)
+ We’re witnessing the industry’s nascent attempts to make itself institutional. (NY Mag $)
+ The Trump Pump is showing no signs of slowing. (CNN)

3 Advertisers are considering staging a return to X
In a bid to curry favor with Musk and his political leverage. (FT $)
+ Silicon Valley is decidedly more Trump-friendly than it used to be. (Insider $)
+ Bluesky is starting to look more and more appealing. (Slate $)

4 Major AI players are struggling to make new breakthroughs
Funneling money into new products isn’t having the desired result. (Bloomberg $)

5 The world’s e-waste is actually pretty valuable
There’s a lot of gold to be stripped out from those old circuit boards. (Economist $)
+ AI will add to the e-waste problem. Here’s what we can do about it. (MIT Technology Review)

6 DNA testing is ushering in a new age of discrimination
And you could be denied medical or life insurance because of it. (The Atlantic $)
+ How to… delete your 23andMe data. (MIT Technology Review)

7 How to build the perfect humanoid robot
Unfortunately, they’ll be found in factories and warehouses before they make it to our homes. (IEEE Spectrum)
+ A skeptic’s guide to humanoid-robot videos. (MIT Technology Review)

8 The US is using AI to seek out critical minerals
Access to regular supplies could lessen its reliance on China and Russia. (Undark Magazine)
+ The race to produce rare earth elements. (MIT Technology Review)

9 Apple’s AirTags can now share their location with airlines
Which should (hopefully) minimize the chances of losing your luggage. (WP $)
+ Its next device? An AI wall-mounted tablet, supposedly. (Bloomberg $)

10 This new mathematics benchmark is being kept secret
To prevent AI models from training against it. (Ars Technica)
+ This AI system makes human tutors better at teaching children math. (MIT Technology Review)
Quote of the day
“Don’t bring a watermark to a gunfight.”
—AI researcher Oren Etzioni warns the industry against putting too much faith in voluntary standards to prevent malicious actors from gaming the system, TechCrunch reports.
The big story
The great AI consciousness conundrum
October 2023
AI consciousness isn’t just a devilishly tricky intellectual puzzle; it’s a morally weighty problem with potentially dire consequences that philosophers, cognitive scientists, and engineers alike are currently grappling with.
Fail to identify a conscious AI, and you might unintentionally subjugate a being whose interests ought to matter. Mistake an unconscious AI for a conscious one, and you risk compromising human safety and happiness for the sake of an unthinking, unfeeling hunk of silicon and code.
Over the past few decades, a small research community has doggedly attacked the question of what consciousness is and how it works. The effort has yielded real progress. And now, with the rapid advance of AI technology, these insights could offer our only guide to the untested, morally fraught waters of artificial consciousness. Read the full story.
—Grace Huckins
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ Small changes can improve your life, from debobbling your clothes to oiling your keyholes.
+ Whoa: these fascinating deep sea creatures can turn back the clock on aging and revert to a more youthful form.
+ TikTok is really into… onions. Yes, onions.
+ As if filmmaking wasn’t stressful enough, these movies were all completed in a single take.
Ben Zhao remembers well the moment he officially jumped into the fight between artists and generative AI: when one artist asked for AI bananas.
A computer security researcher at the University of Chicago, Zhao had made a name for himself by building tools to protect images from facial recognition technology. It was this work that caught the attention of Kim Van Deun, a fantasy illustrator who invited him to a Zoom call in November 2022 hosted by the Concept Art Association, an advocacy organization for artists working in commercial media.
On the call, artists shared details of how they had been hurt by the generative AI boom, which was then brand new. At that moment, AI was suddenly everywhere. The tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI’s DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados.
But these artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work. Some had found that their art had been scraped off the internet and used to train the models, while others had discovered that their own names had become prompts, causing their work to be drowned out online by AI knockoffs.
Zhao remembers being shocked by what he heard. “People are literally telling you they’re losing their livelihoods,” he told me one afternoon this spring, sitting in his Chicago living room. “That’s something that you just can’t ignore.”
So on the Zoom, he made a proposal: What if, hypothetically, it was possible to build a mechanism that would help mask their art to interfere with AI scraping?
“I would love a tool that if someone wrote my name and made a prompt, like, garbage came out,” responded Karla Ortiz, a prominent digital artist. “Just, like, bananas or some weird stuff.”
That was all the convincing Zhao needed—the moment he joined the cause.
Fast-forward to today, and millions of artists have deployed two tools born from that Zoom: Glaze and Nightshade, which were developed by Zhao and the University of Chicago’s SAND Lab (an acronym for “security, algorithms, networking, and data”).
Arguably the most prominent weapons in an artist’s arsenal against nonconsensual AI scraping, Glaze and Nightshade work in similar ways: by adding what the researchers call “barely perceptible” perturbations to an image’s pixels so that machine-learning models cannot read them properly. Glaze, which has been downloaded more than 6 million times since it launched in March 2023, adds what’s effectively a secret cloak to images that prevents AI algorithms from picking up on and copying an artist’s style. Nightshade, which I wrote about when it was released almost exactly a year ago this fall, cranks up the offensive against AI companies by adding an invisible layer of poison to images, which can break AI models; it has been downloaded more than 1.6 million times.
Thanks to the tools, “I’m able to post my work online,” Ortiz says, “and that’s pretty huge.” For artists like her, being seen online is crucial to getting more work. If they are uncomfortable about ending up in a massive for-profit AI model without compensation, the only option is to delete their work from the internet. That would mean career suicide. “It’s really dire for us,” adds Ortiz, who has become one of the most vocal advocates for fellow artists and is part of a class action lawsuit against AI companies, including Stability AI, over copyright infringement.
But Zhao hopes that the tools will do more than empower individual artists. Glaze and Nightshade are part of what he sees as a battle to slowly tilt the balance of power from large corporations back to individual creators.
“It is just incredibly frustrating to see human life be valued so little,” he says with a disdain that I’ve come to see as pretty typical for him, particularly when he’s talking about Big Tech. “And to see that repeated over and over, this prioritization of profit over humanity … it is just incredibly frustrating and maddening.”
As the tools are adopted more widely, his lofty goal is being put to the test. Can Glaze and Nightshade make genuine security accessible for creators—or will they inadvertently lull artists into believing their work is safe, even as the tools themselves become targets for haters and hackers? While experts largely agree that the approach is effective and Nightshade could prove to be powerful poison, other researchers claim they’ve already poked holes in the protections offered by Glaze and that trusting these tools is risky.
But Neil Turkewitz, a copyright lawyer who used to work at the Recording Industry Association of America, offers a more sweeping view of the fight the SAND Lab has joined. It’s not about a single AI company or a single individual, he says: “It’s about defining the rules of the world we want to inhabit.”
Poking the bear
The SAND Lab is tight-knit: a dozen or so researchers crammed into a corner of the University of Chicago’s computer science building. That space has accumulated somewhat typical workplace detritus—a Meta Quest headset here, silly photos of dress-up from Halloween parties there. But the walls are also covered in original art pieces, including a framed painting by Ortiz.
Years before fighting alongside artists like Ortiz against “AI bros” (to use Zhao’s words), Zhao and the lab’s co-leader, Heather Zheng, who is also his wife, had built a record of combating harms posed by new tech.
Though both earned spots on MIT Technology Review’s 35 Innovators Under 35 list for other work nearly two decades ago, when they were at the University of California, Santa Barbara (Zheng in 2005 for “cognitive radios” and Zhao a year later for peer-to-peer networks), their primary research focus has become security and privacy.
The pair left Santa Barbara in 2017, after they were poached by the new co-director of the University of Chicago’s Data Science Institute, Michael Franklin. All eight PhD students from their UC Santa Barbara lab decided to follow them to Chicago too. Since then, the group has developed a “bracelet of silence” that jams the microphones in AI voice assistants like the Amazon Echo. It has also created a tool called Fawkes—“privacy armor,” as Zhao put it in a 2020 interview with the New York Times—that people can apply to their photos to protect them from facial recognition software. They’ve also studied how hackers might steal sensitive information through stealth attacks on virtual-reality headsets, and how to distinguish human art from AI-generated images.
“Ben and Heather and their group are kind of unique because they’re actually trying to build technology that hits right at some key questions about AI and how it is used,” Franklin tells me. “They’re doing it not just by asking those questions, but by actually building technology that forces those questions to the forefront.”
It was Fawkes that intrigued Van Deun, the fantasy illustrator, two years ago; she hoped something similar might work as protection against generative AI, which is why she extended that fateful invite to the Concept Art Association’s Zoom call.
That call started something of a mad rush in the weeks that followed. Though Zhao and Zheng collaborate on all the lab’s projects, they each lead individual initiatives; Zhao took on what would become Glaze, with PhD student Shawn Shan (who was on this year’s Innovators Under 35 list) spearheading the development of the program’s algorithm.
In parallel to Shan’s coding, PhD students Jenna Cryan and Emily Wenger sought to learn more about the views and needs of the artists themselves. They created a user survey that the team distributed to artists with the help of Ortiz. In replies from more than 1,200 artists—far more than the average number of responses to user studies in computer science—the team found that the vast majority of creators had read about art being used to train models, and 97% expected AI to decrease some artists’ job security. A quarter said AI art had already affected their jobs.
Almost all artists also said they posted their work online, and more than half said they anticipated reducing or removing that online work, if they hadn’t already—no matter the professional and financial consequences.
The first scrappy version of Glaze was developed in just a month, at which point Ortiz gave the team her entire catalogue of work to test the model on. At the most basic level, Glaze acts as a defensive shield. Its algorithm identifies features from the image that make up an artist’s individual style and adds subtle changes to them. When an AI model is trained on images protected with Glaze, the model will not be able to reproduce styles similar to the original image.
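The core constraint behind such a cloak can be sketched in a few lines. To be clear, this is a toy illustration, not Glaze’s actual algorithm: the real tool optimizes its perturbation against an image feature extractor to disguise style, which this sketch leaves abstract, and the names (`cloak`, `epsilon`) are hypothetical.

```python
import numpy as np

def cloak(image: np.ndarray, perturbation: np.ndarray,
          epsilon: float = 8 / 255) -> np.ndarray:
    """Add a perturbation to an image while keeping each pixel's change
    within a small, barely perceptible budget (epsilon).

    `image` is expected as floats in [0, 1]. How the perturbation is
    chosen (Glaze optimizes it against a feature extractor) is left
    abstract here; this only shows the budget constraint.
    """
    delta = np.clip(perturbation, -epsilon, epsilon)  # enforce the budget
    return np.clip(image + delta, 0.0, 1.0)           # stay a valid image

# Example: a random perturbation, mostly exceeding the budget before clipping.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
delta = rng.normal(scale=0.1, size=img.shape)
cloaked = cloak(img, delta)
```

The budget is what makes the change “barely perceptible”: no pixel moves by more than epsilon, yet across millions of pixels the accumulated shift can still mislead a model’s feature extractor.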
A painting from Ortiz later became the first image publicly released with Glaze on it: a young woman, surrounded by flying eagles, holding up a wreath. Its title is Musa Victoriosa, “victorious muse.”
It’s the one currently hanging on the SAND Lab’s walls.
Despite many artists’ initial enthusiasm, Zhao says, Glaze’s launch caused significant backlash. Some artists were skeptical because they were worried this was a scam or yet another data-harvesting campaign.
The lab had to take several steps to build trust, such as offering the option to download the Glaze app so that it adds the protective layer offline, meaning no data is transferred anywhere. (Artists then upload the already shielded images.)
Soon after Glaze’s launch, Shan also led the development of the second tool, Nightshade. Where Glaze is a defensive mechanism, Nightshade was designed to act as an offensive deterrent to nonconsensual training. It works by changing the pixels of images in ways that are not noticeable to the human eye but manipulate machine-learning models so they interpret the image as something different from what it actually shows. If poisoned samples are scraped into AI training sets, these samples trick the AI models: Dogs become cats, handbags become toasters. The researchers say relatively few examples are enough to permanently damage the way a generative AI model produces images.
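The poisoning effect is easiest to see in a deliberately tiny model. The sketch below is not Nightshade’s method (which perturbs pixels against a generative model’s image encoder); it is a toy nearest-centroid setup in a made-up 2-D feature space, showing how a handful of samples labeled one concept but carrying another concept’s features drags the model’s notion of that concept off target.

```python
import numpy as np

def centroid(features: np.ndarray) -> np.ndarray:
    """Mean feature vector a toy model associates with one concept."""
    return features.mean(axis=0)

# Clean "dog" training features cluster near (1, 0); "cat" features near
# (0, 1). Real models learn far richer representations; 2-D vectors just
# make the drift easy to see.
rng = np.random.default_rng(1)
dog_clean = rng.normal(loc=(1.0, 0.0), scale=0.05, size=(50, 2))
cat_like = rng.normal(loc=(0.0, 1.0), scale=0.05, size=(8, 2))

# Poisoned samples: labeled "dog" but carrying cat-like features, the way
# a shaded image looks normal to humans but not to models.
dog_poisoned = np.vstack([dog_clean, cat_like])

before = centroid(dog_clean)
after = centroid(dog_poisoned)

# The "dog" concept drifts toward the cat region of feature space.
drift = float(np.linalg.norm(after - before))
```

Eight poisoned samples out of fifty-eight are enough to visibly shift the centroid here, which mirrors the researchers’ claim that a small fraction of poisoned images can corrupt what a model associates with a concept.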
Currently, both tools are available as free apps or can be applied through the project’s website. The lab has also recently expanded its reach by offering integration with the new artist-supported social network Cara, which was born out of a backlash to exploitative AI training and forbids AI-produced content.
In dozens of conversations with Zhao and the lab’s researchers, as well as a handful of their artist-collaborators, it’s become clear that both groups now feel they are aligned in one mission. “I never expected to become friends with scientists in Chicago,” says Eva Toorenent, a Dutch artist who worked closely with the team on Nightshade. “I’m just so happy to have met these people during this collective battle.”
Her painting Belladonna, which is also another name for the nightshade plant, was the first image with Nightshade’s poison on it.
“It’s so symbolic,” she says. “People taking our work without our consent, and then taking our work without consent can ruin their models. It’s just poetic justice.”
No perfect solution
The reception of the SAND Lab’s work has been less harmonious across the AI community.
After Glaze was made available to the public, Zhao tells me, someone reported it to sites like VirusTotal, which tracks malware, so that it was flagged by antivirus programs. Several people also started claiming on social media that the tool had quickly been broken. Nightshade similarly got its fair share of criticism when it launched; as TechCrunch reported in January, some called it a “virus” and, as the story explains, “another Reddit user who inadvertently went viral on X questioned Nightshade’s legality, comparing it to ‘hacking a vulnerable computer system to disrupt its operation.’”
“We had no idea what we were up against,” Zhao tells me. “Not knowing who or what the other side could be meant that every single new buzzing of the phone meant that maybe someone did break Glaze.”
Both tools, though, have gone through rigorous academic peer review and have won recognition from the computer security community. Nightshade was accepted at the IEEE Symposium on Security and Privacy, and Glaze received a distinguished paper award and the 2023 Internet Defense Prize at the Usenix Security Symposium, a top conference in the field.
“In my experience working with poison, I think [Nightshade is] pretty effective,” says Nathalie Baracaldo, who leads the AI security and privacy solutions team at IBM and has studied data poisoning. “I have not seen anything yet—and the word yet is important here—that breaks that type of defense that Ben is proposing.” And the fact that the team has released the source code for Nightshade for others to probe, and it hasn’t been broken, also suggests it’s quite secure, she adds.
At the same time, at least one team of researchers does claim to have penetrated the protections of Glaze, or at least an old version of it.
As researchers from Google DeepMind and ETH Zurich detailed in a paper published in June, they found various ways Glaze (as well as similar but less popular protection tools, such as Mist and Anti-DreamBooth) could be circumvented using off-the-shelf techniques that anyone could access—such as image upscaling, meaning filling in pixels to increase the resolution of an image as it’s enlarged. The researchers write that their work shows the “brittleness of existing protections” and warn that “artists may believe they are effective. But our experiments show they are not.”
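A toy version of this resampling attack shows why high-frequency perturbations are fragile. This is an assumption-laden simplification: the paper used off-the-shelf upscalers, while the sketch below stands in for them with plain 2×2 average pooling followed by nearest-neighbor upsampling, and uses a checkerboard pattern as a stand-in for a pixel-level cloak.

```python
import numpy as np

def down_up(image: np.ndarray) -> np.ndarray:
    """Crudely 'purify' an image: 2x2 average pooling, then
    nearest-neighbor upsampling back to the original size."""
    h, w = image.shape
    pooled = image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, 2, axis=0), 2, axis=1)

# A high-frequency checkerboard stands in for a pixel-level cloak.
clean = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
checker = 0.03 * ((-1.0) ** np.add.outer(np.arange(64), np.arange(64)))
cloaked = clean + checker

# Every 2x2 block averages +0.03 against -0.03, so the perturbation
# cancels and the purified images coincide.
purified_cloaked = down_up(cloaked)
purified_clean = down_up(clean)
```

The point is not that real protections alternate pixels this neatly, but that any signal living mostly in the highest spatial frequencies can be averaged away by resampling that barely changes how the image looks to a person.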
Florian Tramèr, an associate professor at ETH Zurich who was part of the study, acknowledges that it is “very hard to come up with a strong technical solution that ends up really making a difference here.” Rather than any individual tool, he ultimately advocates for an almost certainly unrealistic ideal: stronger policies and laws to help create an environment in which people commit to buying only human-created art.
What happened here is common in security research, notes Baracaldo: A defense is proposed, an adversary breaks it, and—ideally—the defender learns from the adversary and makes the defense better. “It’s important to have both ethical attackers and defenders working together to make our AI systems safer,” she says, adding that “ideally, all defenses should be publicly available for scrutiny,” which would both “allow for transparency” and help avoid creating a false sense of security. (Zhao, though, tells me the researchers have no intention to release Glaze’s source code.)
Still, even as all these researchers claim to support artists and their art, such tests hit a nerve for Zhao. In Discord chats that were later leaked, he claimed that one of the researchers from the ETH Zurich–Google DeepMind team “doesn’t give a shit” about people. (That researcher did not respond to a request for comment, but in a blog post he said it was important to break defenses in order to know how to fix them. Zhao says his words were taken out of context.)
Zhao also emphasizes to me that the paper’s authors mainly evaluated an earlier version of Glaze; he says its new update is more resistant to tampering. Messing with images that have current Glaze protections would harm the very style that is being copied, he says, making such an attack useless.
This back-and-forth reflects a significant tension in the computer security community and, more broadly, the often adversarial relationship between different groups in AI. Is it wrong to give people the feeling of security when the protections you’ve offered might break? Or is it better to have some level of protection—one that raises the threshold for an attacker to inflict harm—than nothing at all?
Yves-Alexandre de Montjoye, an associate professor of applied mathematics and computer science at Imperial College London, says there are plenty of examples where similar technical protections have failed to be bulletproof. For example, in 2023, de Montjoye and his team probed a digital mask for facial recognition algorithms, which was meant to protect the privacy of medical patients’ facial images; they were able to break the protections by tweaking just one thing in the program’s algorithm (which was open source).
Using such defenses is still sending a message, he says, and adding some friction to data profiling. “Tools such as TrackMeNot”—which protects users from data profiling—“have been presented as a way to protest; as a way to say I do not consent.”
“But at the same time,” he argues, “we need to be very clear with artists that it is removable and might not protect against future algorithms.”
While Zhao will admit that the researchers pointed out some of Glaze’s weak spots, he unsurprisingly remains confident that Glaze and Nightshade are worth deploying, given that “security tools are never perfect.” Indeed, as Baracaldo points out, the Google DeepMind and ETH Zurich researchers showed how a highly motivated and sophisticated adversary will almost always find a way in.
Yet it is “simplistic to think that if you have a real security problem in the wild and you’re trying to design a protection tool, the answer should be it either works perfectly or don’t deploy it,” Zhao says, citing spam filters and firewalls as examples. Defense is a constant cat-and-mouse game. And he believes most artists are savvy enough to understand the risk.
Offering hope
The fight between creators and AI companies is fierce. The current paradigm in AI is to build bigger and bigger models, and there is, at least currently, no getting around the fact that they require vast data sets hoovered from the internet to train on. Tech companies argue that anything on the public internet is fair game, and that it is “impossible” to build advanced AI tools without copyrighted material; many artists argue that tech companies have stolen their intellectual property and violated copyright law, and that they need ways to keep their individual works out of the models—or at least receive proper credit and compensation for their use.
So far, the creatives aren’t exactly winning. A number of companies have already replaced designers, copywriters, and illustrators with AI systems. In one high-profile case, Marvel Studios used AI-generated imagery instead of human-created art in the title sequence of its 2023 TV series Secret Invasion. In another, a radio station fired its human presenters and replaced them with AI. The technology has become a major bone of contention between unions and film, TV, and creative studios, most recently leading to a strike by video-game performers. There are numerous ongoing lawsuits by artists, writers, publishers, and record labels against AI companies. It will likely take years until there is a clear-cut legal resolution. But even a court ruling won’t necessarily untangle the difficult ethical questions created by generative AI. Any future government regulation is not likely to either, if it ever materializes.
That’s why Zhao and Zheng see Glaze and Nightshade as necessary interventions—tools to defend original work, attack those who would help themselves to it, and, at the very least, buy artists some time. Having a perfect solution is not really the point. The researchers need to offer something now, Zheng says, because the AI sector’s breakneck pace means that companies are ignoring very real harms to humans. “This is probably the first time in our entire technology careers that we actually see this much conflict,” she adds.
On a much grander scale, she and Zhao tell me they hope that Glaze and Nightshade will eventually have the power to overhaul how AI companies use art and how their products produce it. It is eye-wateringly expensive to train AI models, and it’s extremely laborious for engineers to find and purge poisoned samples in a data set of billions of images. Theoretically, if there are enough Nightshaded images on the internet and tech companies see their models breaking as a result, it could push developers to the negotiating table to bargain over licensing and fair compensation.
That’s, of course, still a big “if.” MIT Technology Review reached out to several AI companies, such as Midjourney and Stability AI, which did not reply to requests for comment. A spokesperson for OpenAI, meanwhile, did not confirm any details about encountering data poison but said the company takes the safety of its products seriously and is continually improving its safety measures: “We are always working on how we can make our systems more robust against this type of abuse.”
In the meantime, the SAND Lab is moving ahead and looking into funding from foundations and nonprofits to keep the project going. They say there has also been interest from major companies looking to protect their intellectual property (though they decline to say which), and Zhao and Zheng are exploring how the tools could be applied in other industries, such as gaming, videos, or music. Meanwhile, they plan to keep updating Glaze and Nightshade to be as robust as possible, working closely with the students in the Chicago lab—where, on another wall, hangs Toorenent’s Belladonna. The painting has a heart-shaped note stuck to the bottom right corner: “Thank you! You have given hope to us artists.”
This story has been updated with the latest download figures for Glaze and Nightshade.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Generative AI taught a robot dog to scramble around a new environment
Teaching robots to navigate new environments is tough. You can train them on physical, real-world data taken from recordings made by humans, but that’s scarce, and expensive to collect. Digital simulations are a rapid, scalable way to teach them to do new things, but the robots often fail when they’re pulled out of virtual worlds and asked to do the same tasks in the real one.
Now there’s potentially a better option: a new system that uses generative AI models in conjunction with a physics simulator to develop virtual training grounds that more accurately mirror the physical world. Robots trained using this method achieved a higher success rate in real-world tests than those trained using more traditional techniques.
Researchers used the system, called LucidSim, to train a robot dog in parkour, getting it to scramble over a box and climb stairs, despite never seeing any real-world data. The approach demonstrates how helpful generative AI could be when it comes to teaching robots to do challenging tasks. It also raises the possibility that we could ultimately train them in entirely virtual worlds. Read the full story.
—Rhiannon Williams
Africa’s AI researchers are ready for takeoff
When we talk about the global race for AI dominance, the conversation often focuses on tensions between the US and China, and European efforts at regulating the technology. But it’s high time we talk about another player: Africa.
African AI researchers are forging their own path, developing tools that answer the needs of Africans, in their own languages. Their story is not only one of persistence and innovation, but of preserving cultures and fighting to shape how AI technologies are used on their own continent. However, they face many barriers. Read the full story.
—Melissa Heikkilä
This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 How Silicon Valley is planning to work with Donald Trump Avoiding antitrust regulation and boosting growth are at the top of Big Tech’s agenda. (WP $) + Tech executives overwhelmingly supported Kamala Harris. (Vox) + Trump’s policies could make it harder to hire and retain overseas talent. (Insider $) + Immigrant tech workers are rushing to secure visas before Trump’s inauguration. (Forbes $)
2 People are abandoning X following the US election result Threads and Bluesky are experiencing an influx of new users. (Bloomberg $) + Trump loved Twitter during his first Presidency. Will he during his second? (Insider $)
3 The Biden administration plans to back a controversial cybercrime treaty Critics fear it could be abused by authoritarian regimes to pursue dissidents. (Politico) + The treaty would also make electronic evidence more available to the US. (Bloomberg $)
4 DNA testing firm 23andMe is firing 40% of its workforce Things aren’t looking good for the embattled company. (WSJ $) + The company is axing all its therapy programs, too. (Reuters) + How to delete your 23andMe data. (MIT Technology Review)
5 How oil and gas companies are masking their methane emissions The odorless, colorless gas is notoriously tough to track, but satellites are changing that. (FT $) + Even if we reach net zero, parts of the planet will keep getting warmer. (New Scientist $) + Why methane emissions are still a mystery. (MIT Technology Review)
6 This database tracks license plate cameras across the world The project, called DeFlock, aims to give drivers the choice to avoid certain routes. (404 Media)
7 Baidu has unveiled its AI-integrated smart glasses The device can track calorie consumption, among other features. (FT $) + Smartglasses are a growing trend in China. (SCMP $) + The coolest thing about smart glasses is not the AR. It’s the AI. (MIT Technology Review)
8 Everything we know about Uranus is wrong A brief flyby 40 years ago coincided with a rare spike in solar activity. (NYT $)
9 How Ukraine is rewilding amid the war Ecologists believe the conflict’s catastrophes can birth environmental gains. (Undark Magazine) + Ukraine has a plan for getting Trump onside. (Vox)
10 To find alien life, look to the mountains Who knows what’s trapped under tectonic plates? (The Atlantic $)
Quote of the day
“I did not say I was uncomfortable talking about it. I said we’re not going to talk about it.”
—Michael Barratt, an astronaut and medical doctor, refuses to elaborate on a medical issue an astronaut experienced during a recent mission, Ars Technica reports.
The big story
Zimbabwe’s climate migration is a sign of what’s to come
December 2021
Julius Mutero has spent his entire adult life farming a three-hectare plot in Zimbabwe, but has harvested virtually nothing in the past six years. He is just one of the 86 million people in sub-Saharan Africa who the World Bank estimates will migrate domestically by 2050 because of climate change.
In Zimbabwe, farmers who have tried to stay put and adapt have found their efforts woefully inadequate in the face of new weather extremes. Droughts have already forced tens of thousands from their homes. But their desperate moves are creating new competition for water in the region, and tensions may soon boil over. Read the full story.
—Andrew Mambondiyani
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ Here’s how to make perfect cacio e pepe every time. + New York is a wonderful place—even if you’re a native New Yorker, there’s always something new to try for the first time. + The 2024 Nature’s Best Photo Awards are full of delights. + Good luck to the brave souls skiing in central London.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
When we talk about the global race for AI dominance, the conversation often focuses on tensions between the US and China, and European efforts at regulating the technology.
But it’s high time we talked about another player: Africa.
As MIT Technology Review has written before, AI is creating a new colonial world order, where the technology is enriching a small minority of people at the expense of the rest of the world.
African AI researchers are determined to change that. They’re forging their own path, developing tools that answer the needs of Africans, in their own languages.
However, they face many barriers. AI research is eye-wateringly expensive, and African startups and researchers get a fraction of the funding that their Western or Asian counterparts do. They have to innovate and rely on open-source resources to do more with less.
Despite that, the African AI story reflects not only persistence and innovation, but a determination to preserve cultures and shape how AI technologies are used on the continent. Read more here from Abdullahi Tsanni, who went to this year’s Deep Learning Indaba, a machine-learning conference held annually in Senegal, to learn about the opportunities and barriers the African AI scene faces.
And then some personal news! This edition will be my last newsletter, and from next week you’ll be in the extremely capable hands of my colleague James O’Donnell. It’s been a delight writing this newsletter for the past two or so years, and I’m so grateful you’ve joined me on this journey covering everything from snowballs of bullshit to Taylor Swift’s deepfakes. I’m not going anywhere, though. I’ll be diving deeper into the AI beat at MIT Technology Review to bring you stories on what’s happening in AI and how the technology is changing us and our societies. Stay tuned for more!
Finally, while I have you, this week we’re running our biggest sale of the year, with 50% off an annual subscription to MIT Technology Review. New subscribers receive a free digital report on generative AI and the future of work. Subscribe here.
Now read the rest of The Algorithm
Deeper Learning
Why AI could eat quantum computing’s lunch
Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that they’ll be a game changer for fields as diverse as finance, drug discovery, and logistics. Those expectations have been especially high in physics and chemistry, where the weird effects of quantum mechanics come into play. In theory, this is where quantum computers could have a huge advantage over conventional machines.
Enter AI: But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computing’s purported home turf might not be so safe after all.
Given the pace of recent advances, a growing number of researchers are now asking whether AI could solve a substantial chunk of the most interesting problems in chemistry and materials science before large-scale quantum computers become a reality. Read more from Edd Gent here.
Bits and Bytes
The Saudis are planning a $100 billion AI powerhouse Speaking of the race for AI dominance, this piece looks at how Saudi Arabia wants in on the AI action. And it’s putting its money where its mouth is. The country is investing a massive sum to develop a tech hub that it hopes will rival the neighboring United Arab Emirates. (Bloomberg)
AI is making it harder to believe what is real and what is not Two recent examples show just how influential AI slop can be in warping our sense of reality. In Dublin, crowds gathered in the city center to wait for a Halloween parade to take place. There was no parade planned, but the listing was created by AI and then picked up by social media users and local media. By way of contrast, some social media users dismissed shocking images of the devastating recent floods in Spain as AI-generated, although they were entirely real.
AI companies are getting comfortable offering their technology to the military Militaries around the world have been pouring money into new technologies, including AI. Meta and Anthropic are the latest tech companies to start courting them, joining the likes of Google and OpenAI. (The Washington Post)
OpenAI is shifting its strategy as the improvement in its AI tools slows down The current paradigm in AI development is to make things bigger to make them better. But OpenAI’s new model, code-named Orion, only performs slightly better than its predecessors. Instead, OpenAI is shifting to improving models after their initial training. (The Information)
Teaching robots to navigate new environments is tough. You can train them on physical, real-world data taken from recordings made by humans, but that’s scarce and expensive to collect. Digital simulations are a rapid, scalable way to teach them to do new things, but the robots often fail when they’re pulled out of virtual worlds and asked to do the same tasks in the real one.
Now there’s a potentially better option: a new system that uses generative AI models in conjunction with a physics simulator to develop virtual training grounds that more accurately mirror the physical world. Robots trained using this method achieved a higher success rate in real-world tests than those trained using more traditional techniques.
Researchers used the system, called LucidSim, to train a robot dog in parkour, getting it to scramble over a box and climb stairs even though it had never seen any real-world data. The approach demonstrates how helpful generative AI could be when it comes to teaching robots to do challenging tasks. It also raises the possibility that we could ultimately train them in entirely virtual worlds. The research was presented at the Conference on Robot Learning (CoRL) last week.
“We’re in the middle of an industrial revolution for robotics,” says Ge Yang, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory, who worked on the project. “This is our attempt at understanding the impact of these [generative AI] models outside of their original intended purposes, with the hope that it will lead us to the next generation of tools and models.”
LucidSim uses a combination of generative AI models to create the visual training data. First the researchers generated thousands of prompts for ChatGPT, getting it to create descriptions of a range of environments that represent the conditions the robot would encounter in the real world, including different types of weather, times of day, and lighting conditions. These included “an ancient alley lined with tea houses and small, quaint shops, each displaying traditional ornaments and calligraphy” and “the sun illuminates a somewhat unkempt lawn dotted with dry patches.”
These descriptions were fed into a system that maps 3D geometry and physics data onto AI-generated images, creating short videos mapping a trajectory for the robot to follow. The robot draws on this information to work out the height, width, and depth of the things it has to navigate—a box or a set of stairs, for example.
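The two-stage pipeline described above can be sketched roughly in code. This is purely illustrative: the function names, data shapes, and values below are hypothetical stand-ins (LucidSim’s actual implementation is not shown in this story), meant only to make the flow from text prompts to geometry-grounded training samples concrete.

```python
# Illustrative sketch of the LucidSim-style pipeline as described in the article.
# All names and values here are hypothetical stand-ins, not real API calls.

def generate_scene_prompts(n: int) -> list[str]:
    """Stand-in for step 1: an LLM (the article mentions ChatGPT) produces
    many varied environment descriptions covering weather, time of day,
    and lighting conditions."""
    conditions = [
        "an ancient alley lined with tea houses at dusk",
        "a sunlit, somewhat unkempt lawn with dry patches",
        "a dim stairwell on a rainy morning",
    ]
    return [conditions[i % len(conditions)] for i in range(n)]

def render_training_video(prompt: str, geometry: dict) -> dict:
    """Stand-in for step 2: map 3D geometry and physics data onto an
    AI-generated image, yielding a short video that traces a trajectory,
    so the robot can infer height, width, and depth of obstacles."""
    return {
        "prompt": prompt,
        "obstacle_height_m": geometry["height_m"],
        "frames": 16,  # a short clip per sample
    }

# Assemble a (toy) visual training set for one obstacle, e.g. a box to climb.
geometry = {"height_m": 0.3}
dataset = [render_training_video(p, geometry) for p in generate_scene_prompts(6)]
print(len(dataset), dataset[0]["obstacle_height_m"])  # prints: 6 0.3
```

The key design idea the article describes is that the text prompts supply visual diversity while the simulator’s geometry supplies physical ground truth, so every generated sample stays consistent with an obstacle the robot could actually encounter.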
The researchers tested LucidSim by instructing a four-legged robot equipped with a webcam to complete several tasks, including locating a traffic cone or soccer ball, climbing over a box, and walking up and down stairs. The robot performed consistently better than when it ran a system trained on traditional simulations. In 20 trials to locate the cone, LucidSim had a 100% success rate, versus 70% for systems trained on standard simulations. Similarly, in another 20 trials, LucidSim reached the soccer ball 85% of the time, versus just 35% for the other system.
Finally, when the robot was running LucidSim, it successfully completed all 10 stair-climbing trials, compared with just 50% for the other system.
These results are likely to improve even further in the future if LucidSim draws directly from sophisticated generative video models rather than a cobbled-together combination of language, image, and physics models, says Phillip Isola, an associate professor at MIT who worked on the research.
The researchers’ approach to using generative AI is a novel one that will pave the way for more interesting new research, says Mahi Shafiullah, a PhD student at New York University who is using AI models to train robots. He did not work on the project.
“The more interesting direction I see personally is a mix of both real and realistic ‘imagined’ data that can help our current data-hungry methods scale quicker and better,” he says.
The ability to train a robot from scratch purely on AI-generated situations and scenarios is a significant achievement and could extend beyond machines to more generalized AI agents, says Zafeirios Fountas, a senior research scientist at Huawei specializing in brain‑inspired AI.
“The term ‘robots’ here is used very generally; we’re talking about some sort of AI that interacts with the real world,” he says. “I can imagine this being used to control any sort of visual information, from robots and self-driving cars up to controlling your computer screen or smartphone.”
In terms of next steps, the authors are interested in trying to train a humanoid robot using wholly synthetic data—which they acknowledge is an ambitious goal, as bipedal robots are typically less stable than their four-legged counterparts. They’re also turning their attention to another new challenge: using LucidSim to train the kinds of robotic arms that work in factories and kitchens. The tasks they have to perform require a lot more dexterity and physical understanding than running around a landscape.
“To actually pick up a cup of coffee and pour it is a very hard, open problem,” says Isola. “If we could take a simulation that’s been augmented with generative AI to create a lot of diversity and train a very robust agent that can operate in a café, I think that would be very cool.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What Africa needs to do to become a major AI player
Africa is still early in the process of adopting AI technologies. But researchers say the continent is uniquely hospitable to it for several reasons, including a relatively young and increasingly well-educated population, a rapidly growing ecosystem of AI startups, and lots of potential consumers.
However, ambitious efforts to develop AI tools that answer the needs of Africans face numerous hurdles. The biggest are inadequate funding and poor infrastructure. Limited internet access and a scarcity of domestic data centers also mean that developers might not be able to deploy cutting-edge AI capabilities. Complicating this further is a lack of overarching policies or strategies for harnessing AI’s immense benefits—and regulating its downsides.
Taken together, researchers worry, these issues will hold Africa’s AI sector back and hamper its efforts to pave its own pathway in the global AI race. Read the full story.
—Abdullahi Tsanni
Science and technology stories in the age of Trump
—Mat Honan
I’ve spent most of this year being pretty convinced that Donald Trump would be the 47th president of the United States. Even so, like most people, I was completely surprised by the scope of his victory. This level of victory will certainly provide the political capital to usher in a broad sweep of policy changes.
Some of these changes will be well outside our lane as a publication. But very many of President-elect Trump’s stated policy goals will have direct impacts on science and technology.
So I thought I would share some of my remarks from our edit meeting on Wednesday morning, when we woke up to find out that the world had indeed changed. Read the full story.
This story is from The Debrief, the weekly newsletter from our editor in chief Mat Honan. Sign up to receive it in your inbox every Friday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Canada has recorded its first known bird flu case in a human Officials are investigating how the teenager was exposed to the virus. (NPR) + Canada insists that the risk to the public remains low. (Reuters) + Why virologists are getting increasingly nervous about bird flu. (MIT Technology Review)
2 How MAGA became a rallying call for young men The Republicans’ online strategy tapped into the desires of disillusioned Gen Z men. (WP $) + Elon Musk is assembling a list of favorable would-be Trump advisors. (FT $)
3 Trump’s victory is a win for the US defense industry Palmer Luckey’s Anduril is anticipating a lucrative next four years. (Insider $) + Here’s what Luckey has to say about the Pentagon’s future of mixed reality. (MIT Technology Review) + Traditional weapons are being given AI upgrades. (Wired $)
4 This year is highly likely to be the hottest on record This week’s Cop29 climate summit will thrash out future policies. (The Guardian) + A little-understood contributor to the weather? Microplastics. (Wired $) + Trump’s win is a tragic loss for climate progress. (MIT Technology Review)
5 Ukraine is scrambling to repair its power stations Workers are dismantling plants to repair other stations hit by Russian attacks. (WSJ $) + Meet the radio-obsessed civilian shaping Ukraine’s drone defense. (MIT Technology Review)
6 We need better ways to evaluate LLMs Tech giants are coming up with better methods of measuring these systems. (FT $) + The improvements in the tech behind ChatGPT appear to be slowing. (The Information $) + AI hype is built on high test scores. Those tests are flawed. (MIT Technology Review)
7 FTX is suing crypto exchange Binance It claims Sam Bankman-Fried fraudulently transferred close to $1.8 billion to Binance in 2021. (Bloomberg $) + Meanwhile, bitcoin is surging to new record heights. (Reuters)
8 What we know about tech and loneliness While there’s little evidence tech directly makes us lonely, there’s a strong correlation between the two. (NYT $)
9 What’s next for space policy in the US If one person’s interested in the cosmos, it’s Elon Musk. (Ars Technica)
10 Could you save the Earth from a killer asteroid? It’s a game that’s part strategy, part luck. (New Scientist $) + Earth is probably safe from a killer asteroid for 1,000 years. (MIT Technology Review)
Quote of the day
“‘Conflict of interest’ seems rather quaint.”
—Gita Johar, a professor at Columbia Business School, tells the Guardian about Donald Trump and Elon Musk’s openly transactional relationship.
The big story
Quartz, cobalt, and the waste we leave behind
May 2024
It is easy to convince ourselves that we now live in a dematerialized ethereal world, ruled by digital startups, artificial intelligence, and financial services.
Yet there is little evidence that we have decoupled our economy from its churning hunger for resources. We are still reliant on the products of geological processes like coal and quartz, a mineral that’s a rich source of the silicon used to build computer chips, to power our world.
Three recent books aim to reconnect readers with the physical reality that underpins the global economy. Each one fills in dark secrets about the places, processes, and lived realities that make the economy tick, and reveals just how tragic a toll the materials we rely on take for humans and the environment. Read the full story.
—Matthew Ponsford
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ Oscars buzz has already begun, and this year’s early contenders are an interesting bunch. + This sweet art project shows how toys age with love. + Who doesn’t love pretzels? Here’s how to make sure they end up with the perfect fluffy interior and a glossy, chewy crust. + These images of plankton are really quite something.
Rather than analyzing the news this week, I thought I’d lift the hood a bit on how we make it.
I’ve spent most of this year being pretty convinced that Donald Trump would be the 47th president of the United States. Even so, like most people, I was completely surprised by the scope of his victory. By taking the lion’s share not just in the Electoral College but also the popular vote, coupled with the wins in the Senate (and, as I write this, seemingly the House) and ongoing control of the courts, Trump has done far more than simply eke out a win. This level of victory will certainly provide the political capital to usher in a broad sweep of policy changes.
Some of these changes will be well outside our lane as a publication. But very many of President-elect Trump’s stated policy goals will have direct impacts on science and technology. Some of the proposed changes would have profound effects on the industries and innovations we’ve covered regularly, and for years. When he talks about his intention to end EV subsidies, hit the brakes on FTC enforcement actions on Big Tech, ease the rules on crypto, or impose a 60 percent tariff on goods from China, these are squarely in our strike zone and we would be remiss not to explore the policies and their impact in detail.
And so I thought I would share some of my remarks from our edit meeting on Wednesday morning, when we woke up to find out that the world had indeed changed. I think it’s helpful for our audience if we are transparent and upfront about how we intend to operate, especially over the next several months that will likely be, well, chaotic.
This is a moment when our jobs are more important than ever. There will be so much noise and heat out there in the coming weeks and months, and maybe even years. The next six months in particular will be a confusing time for a lot of people. We should strive to be the signal in that noise.
We have extremely important stories to write about the role of science and technology in the new administration. There are obvious stories for us to take on in regards to climate, energy, vaccines, women’s health, IVF, food safety, chips, China, and I’m sure a lot more, that people are going to have all sorts of questions about. Let’s start by making a list of questions we have ourselves. Some of the people and technologies we cover will be ascendant in all sorts of ways. We should interrogate that power. It’s important that we take care in those stories not to be speculative or presumptive. To always have the facts buttoned up. To speak the truth and be unassailable in doing so.
Do we drop everything and only cover this? No. But it will certainly be a massive story that affects nearly all others.
This election will be a transformative moment for society and the world. Trump didn’t just win, he won a mandate. And he’s going to change the country and the global order as a result. The next few weeks will see so much speculation as to what it all means. So much fear, uncertainty, and doubt. There is an enormous amount of bullshit headed down the line. People will be hungry for sources they can trust. We should be there for that. Let’s leverage our credibility, not squander it.
We are not the resistance. We just want to tell the truth. So let’s take a breath, and then go out there and do our jobs.
I like to tell our reporters and editors that our coverage should be free from either hype or cynicism. I think that’s especially true now.
I’m also very interested to hear from our readers: What questions do you have? What are the policy changes or staffing decisions you are curious about? Please drop me a line at mat.honan@technologyreview.com. I’m eager to hear from you.
If someone forwarded you this edition of The Debrief, you can subscribe here.
Every week I’ll talk to one of MIT Technology Review’s reporters or editors to find out more about what they’ve been working on. This week, I chatted with Melissa Heikkilä about her story on how ChatGPT search paves the way for AI agents.
Mat: Melissa, OpenAI rolled out web search for ChatGPT last week. It seems pretty cool. But you got at a really interesting bigger picture point about it paving the way for agents. What does that mean?
Melissa: Microsoft tried to chip away at Google’s search monopoly with Bing, and that didn’t really work. It’s unlikely OpenAI will be able to make much difference either. Their best bet is to try to get users used to a new way of finding information and browsing the web through virtual assistants that can do complex tasks. Tech companies call these agents. ChatGPT’s usefulness is limited by the fact that it can’t access the internet and doesn’t have the most up-to-date information. By integrating a really powerful search engine into the chatbot, suddenly you have a tool that can help you plan things and find information in a far more comprehensive and immersive way than traditional search, and this is a key feature of the next generation of AI assistants.
Mat: What will agents be able to do?
Melissa: AI agents can complete complex tasks autonomously and the vision is that they will work as a human assistant would — book your flights, reschedule your meetings, help with research, you name it. But I wouldn’t get too excited yet. The cutting edge of AI tech can retrieve information and generate stuff, but it still lacks the reasoning and long-term planning skills to be really useful. AI tools like ChatGPT and Claude also can’t interact with computer interfaces, like clicking at stuff, very well. They also need to become a lot more reliable and stop making stuff up, which is still a massive problem with AI. So we’re still a long way away from the vision becoming reality! I wrote an explainer on agents a little while ago with more details.
Mat: Is search as we know it going away? Are we just moving to a world of agents that not only answer questions but also accomplish tasks?
Melissa: It’s really hard to say. We are so used to using online search, and it’s surprisingly hard to change people’s behaviors. Unless agents become super reliable and powerful, I don’t think search is going to go away.
Mat: By the way, I know you are in the UK. Did you hear we had an election over here in the US?
Melissa: LOL
The Recommendation
I’m just back from a family vacation in New York City, where I was in town to run the marathon. (I get to point this out for like one or two more weeks before the bragging gets tedious, I think.) While there, we went to see The Outsiders. Chat, it was incredible. (Which maybe should go without saying given that it won the Tony for best musical.) But wow. I loved the book and the movie as a kid. But this hit me on an entirely other level. I’m not really a cries-at-movies (or especially at musicals) kind of person, but I was wiping my eyes for much of the second act. So were many of the people sitting around me. Anyway. If you’re in New York, or if it comes to your city, go see it. And until then, the soundtrack is pretty amazing on its own. (Here’s a great example.)
Kessel Okinga-Koumu paced around a crowded hallway. It was her first time presenting at the Deep Learning Indaba, she told the crowd gathered to hear her, filled with researchers from Africa’s machine-learning community. The annual weeklong conference (‘Indaba’ is a Zulu word for ‘gathering’) was held most recently in September at Amadou Mahtar Mbow University in Dakar, Senegal. It attracted over 700 attendees to hear about—and debate—the potential of Africa-centric AI and how it’s being deployed in agriculture, education, health care, and other critical sectors of the continent’s economy.
A 28-year-old computer science student at the University of the Western Cape in Cape Town, South Africa, Okinga-Koumu spoke about how she’s tackling a common problem: the lack of lab equipment at her university. Lecturers have long been forced to use chalkboards or printed 2D representations of equipment to simulate practical lessons that need microscopes, centrifuges, or other expensive tools. “In some cases, they even ask students to draw the equipment during practical lessons,” she lamented.
Okinga-Koumu pulled a phone from the pocket of her blue jeans and opened a prototype web app she’s built. Using VR and AI features, the app allows students to simulate using the necessary lab equipment—exploring 3D models of the tools in a real-world setting, like a classroom or lab. “Students could have detailed VR of lab equipment, making their hands-on experience more effective,” she said.
Established in 2017, the Deep Learning Indaba now has chapters in 47 of the 55 African nations and aims to boost AI development across the continent by providing training and resources to African AI researchers like Okinga-Koumu. Africa is still early in the process of adopting AI technologies, but organizers say the continent is uniquely hospitable to it for several reasons, including a relatively young and increasingly well-educated population, a rapidly growing ecosystem of AI startups, and lots of potential consumers.
“The building and ownership of AI solutions tailored to local contexts is crucial for equitable development,” says Shakir Mohamed, a senior research scientist at Google DeepMind and cofounder of the organization sponsoring the conference. Africa, more than other continents in the world, can address specific challenges with AI and will benefit immensely from its young talent, he says: “There is amazing expertise everywhere across the continent.”
However, researchers’ ambitious efforts to develop AI tools that answer the needs of Africans face numerous hurdles. The biggest are inadequate funding and poor infrastructure. Not only is it very expensive to build AI systems, but research to provide AI training data in original African languages has been hamstrung by poor financing of linguistics departments at many African universities and the fact that citizens increasingly don’t speak or write local languages themselves. Limited internet access and a scarcity of domestic data centers also mean that developers might not be able to deploy cutting-edge AI capabilities.
Complicating this further is a lack of overarching policies or strategies for harnessing AI’s immense benefits—and regulating its downsides. While there are various draft policy documents, researchers are in conflict over a continent-wide strategy. And they disagree about which policies would most benefit Africa, not the wealthy Western governments and corporations that have often funded technological innovation.
Taken together, researchers worry, these issues will hold Africa’s AI sector back and hamper its efforts to pave its own pathway in the global AI race.
On the cusp of change
Africa’s researchers are already making the most of generative AI’s impressive capabilities. In South Africa, for instance, to help address the HIV epidemic, scientists have designed an app called Your Choice, powered by an LLM-based chatbot that interacts with people to obtain their sexual history without stigma or discrimination. In Kenya, farmers are using AI apps to diagnose diseases in crops and increase productivity. And in Nigeria, Awarri, a newly minted AI startup, is trying to build the country’s first large language model, with the endorsement of the government, so that Nigerian languages can be integrated into AI tools.
The Deep Learning Indaba is another sign of how Africa’s AI research scene is starting to flourish. At the Dakar meeting, researchers presented 150 posters and 62 papers. Of those, 30 will be published in top-tier journals, according to Mohamed.
Meanwhile, an analysis of 1,646 publications in AI between 2013 and 2022 found “a significant increase in publications” from Africa. And Masakhane, a cousin organization to Deep Learning Indaba that pushes for natural-language-processing research in African languages, has released over 400 open-source models and 20 African-language data sets since it was founded in 2018.
“These metrics speak a lot to the capacity building that’s happening,” says Kathleen Siminyu, a computer scientist from Kenya, who researches NLP tools for her native Kiswahili. “We’re starting to see a critical mass of people having basic foundational skills. They then go on to specialize.”
She adds: “It’s like a wave that cannot be stopped.”
Khadija Ba, a Senegalese entrepreneur and investor at the pan-African VC fund P1 Ventures who was at this year’s conference, says that she sees African AI startups as particularly attractive because their local approaches have potential to be scaled for the global market. African startups often build solutions in the absence of robust infrastructure, yet “these innovations work efficiently, making them adaptable to other regions facing similar challenges,” she says.
In recent years, funding in Africa’s tech ecosystem has picked up: VC investment totaled $4.5 billion last year, more than double what it was just five years ago, according to a report by the African Private Capital Association. And this October, Google announced a $5.8 million commitment to support AI training initiatives in Kenya, Nigeria, and South Africa. But researchers say local funding remains sluggish. Take the Google-backed fund rolled out, also in October, in Nigeria, Africa’s most populous country. It will pay out $6,000 each to 10 AI startups—not even enough to purchase the equipment needed to power their systems.
Lilian Wanzare, a lecturer and NLP researcher at Maseno University in Kisumu, Kenya, bridles at African governments’ lackadaisical support for local AI initiatives and complains that her own government charges exorbitant fees for access to publicly generated data, hindering data sharing and collaboration. “[We] researchers are just blocked,” she says. “The government is saying they’re willing to support us, but the structures have not been put in place for us.”
Language barriers
Researchers who want to make Africa-centric AI don’t face just insufficient local investment and inaccessible data. There are major linguistic challenges, too.
During one discussion at the Indaba, Ife Adebara, a Nigerian computational linguist, posed a question: “How many people can write a bachelor’s thesis in their native African language?”
Zero hands went up.
Then the audience dissolved into laughter.
Africans want AI to speak their local languages, but many Africans cannot speak and write in these languages themselves, Adebara said.
Although Africa accounts for one-third of all languages in the world, many of its oral languages are slowly disappearing as their populations of native speakers decline. And LLMs developed by Western-based tech companies fail to serve African languages; they don’t understand locally relevant context and culture.
For Adebara and others researching NLP tools, the lack of people who have the ability to read and write in African languages poses a major hurdle to development of bespoke AI-enabled technologies. “Without literacy in our local languages, the future of AI in Africa is not as bright as we think,” she says.
On top of all that, there’s little machine-readable data for African languages. One reason is that linguistic departments in public universities are poorly funded, Adebara says, limiting linguists’ participation in work that could create such data and benefit AI development.
This year, she and her colleagues established EqualyzAI, a for-profit company seeking to preserve African languages through digital technology. They have built voice tools and AI models, covering about 517 African languages.
Lelapa AI, a software company that’s building data sets and NLP tools for African languages, is also trying to address these language-specific challenges. Its cofounders met in 2017 at the first Deep Learning Indaba and launched the company in 2022. In 2023, it released its first AI tool, Vulavula, a speech-to-text program that recognizes several languages spoken in South Africa.
This year, Lelapa AI released InkubaLM, a first-of-its-kind small language model that currently supports a range of African languages: IsiXhosa, Yoruba, Swahili, IsiZulu, and Hausa. InkubaLM can answer questions and perform tasks like English translation and sentiment analysis. In tests, it performed as well as some larger models. But it’s still in early stages. The hope is that InkubaLM will someday power Vulavula, says Jade Abbott, cofounder and chief operating officer of Lelapa AI.
“It’s the first iteration of us really expressing our long-term vision of what we want, and where we see African AI in the future,” Abbott says. “What we’re really building is a small language model that punches above its weight.”
InkubaLM is trained on two open-source data sets with 1.9 billion tokens, built and curated by Masakhane and other African developers who worked with real people in local communities. They paid native speakers to attend writing workshops and create data for the model.
Fundamentally, this approach will always be better, says Wanzare, because it’s informed by people who represent the language and culture.
A clash over strategy
Another issue that came up again and again at the Indaba was that Africa’s AI scene lacks the sort of regulation and support from governments that you find elsewhere in the world—in Europe, the US, China, and, increasingly, the Middle East.
Of the 55 African nations, only seven—Senegal, Egypt, Mauritius, Rwanda, Algeria, Nigeria, and Benin—have developed their own formal AI strategies. And many of those are still in the early stages.
A major point of tension at the Indaba, though, was the regulatory framework that will govern the approach to AI across the entire continent. In March, the African Union Development Agency published a white paper, developed over a three-year period, that lays out this strategy. The 200-page document includes recommendations for industry codes and practices, standards to assess and benchmark AI systems, and a blueprint of AI regulations for African nations to adopt. The hope is that it will be endorsed by the heads of African governments in February 2025 and eventually passed by the African Union.
But in July, the African Union Commission in Addis Ababa, Ethiopia, another African governing body that wields more power than the development agency, released a rival continental AI strategy—a 66-page document that diverges from the initial white paper.
It’s unclear what’s behind the second strategy, but Seydina Ndiaye, a program director at the Cheikh Hamidou Kane Digital University in Dakar who helped draft the development agency’s white paper, claims it was drafted by a tech lobbyist from Switzerland. The commission’s strategy calls for African Union member states to declare AI a national priority, promote AI startups, and develop regulatory frameworks to address safety and security challenges. But Ndiaye expressed concerns that the document does not reflect the perspectives, aspirations, knowledge, and work of grassroots African AI communities. “It’s a copy-paste of what’s going on outside the continent,” he says.
Vukosi Marivate, a computer scientist at the University of Pretoria in South Africa who helped found the Deep Learning Indaba and is known as an advocate for the African machine-learning movement, expressed fury over this turn of events at the conference. “These are things we shouldn’t accept,” he declared. The room full of data wonks, linguists, and international funders brimmed with frustration. But Marivate encouraged the group to forge ahead with building AI that benefits Africans: “We don’t have to wait for the rules to act right,” he said.
Barbara Glover, a program manager for the African Union Development Agency, acknowledges that AI researchers are angry and frustrated. There’s been a push to harmonize the two continental AI strategies, but she says the process has been fractious: “That engagement didn’t go as envisioned.” Her agency plans to keep its own version of the continental AI strategy, Glover says, adding that it was developed by African experts rather than outsiders. “We are capable, as Africans, of driving our own AI agenda,” she says.
This all speaks to a broader tension over foreign influence in the African AI scene, one that goes beyond any single strategic document. Mirroring the skepticism toward the African Union Commission strategy, critics say the Deep Learning Indaba is tainted by its reliance on funding from big foreign tech companies; roughly 50% of its $500,000 annual budget comes from international donors and the rest from corporations like Google DeepMind, Apple, OpenAI, and Meta. They argue that this cash could pollute the Indaba’s activities and influence the topics and speakers chosen for discussion.
But Mohamed, the Indaba cofounder who is a researcher at Google DeepMind, says that “almost all that goes back to our beneficiaries across the continent,” and the organization helps connect them to training opportunities in tech companies. He says it benefits from some of its cofounders’ ties with these companies but that they do not set the agenda.
Ndiaye says that the funding is necessary to keep the conference going. “But we need to have more African governments involved,” he says.
To Timnit Gebru, founder and executive director at the nonprofit Distributed AI Research Institute (DAIR), which supports equitable AI research in Africa, the angst about foreign funding for AI development comes down to skepticism of exploitative, profit-driven international tech companies. “Africans [need] to do something different and not replicate the same issues we’re fighting against,” Gebru says. She warns about the pressure to adopt “AI for everything in Africa,” adding that there’s “a lot of push from international development organizations” to use AI as an “antidote” for all Africa’s challenges.
Siminyu, who is also a researcher at DAIR, agrees with that view. She hopes that African governments will fund and work with people in Africa to build AI tools that reach underrepresented communities—tools that can be used in positive ways and in a context that works for Africans. “We should be afforded the dignity of having AI tools in a way that others do,” she says.
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Why AI could eat quantum computing’s lunch
Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that they’ll be a game changer for fields as diverse as finance, drug discovery, and logistics.
But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computing’s purported home turf might not be so safe after all. Read the full story.
—Edd Gent
What’s next for reproductive rights in the US
This week, it wasn’t just the future president of the US that was on the ballot. Ten states also voted on abortion rights.
Two years ago, the US Supreme Court overturned Roe v. Wade, a legal decision that protected the right to abortion. Since then, abortion bans have been enacted in multiple states, and millions of people in the US have lost access to local clinics.
Now, some states are voting to extend and protect access to abortion. Missouri, a state that has long restricted access, even voted to overturn its ban. But it’s not all good news for proponents of reproductive rights. Read the full story.
—Jessica Hamzelou
This story is from The Checkup, our weekly newsletter giving you the inside track on all things biotech. Sign up to receive it in your inbox every Thursday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Black Americans received racist texts threatening them with slavery
Some of the messages claim to be from Trump supporters or the Trump administration. (WP $)
+ What Trump’s last tenure as president can teach us about what’s coming. (New Yorker $)
+ The January 6 rioters are hoping for early pardons and release. (Wired $)
2 China is shoring up its economy to the tune of $1.4 trillion
It’s bracing itself for increased trade tensions with a Trump-governed US. (FT $)
+ The country’s chip industry has a plan too. (Reuters)
+ We’re witnessing the return of Trumponomics. (Economist $)
+ Here’s how the tech markets have reacted to his reelection. (Insider $)
3 How crypto came out on top
Trump is all in, even if he previously dismissed it as a scam. (Bloomberg $)
+ Enthusiasts are hoping for less regulation and more favorable legislation. (Time $)
4 A weight-loss drug contributed to the death of a nurse in the UK
Susan McGowan took two doses of Mounjaro in the weeks before her death. (BBC)
+ It’s the first known death to be officially linked to the drug in the UK. (The Guardian)
5 An academic’s lawsuit against Meta has been dismissed
Ethan Zuckerman wanted protection against the firm for building an unfollowing tool. (NYT $)
6 How the Republicans won online
The right-wing influencer ecosystem is extremely powerful and effective. (The Atlantic $)
+ The left doesn’t really have an equivalent network. (Vox)
+ X users are considering leaving the platform in protest (again). (Slate $)
7 What does the future of America’s public health look like?
Noted conspiracy theorist and anti-vaxxer RFK Jr could be in charge soon. (NY Mag $)
+ Letting Kennedy “go wild on health” is not a great sign. (Forbes $)
+ His war on fluoride in drinking water is already underway. (Politico)
8 An AI-created portrait of Alan Turing has sold for $1 million
Just… why? (The Guardian)
+ Why artists are becoming less scared of AI. (MIT Technology Review)
9 How to harness energy from space
A relay system of transmitters could help to ping it back to Earth. (IEEE Spectrum)
+ The quest to figure out farming on Mars. (MIT Technology Review)
10 AI-generated videos are not interesting
That’s according to the arbiters of what is and isn’t interesting over at Reddit. (404 Media)
+ What’s next for generative video. (MIT Technology Review)
Quote of the day
“That’s petty, right? How much does one piece of fruit per day cost?”
—A former Intel employee reacts to the news that the embattled company is planning to restore free coffee privileges for its staff—but not free fruit, Insider reports.
The big story
Recapturing early internet whimsy with HTML
December 2023
Websites weren’t always slick digital experiences.
There was a time when surfing the web involved opening tabs that played music against your will and sifting through walls of text on a colored background. In the 2000s, before Squarespace and social media, websites were manifestations of individuality—built from scratch using HTML, by users who had some knowledge of code.
Scattered across the web are communities of programmers working to revive this seemingly outdated approach. And the movement is anything but a superficial appeal to retro aesthetics—it’s about celebrating the human touch in digital experiences. Read the full story.
—Tiffany Ng
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
Earlier this week, Americans cast their votes in a seminal presidential election. But it wasn’t just the future president of the US that was on the ballot. Ten states also voted on abortion rights.
Two years ago, the US Supreme Court overturned Roe v. Wade, a legal decision that protected the right to abortion. Since then, abortion bans have been enacted in multiple states, and millions of people in the US have lost access to local clinics.
Now, some states are voting to extend and protect access to abortion. This week, seven states voted in support of such measures. And voters in Missouri, a state that has long restricted access, have voted to overturn its ban.
It’s not all good news for proponents of reproductive rights—some states voted against abortion access. And questions remain over the impact of a second term under former president Donald Trump, who is set to return to the post in January.
Roe v. Wade, the legal decision that enshrined a constitutional right to abortion in the US in 1973, guaranteed the right to an abortion up to the point of fetal viability, which is generally considered to be around 24 weeks of pregnancy. It was overturned by the US Supreme Court in the summer of 2022.
Within 100 days of the decision, 13 states had enacted total bans on abortion from the moment of conception. Clinics in these states could no longer offer abortions. Other states also restricted abortion access. In that 100-day period, 66 of the 79 clinics across 15 states stopped offering abortion services, and 26 closed completely, according to research by the Guttmacher Institute.
The political backlash to the decision was intense. This week, abortion was on the ballot in 10 states: Arizona, Colorado, Florida, Maryland, Missouri, Montana, Nebraska, Nevada, New York, and South Dakota. And seven of them voted in support of abortion access.
Missouri was the first state to enact an abortion ban once Roe v. Wade was overturned. The state’s current Right to Life of the Unborn Child Act prohibits doctors from performing abortions unless there is a medical emergency. It has no exceptions for rape or incest. This week, the state voted to overturn that ban and protect access to abortion up to fetal viability.
Not all states voted in support of reproductive rights. Amendments to expand access failed to garner enough support in Nebraska, South Dakota, and Florida. In Florida, for example, where abortions after six weeks of pregnancy are banned, an amendment to protect access until fetal viability got 57% of the vote, falling just short of the 60% the state required for it to pass.
It’s hard to predict how reproductive rights will fare over the course of a second Trump term. Trump himself has been inconsistent on the issue. During his first term, he installed members of the Supreme Court who helped overturn Roe v. Wade. During his most recent campaign he said that decisions on reproductive rights should be left to individual states.
Trump, himself a Florida resident, has refused to comment on how he voted in the state’s recent ballot question on abortion rights. When asked, he said that the reporter who posed the question “should just stop talking about that,” according to the Associated Press.
State decisions can affect reproductive rights beyond abortion access. Just look at Alabama. In February, the Alabama Supreme Court ruled that frozen embryos can be considered children under state law. Embryos are routinely cryopreserved in the course of in vitro fertilization treatment, and the ruling was considered likely to significantly restrict access to IVF in the state. (In March, the state passed another law protecting clinics from legal repercussions should they damage or destroy embryos during IVF procedures, but the status of embryos remains unchanged.)
Whatever is in store for reproductive rights in the US in the coming months and years, all we’ve seen so far suggests that it’s likely to be a bumpy ride.
Now read the rest of The Checkup
Read more from MIT Technology Review’s archive
My colleague Rhiannon Williams reported on the immediate aftermath of the decision that reversed Roe v. Wade when it was announced a couple of years ago.
The Alabama Supreme Court ruling on embryos could also affect the development of technologies designed to serve as “artificial wombs,” as Antonio Regalado explained at the time.
We’ve also reported on attempts to create embryo-like structures using stem cells. These structures look like embryos but are created without eggs or sperm. There’s a “wild race” afoot to make these more like the real thing. But both scientific and ethical questions remain over how far we can—and should—go.
My colleagues have been exploring what the US election outcome might mean for climate policies. Senior climate editor James Temple writes that Trump’s victory is “a stunning setback for climate change.” And senior reporter Casey Crownhart explains how efforts including a trio of laws implemented by the Biden administration, which massively increased climate funding, could be undone.
From around the web
Donald Trump has said he’ll let Robert F. Kennedy Jr. “go wild on health.” Here’s where the former environmental lawyer and independent candidate—who has no medical or public health degrees—stands on vaccines, fluoride, and the Affordable Care Act. (New York Times)
And, in case you need it, here’s some lighter reading:
Scientists are sequencing the DNA of tiny marine plankton for the first time. (Come for the story of the scientific expedition; stay for the beautiful images of jellies and sea sapphires.) (The Guardian)
Dolphins are known to communicate with whistles and clicks. But scientists were surprised to find a “highly vocal” solitary dolphin in the Baltic Sea. They think the animal is engaging in “dolphin self-talk.” (Bioacoustics)
How much do you know about baby animals? Test your knowledge in this quiz. (National Geographic)
Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that they’ll be a game changer for fields as diverse as finance, drug discovery, and logistics.
Those expectations have been especially high in physics and chemistry, where the weird effects of quantum mechanics come into play. In theory, this is where quantum computers could have a huge advantage over conventional machines.
But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computing’s purported home turf might not be so safe after all.
The scale and complexity of quantum systems that can be simulated using AI is advancing rapidly, says Giuseppe Carleo, a professor of computational physics at the Swiss Federal Institute of Technology (EPFL). Last month, he coauthored a paper published in Science showing that neural-network-based approaches are rapidly becoming the leading technique for modeling materials with strong quantum properties. Meta also recently unveiled an AI model trained on a massive new data set of materials that has jumped to the top of a leaderboard for machine-learning approaches to material discovery.
Given the pace of recent advances, a growing number of researchers are now asking whether AI could solve a substantial chunk of the most interesting problems in chemistry and materials science before large-scale quantum computers become a reality.
“The existence of these new contenders in machine learning is a serious hit to the potential applications of quantum computers,” says Carleo. “In my opinion, these companies will find out sooner or later that their investments are not justified.”
Exponential problems
The promise of quantum computers lies in their potential to carry out certain calculations much faster than conventional computers. Realizing this promise will require much larger quantum processors than we have today. The biggest devices have just crossed the thousand-qubit mark, but achieving an undeniable advantage over classical computers will likely require tens of thousands, if not millions. Once that hardware is available, though, a handful of quantum algorithms, like the encryption-cracking Shor’s algorithm, have the potential to solve problems exponentially faster than classical algorithms can.
But for many quantum algorithms with more obvious commercial applications, like searching databases, solving optimization problems, or powering AI, the speed advantage is more modest. And last year, a paper coauthored by Microsoft’s head of quantum computing, Matthias Troyer, showed that these theoretical advantages disappear if you account for the fact that quantum hardware operates orders of magnitude slower than modern computer chips. The difficulty of getting large amounts of classical data in and out of a quantum computer is also a major barrier.
So Troyer and his colleagues concluded that quantum computers should instead focus on problems in chemistry and materials science that require simulation of systems where quantum effects dominate. A computer that operates along the same quantum principles as these systems should, in theory, have a natural advantage here. In fact, this has been a driving idea behind quantum computing ever since the renowned physicist Richard Feynman first proposed the idea.
The rules of quantum mechanics govern many things with huge practical and commercial value, like proteins, drugs, and materials. Their properties are determined by the interactions of their constituent particles, in particular their electrons—and simulating these interactions in a computer should make it possible to predict what kinds of characteristics a molecule will exhibit. This could prove invaluable for discovering things like new medicines or more efficient battery chemistries, for example.
But the intuition-defying rules of quantum mechanics—in particular, the phenomenon of entanglement, which allows the quantum states of distant particles to become intrinsically linked—can make these interactions incredibly complex. Precisely tracking them requires complicated math that gets exponentially tougher the more particles are involved. That can make simulating large quantum systems intractable on classical machines.
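The exponential growth described here is easy to make concrete: a dense description of a system of n two-level quantum particles (qubits) requires 2^n complex amplitudes, so merely storing the state outruns classical memory long before n reaches the sizes chemists care about. A back-of-the-envelope sketch (my illustration, not from the article):

```python
# Memory needed to store a dense quantum state vector grows as 2**n.
# Each complex amplitude takes 16 bytes (two 8-byte floats).

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Bytes required to hold the full state vector of n qubits."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} particles -> {gib:,.0f} GiB")
```

At 30 particles the state already fills 16 GiB; at 50 it would need more memory than today’s largest supercomputers hold, which is why exact classical simulation of strongly correlated systems breaks down so quickly.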
This is where quantum computers could shine. Because they also operate on quantum principles, they are able to represent quantum states much more efficiently than is possible on classical machines. They could also take advantage of quantum effects to speed up their calculations.
But not all quantum systems are the same. Their complexity is determined by the extent to which their particles interact, or correlate, with each other. In systems where these interactions are strong, tracking all these relationships can quickly explode the number of calculations required to model the system. But in most systems that are of practical interest to chemists and materials scientists, correlation is weak, says Giuseppe Carleo, a computational physicist at EPFL in Switzerland. That means their particles don’t affect each other’s behavior significantly, which makes the systems far simpler to model.
The upshot, says Carleo, is that quantum computers are unlikely to provide any advantage for most problems in chemistry and materials science. Classical tools that can accurately model weakly correlated systems already exist, the most prominent being density functional theory (DFT). The insight behind DFT is that all you need to understand a system’s key properties is its electron density, a measure of how its electrons are distributed in space. This makes for much simpler computation but can still provide accurate results for weakly correlated systems.
Simulating large systems using these approaches requires considerable computing power. But in recent years there’s been an explosion of research using DFT to generate data on chemicals, biomolecules, and materials—data that can be used to train neural networks. These AI models learn patterns in the data that allow them to predict what properties a particular chemical structure is likely to have, but they are orders of magnitude cheaper to run than conventional DFT calculations.
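The pattern described here — run the expensive DFT calculations once, then train a cheap statistical surrogate on the results — can be sketched in miniature. Everything below is synthetic and illustrative: made-up molecular descriptors and labels, and a plain least-squares fit standing in for the deep networks used in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 5 structural descriptors per molecule, and a
# property label that in a real pipeline would come from a DFT calculation.
X = rng.standard_normal((200, 5))
true_w = np.array([0.5, -1.2, 0.3, 0.0, 2.0])
y = X @ true_w + 0.01 * rng.standard_normal(200)

# "Training": fit the surrogate once on the expensive labels...
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# ...after which a prediction for a new structure costs a dot product,
# not a fresh DFT run.
X_new = rng.standard_normal((3, 5))
print(X_new @ w)
```

The economics, not the model class, are the point: the surrogate amortizes the cost of the quantum-mechanical calculations across every later prediction.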
This has dramatically expanded the size of systems that can be modeled—to as many as 100,000 atoms at a time—and how long simulations can run, says Alexandre Tkatchenko, a physics professor at the University of Luxembourg. “It’s wonderful. You can really do most of chemistry,” he says.
Olexandr Isayev, a chemistry professor at Carnegie Mellon University, says these techniques are already being widely applied by companies in chemistry and life sciences. And for researchers, previously out of reach problems such as optimizing chemical reactions, developing new battery materials, and understanding protein binding are finally becoming tractable.
As with most AI applications, the biggest bottleneck is data, says Isayev. Meta’s recently released materials data set was made up of DFT calculations on 118 million molecules. A model trained on this data achieved state-of-the-art performance, but creating the training material took vast computing resources, well beyond what’s accessible to most research teams. That means fulfilling the full promise of this approach will require massive investment.
Modeling a weakly correlated system using DFT is not an exponentially scaling problem, though. This suggests that with more data and computing resources, AI-based classical approaches could simulate even the largest of these systems, says Tkatchenko. Given that quantum computers powerful enough to compete are likely still decades away, he adds, AI’s current trajectory suggests it could reach important milestones, such as precisely simulating how drugs bind to a protein, much sooner.
Strong correlations
When it comes to simulating strongly correlated quantum systems—ones whose particles interact a lot—methods like DFT quickly run out of steam. While more exotic, these systems include materials with potentially transformative capabilities, like high-temperature superconductivity or ultra-precise sensing. But even here, AI is making significant strides.
In 2017, EPFL’s Carleo and Microsoft’s Troyer published a seminal paper in Science showing that neural networks could model strongly correlated quantum systems. The approach doesn’t learn from data in the classical sense. Instead, Carleo says, it is similar to DeepMind’s AlphaZero model, which mastered the games of Go, chess, and shogi using nothing more than the rules of each game and the ability to play itself.
In this case, the rules of the game are provided by Schrödinger’s equation, which can precisely describe a system’s quantum state, or wave function. The model plays against itself by arranging particles in a certain configuration and then measuring the system’s energy level. The goal is to reach the lowest energy configuration (known as the ground state), which determines the system’s properties. The model repeats this process until energy levels stop falling, indicating that the ground state—or something close to it—has been reached.
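A toy version of that self-play loop fits in a few lines. Every specific choice here is my own illustration rather than the paper's actual setup: the system is a two-spin transverse-field Ising model, the "neural network" is a one-hidden-unit RBM-style wave function, and the gradients are crude finite differences.

```python
import numpy as np

# Two-spin transverse-field Ising Hamiltonian: H = -Z(x)Z - (X(x)I + I(x)X).
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = -np.kron(Z, Z) - (np.kron(X, I2) + np.kron(I2, X))

# Spin values s_i in {+1, -1} for the basis states |00>, |01>, |10>, |11>.
spins = np.array([[1 - 2 * ((k >> 1) & 1), 1 - 2 * (k & 1)] for k in range(4)])

def psi(theta):
    """RBM-style trial wave function: 2 visible spins, 1 hidden unit."""
    a, b, w = theta[:2], theta[2], theta[3:]
    return np.exp(spins @ a) * 2 * np.cosh(b + spins @ w)

def energy(theta):
    """Variational energy <psi|H|psi> / <psi|psi> -- the score to minimize."""
    v = psi(theta)
    return v @ H @ v / (v @ v)

# The self-play loop: propose parameters, measure the energy, move downhill.
rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal(5)
eps, lr = 1e-5, 0.1
for _ in range(2000):
    grad = np.array([(energy(theta + eps * np.eye(5)[i])
                      - energy(theta - eps * np.eye(5)[i])) / (2 * eps)
                     for i in range(5)])
    theta -= lr * grad

exact = np.linalg.eigvalsh(H)[0]  # true ground-state energy, for comparison
print(energy(theta), exact)
```

The variational energy can never dip below the true ground-state energy, so watching it plateau is exactly the stopping signal the passage describes.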
The power of these models is their ability to compress information, says Carleo. “The wave function is a very complicated mathematical object,” he says. “What has been shown by several papers now is that [the neural network] is able to capture the complexity of this object in a way that can be handled by a classical machine.”
Since the 2017 paper, the approach has been extended to a wide range of strongly correlated systems, says Carleo, and results have been impressive. The Science paper he published with colleagues last month put leading classical simulation techniques to the test on a variety of tricky quantum simulation problems, with the goal of creating a benchmark to judge advances in both classical and quantum approaches.
Carleo says that neural-network-based techniques are now the best approach for simulating many of the most complex quantum systems they tested. “Machine learning is really taking the lead in many of these problems,” he says.
These techniques are catching the eye of some big players in the tech industry. In August, researchers at DeepMind showed in a paper in Science that they could accurately model excited states in quantum systems, which could one day help predict the behavior of things like solar cells, sensors, and lasers. Scientists at Microsoft Research have also developed an open-source software suite to help more researchers use neural networks for simulation.
One of the main advantages of the approach is that it piggybacks on massive investments in AI software and hardware, says Filippo Vicentini, a professor of AI and condensed-matter physics at École Polytechnique in France, who was also a coauthor on the Science benchmarking paper: “Being able to leverage these kinds of technological advancements gives us a huge edge.”
There is a caveat: Because the ground states are effectively found through trial and error rather than explicit calculations, they are only approximations. But this is also why the approach could make progress on what has looked like an intractable problem, says Juan Carrasquilla, a researcher at ETH Zurich and another coauthor on the Science benchmarking paper.
If you want to precisely track all the interactions in a strongly correlated system, the number of calculations you need to do rises exponentially with the system’s size. But if you’re happy with an answer that is just good enough, there’s plenty of scope for taking shortcuts.
“Perhaps there’s no hope to capture it exactly,” says Carrasquilla. “But there’s hope to capture enough information that we capture all the aspects that physicists care about. And if we do that, it’s basically indistinguishable from a true solution.”
And while strongly correlated systems are generally too hard to simulate classically, there are notable instances where this isn’t the case. That includes some systems that are relevant for modeling high-temperature superconductors, according to a 2023 paper in Nature Communications.
“Because of the exponential complexity, you can always find problems for which you can’t find a shortcut,” says Frank Noe, research manager at Microsoft Research, who has led much of the company’s work in this area. “But I think the number of systems for which you can’t find a good shortcut will just become much smaller.”
No magic bullets
However, Stefanie Czischek, an assistant professor of physics at the University of Ottawa, says it can be hard to predict what problems neural networks can feasibly solve. For some complex systems they do incredibly well, but then on other seemingly simple ones, computational costs balloon unexpectedly. “We don’t really know their limitations,” she says. “No one really knows yet what are the conditions that make it hard to represent systems using these neural networks.”
Meanwhile, there have also been significant advances in other classical quantum simulation techniques, says Antoine Georges, director of the Center for Computational Quantum Physics at the Flatiron Institute in New York, who also contributed to the recent Science benchmarking paper. “They are all successful in their own right, and they are also very complementary,” he says. “So I don’t think these machine-learning methods are just going to completely put all the other methods out of business.”
Quantum computers will also have their niche, says Martin Roetteler, senior director of quantum solutions at IonQ, which is developing quantum computers built from trapped ions. While he agrees that classical approaches will likely be sufficient for simulating weakly correlated systems, he’s confident that some large, strongly correlated systems will be beyond their reach. “The exponential is going to bite you,” he says. “There are cases with strongly correlated systems that we cannot treat classically. I’m strongly convinced that that’s the case.”
In contrast, he says, a future fault-tolerant quantum computer with many more qubits than today’s devices will be able to simulate such systems. This could help find new catalysts or improve understanding of metabolic processes in the body—an area of interest to the pharmaceutical industry.
Neural networks are likely to increase the scope of problems that can be solved, says Jay Gambetta, who leads IBM’s quantum computing efforts, but he’s unconvinced they’ll solve the hardest challenges businesses are interested in.
“That’s why many different companies that essentially have chemistry as their requirement are still investigating quantum—because they know exactly where these approximation methods break down,” he says.
Gambetta also rejects the idea that the technologies are rivals. He says the future of computing is likely to involve a hybrid of the two approaches, with quantum and classical subroutines working together to solve problems. “I don’t think they’re in competition. I think they actually add to each other,” he says.
But Scott Aaronson, who directs the Quantum Information Center at the University of Texas, says machine-learning approaches are directly competing against quantum computers in areas like quantum chemistry and condensed-matter physics. He predicts that a combination of machine learning and quantum simulations will outperform purely classical approaches in many cases, but that won’t become clear until larger, more reliable quantum computers are available.
“From the very beginning, I’ve treated quantum computing as first and foremost a scientific quest, with any industrial applications as icing on the cake,” he says. “So if quantum simulation turns out to beat classical machine learning only rarely, I won’t be quite as crestfallen as some of my colleagues.”
One area where quantum computers look likely to have a clear advantage is in simulating how complex quantum systems evolve over time, says EPFL’s Carleo. This could provide invaluable insights for scientists in fields like statistical mechanics and high-energy physics, but it seems unlikely to lead to practical uses in the near term. “These are more niche applications that, in my opinion, do not justify the massive investments and the massive hype,” Carleo adds.
Nonetheless, the experts MIT Technology Review spoke to said a lack of commercial applications is not a reason to stop pursuing quantum computing, which could lead to fundamental scientific breakthroughs in the long run.
“Science is like a set of nested boxes—you solve one problem and you find five other problems,” says Vicentini. “The complexity of the things we study will increase over time, so we will always need more powerful tools.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Trump’s win is a tragic loss for climate progress
—James Temple
Donald Trump’s decisive victory is a stunning setback for the fight against climate change.
The Republican president-elect’s return to the White House means the US is going to squander precious momentum, unraveling hard-won policy progress that was just beginning to pay off, all for the second time in less than a decade.
It comes at a moment when the world can’t afford to waste time, with nations far off track from any emissions trajectories that would keep our ecosystems stable and our communities safe.
Trump could push the globe into even more dangerous terrain by defanging President Joe Biden’s signature climate laws, exacerbating the dangers of heat waves, floods, wildfires, droughts, and famine and increasing deaths and disease from air pollution. And this time round, I fear it will be far worse. Read the full story.
The US is about to make a sharp turn on climate policy
The past four years have seen the US take climate action seriously, working with the international community and pumping money into solutions. Now, we’re facing a period where things are going to be very different. This is what the next four years will mean for the climate fight. Read the full story.
—Casey Crownhart
This story is from The Spark, a newsletter we send out every Wednesday. If you want to stay up-to-date with all the latest goings-on in climate and energy, sign up.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Tech leaders are lining up to congratulate Donald Trump
In a bid to placate the famously volatile President-elect. (FT $)
+ Many are seeking to rebuild bridges that have fractured since his last tenure. (CNBC)
+ Particularly Jeff Bezos, who has had a fractious relationship with Trump. (NY Mag $)
+ Expect less regulation, more trade upheaval, and a whole lot more Elon Musk. (WP $)

2 Election deniers have gone mysteriously silent
It’s almost as if their claims of fraud were baseless in the first place. (NYT $)
+ It looks like influencer marketing campaigns really did change minds. (Wired $)

3 How Elon Musk is likely to slash US government spending
He has a long history of strategic cost-cutting in his own businesses. (WSJ $)
+ His other ventures are on course for favorable government treatment. (Reuters)
+ It’s easy to forget that Musk claims to have voted Democrat in 2020 and 2016. (WP $)

4 Google could be spared being broken up
Trump has expressed skepticism about the antitrust proposal. (Reuters)
+ It’s far from the only reverse-ferret we’re likely to see. (Economist $)

5 How progressive groups are planning for a future under Trump
Alliances are meeting today to form networks of resources. (Fast Company $)

6 Australia wants to ban under-16s from accessing social media
But it’s not clear how it could be enforced. (The Guardian)
+ The proposed law could come into power as soon as next year. (BBC)
+ Roblox has made sweeping changes to its child safety policies. (Bloomberg $)
+ Child online safety laws will actually hurt kids, critics say. (MIT Technology Review)

7 It looks like OpenAI just paid $10 million for a url
Why ChatGPT when you could just chat.com? (The Verge)
+ How ChatGPT search paves the way for AI agents. (MIT Technology Review)

8 Women in the US are exploring swearing off men altogether
Social media interest in a Korean movement advocating for a man-free life is soaring. (WP $)

9 Gen Z can’t get enough of manifesting
TikTok is teaching them how to will their way to a better life. (Insider $)

10 Tattoo artists are divided over whether they should use AI
AI-assisted designs have been accused of lacking soul. (WSJ $)
Quote of the day
“Don’t worry, I won’t judge — much. Maybe just an eye roll here and there.”
—Lily, a sarcastic AI teenage avatar and star of language learning app Duolingo, greets analysts tuning into the company’s earnings call, Insider reports.
The big story
The great commercial takeover of low-Earth orbit
April 2024
NASA designed the International Space Station to fly for 20 years. It has lasted six years longer than that, though it is showing its age, and NASA is currently studying how to safely destroy the space laboratory by around 2030.
The ISS never really became what some had hoped: a launching point for an expanding human presence in the solar system. But it did enable fundamental research on materials and medicine, and it helped us start to understand how space affects the human body.
To build on that work, NASA has partnered with private companies to develop new, commercial space stations for research, manufacturing, and tourism. If they are successful, these companies will bring about a new era of space exploration: private rockets flying to private destinations. They’re already planning to do it around the moon. One day, Mars could follow. Read the full story.
—David W. Brown
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ Who doesn’t love a smeared makeup look?
+ Time to snuggle up: it’s officially Nora Ephron season.
+ Walking backwards—don’t knock it ‘til you’ve tried it. It’s surprisingly good for you.
+ Feeling stressed? Here’s how to calm your mind in times of trouble.
Donald Trump’s decisive victory is a stunning setback for the fight against climate change.
The Republican president-elect’s return to the White House means the US is going to squander precious momentum, unraveling hard-won policy progress that was just beginning to pay off, all for the second time in less than a decade.
It comes at a moment when the world can’t afford to waste time, with nations far off track from any emissions trajectories that would keep our ecosystems stable and our communities safe. Under the policies in place today, the planet is already set to warm by more than 3 °C over preindustrial levels in the coming decades.
Trump could push the globe into even more dangerous terrain, by defanging President Joe Biden’s signature climate laws. In fact, a second Trump administration could boost greenhouse-gas emissions by 4 billion tons through 2030 alone, according to an earlier analysis by Carbon Brief, a well-regarded climate news and data site. That will exacerbate the dangers of heat waves, floods, wildfires, droughts, and famine and increase deaths and disease from air pollution, inflicting some $900 billion in climate damages around the world, Carbon Brief found.
I started as the climate editor at MIT Technology Review just as Trump came into office the last time. Much of the early job entailed covering his systematic unraveling of the modest climate policy and progress that President Barack Obama had managed to achieve. I fear it will be far worse this time, as Trump ambles into office feeling empowered and aggrieved, and ready to test the rule of law and crack down on dissent.
This time his administration will be staffed all the more by loyalists and ideologues, who have already made plans to force civil servants with expertise and experience out of federal agencies including the Environmental Protection Agency. He’ll be backed by a Supreme Court that he moved well to the right, and which has already undercut landmark environmental doctrines and weakened federal regulatory agencies.
This time the setbacks will sting more, too, because the US did finally manage to pass real, substantive climate policy, through the slimmest of congressional margins. The Inflation Reduction Act and Bipartisan Infrastructure Law allocated massive amounts of government funding to accelerating the shift to low-emissions industries and rebuilding the US manufacturing base around a clean-energy economy.
Trump has made clear he will strive to repeal as many of these provisions as he can, tempered perhaps only by Republicans who recognize that these laws are producing revenue and jobs in their districts. Meanwhile, throughout the prolonged presidential campaign, Trump or his surrogates pledged to boost oil and gas production, eliminate federal support for electric vehicles, end pollution rules for power plants, and remove the US from the Paris climate agreement yet again. Each of those goals stands in direct opposition to the deep, rapid emissions cuts now necessary to prevent the planet from tipping past higher and higher temperature thresholds.
Project 2025, considered a blueprint for the early days of a second Trump administration despite his insistence to the contrary, calls for dismantling or downsizing federal institutions including the National Oceanic and Atmospheric Administration and the Federal Emergency Management Agency. That could cripple the nation’s ability to forecast, track, or respond to storms, floods, and fires like those that have devastated communities in recent months.
Observers I’ve spoken to fear that the Trump administration will also return the Department of Energy, which under Biden had evolved its mission toward developing low-emissions technologies, to the primary task of helping companies dig up more fossil fuels.
The US election could create global ripples as well, and very soon. US negotiators will meet with their counterparts at the annual UN climate conference that kicks off next week. With Trump set to move back into the White House in January, they will have little credibility or leverage to nudge other nations to step up their commitments to reducing emissions.
But those are just some of the direct ways that a second Trump administration will enfeeble the nation’s ability to drive down emissions and counter the growing dangers of climate change. He also has considerable power to stall the economy and sow international chaos amid escalating conflicts in Europe and the Middle East.
Trump’s eagerness to enact tariffs, slash government spending, and deport major portions of the workforce may stunt growth, drive up inflation, and chill investment. All that would make it far more difficult for companies to raise the capital and purchase the components needed to build anything in the US, whether that means wind turbines, solar farms, and seawalls or buildings, bridges, and data centers.
His clumsy handling of the economy and international affairs may also help China extend its dominance in producing and selling the components that are crucial to the energy transition, including batteries, EVs, and solar panels, to customers around the globe.
If one job of a commentator is to find some perspective in difficult moments, I admit I’m mostly failing in this one.
The best I can do is to say that there will be some meaningful lines of defense. For now, at least, state leaders and legislatures can continue to pass and implement stronger climate rules. Other nations could step up their efforts to cut emissions and assert themselves as global leaders on climate.
Private industry will likely continue to invest in and build businesses in climate tech and clean energy, since solar, wind, batteries, and EVs have proved themselves as competitive industries. And technological progress can occur no matter who is sitting in the round room on Pennsylvania Avenue, since researchers continue striving to develop cleaner, cheaper ways of producing our energy, food, and goods.
By any measure, the job of addressing climate change is now much harder. Nothing, however, has changed about the stakes.
Our world doesn’t end if we surpass 2 °C, 2.5 °C, or even 3 °C, but it will steadily become a more dangerous and erratic place. Every tenth of a degree remains worth fighting for—whether two, four, or a dozen years from now—because every bit of warming that nations pull together to prevent eases future suffering somewhere.
So as the shock wears off and the despair begins to lift, the core task before us remains the same: to push for progress, whenever, wherever, and however we can.
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
Voters have elected Donald Trump to a second term in the White House.
In the days leading up to the election, I kept thinking about what four years means for climate change right now. We’re at a critical moment that requires decisive action to rapidly slash greenhouse-gas emissions from power plants, transportation, industry, and the rest of the economy if we’re going to achieve our climate goals.
The past four years have seen the US take climate action seriously, working with the international community and pumping money into solutions. Now, we’re facing a period where things are going to be very different. A Trump presidency will have impacts far beyond climate, but for the sake of this newsletter, we’ll stay focused on what four years means in the climate fight as we start to make sense of this next chapter.
Joe Biden arguably did more to combat climate change than any other American president. One of his first actions in office was rejoining the Paris climate accord—Trump pulled out of the international agreement to fight climate change during his first term in office. Biden then quickly set a new national goal to cut US carbon emissions in half, relative to their peak, by 2030.
The Environmental Protection Agency rolled out rules for power plants to slash pollution that harms both human health and the climate. The agency also announced new regulations for vehicle emissions to push the country toward EVs.
And the cornerstone of the Biden years has been unprecedented climate investment. A trio of laws—the Bipartisan Infrastructure Law, the CHIPS and Science Act, and the Inflation Reduction Act—pumped hundreds of billions of dollars into infrastructure and research, much of it on climate.
We can expect to see a dramatic shift in how the US talks about climate on the international stage. Trump has vowed to once again withdraw from the Paris agreement. Things are going to be weird at the annual global climate talks that kick off next week.
We can also expect to see efforts to undo some of Biden’s key climate actions, most centrally the Inflation Reduction Act, as my colleague James Temple covered earlier this year.
What, exactly, Trump can do will depend on whether Republicans take control of both houses of Congress. A clean sweep would open up more lanes for targeting legislation passed under Biden. (As of sending this email, Republicans have secured enough seats to control the Senate, but the House is uncertain and could be for days or even weeks.)
I don’t think the rug will be entirely pulled out from under the IRA—portions of the investment from the law are beginning to pay off, and the majority of the money has gone to Republican districts. But there will certainly be challenges to pieces, especially the EV tax credits, which Trump has been laser-focused on during the campaign.
This all adds up to a very different course on climate than what many had hoped we might see for the rest of this decade.
A Trump presidency could add 4 billion metric tons of carbon dioxide emissions to the atmosphere by 2030 over what was expected from a second Biden term, according to an analysis published in April by the website Carbon Brief (this was before Biden dropped out of the race). That projection sees emissions under Trump dropping by 28% below the peak by the end of the decade—nowhere near the 50% target set by Biden at the beginning of his term.
The US, which is currently the world’s second-largest greenhouse-gas emitter and has added more climate pollution to the atmosphere than any other nation, is now very unlikely to hit Biden’s 2030 goal. That’s basically the final nail in the coffin for efforts to limit global warming to 1.5 °C (2.7 °F) over preindustrial levels.
In the days, weeks, and years ahead we’ll be covering what this change will mean for efforts to combat climate change and to protect the most vulnerable from the dangerous world we’re marching toward—indeed, already living in. Stay tuned for more from us.
Now read the rest of The Spark
Related reading
Trump wants to unravel Biden’s landmark climate law. Read our coverage from earlier this year to see what’s most at risk.
It’s been two years since the Inflation Reduction Act was passed, ushering in hundreds of billions of dollars in climate investment. Read more about the key provisions in this newsletter from August.
Another thing
Jennifer Doudna, one of the inventors of the gene-editing tool CRISPR, says the tech could be a major tool to help address climate change and deal with the growing risks of our changing world.
The hope is that CRISPR’s ability to chop out specific pieces of DNA will make it faster and easier to produce climate-resilient crops and livestock, while avoiding the pitfalls of previous attempts to tweak the genomes of plants and animals. Read the full story from my colleague James Temple.
Keeping up with climate
Startup Redoxblox is building a technology that’s not exactly a thermal battery, but it’s not not a thermal battery either. The company raised just over $30 million to build its systems, which store energy in both heat and chemical bonds. (Heatmap)
It’s been a weird fall in the US Northeast—a rare drought has brought a string of wildfires, and New York City is seeing calls to conserve water. (New York Times)
It’s been bumpy skies this week for electric-plane startups. Beta Technologies raised over $300 million in funding, while Lilium may be filing for insolvency soon. (Canary Media)
→ The runway for futuristic electric planes is still a long one. (MIT Technology Review)
Meta’s plan to build a nuclear-powered AI data center has been derailed by a rare species of bee living on land earmarked for the project. (Financial Times)
The atmospheric concentration of methane—a powerful greenhouse gas—has been mysteriously climbing since 2007, and that growth nearly doubled in 2020. Now scientists may have finally figured out the culprits: microbes in wetlands that are getting warmer and wetter. (Washington Post)
Greenhouse-gas emissions from the European Union fell by 8% in 2023. The drop is thanks to efforts to shut down coal-fired power plants and generate more electricity from renewables like solar and wind. (The Guardian)
Four electric school buses could help officials figure out how to charge future bus fleets. A project in Brooklyn will aim to use onsite renewables and smart charging to control the costs and grid stress of EV charging depots. (Canary Media)
The world’s first barcode, designed in 1948, took more than 25 years to make it out of the lab and onto a retail package. Since then, the barcode has done much more than make grocery checkouts faster—it has remade our understanding of how physical objects can be identified and tracked, creating a new pace and set of expectations for the speed and reliability of modern commerce.
Nearly eighty years later, a new iteration of that technology, which encodes data in two dimensions, is poised to take the stage. Today’s 2D barcode is not only out of the lab but “open to a world of possibility,” says Carrie Wilkie, senior vice president of standards and technology at GS1 US.
2D barcodes encode substantially more information than their 1D counterparts. This enables them to link physical objects to a wide array of digital resources. For consumers, 2D barcodes can provide a wealth of product information, from food allergens, expiration dates, and safety recalls to detailed medication use instructions, coupons, and product offers. For businesses, 2D barcodes can enhance operational efficiencies, create traceability at the lot or item level, and drive new forms of customer engagement.
An array of 2D barcode types supports the information needs of a variety of industries. The GS1 DataMatrix, for example, is used on medication or medical devices, encoding expiration dates, batch and lot numbers, and FDA National Drug Codes. The QR Code is familiar to consumers who have used one to open a website from their phone. Adding a GS1 Digital Link URI to a QR Code enables it to serve two purposes: as both a traditional barcode for supply chain operations, enabling tracking throughout the supply chain and price lookup at checkout, and also as a consumer-facing link to digital information, like expiry dates and serial numbers.
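The dual-purpose GS1 Digital Link QR Code described above is, at bottom, a structured URI: the GTIN and qualifiers such as lot number become path segments keyed by GS1 Application Identifiers, while data attributes such as the expiry date travel as query parameters. A minimal Python sketch — the domain and product values here are illustrative, not a real product:

```python
# Sketch: build a GS1 Digital Link URI suitable for encoding in a QR Code.
# GS1 Application Identifiers used: 01 = GTIN, 10 = batch/lot,
# 21 = serial number, 17 = expiry date (YYMMDD).
# The domain and values below are made up for illustration.

def gs1_digital_link(domain, gtin, lot=None, serial=None, expiry=None):
    """Assemble a Digital Link URI: identification keys and qualifiers
    go in the path; other data attributes go in the query string."""
    parts = [f"https://{domain}/01/{gtin}"]
    if lot:
        parts.append(f"10/{lot}")
    if serial:
        parts.append(f"21/{serial}")
    uri = "/".join(parts)
    if expiry:
        uri += f"?17={expiry}"
    return uri

uri = gs1_digital_link("id.example.com", "09506000134352",
                       lot="ABC123", expiry="271031")
print(uri)
# https://id.example.com/01/09506000134352/10/ABC123?17=271031
```

Any barcode library can render the resulting string as a QR Code: a point-of-sale scanner parses the GTIN out of the path for price lookup, while a phone camera simply opens the URI and lands the consumer on product information.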
Regardless of type, however, all 2D barcodes require a business ecosystem backed by data. To capture new value from advanced barcodes, organizations must supply and manage clean, accurate, and interoperable data around their products and materials. For 2D barcodes to deliver on their potential, businesses will need to collaborate with partners, suppliers, and customers and commit to common data standards across the value chain.
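Clean, accurate data starts with the identifiers themselves. Every GTIN carries a check digit, defined in the GS1 General Specifications, that catches most keying and transposition errors — validating it is a cheap first gate before product data enters a shared system. A short Python sketch of that standard calculation:

```python
# Validate a GTIN's check digit per the GS1 algorithm:
# working right to left from the digit before the check digit,
# weight digits alternately 3, 1, 3, 1, ...; the check digit
# is whatever brings the total up to a multiple of 10.

def gtin_check_digit_ok(gtin: str) -> bool:
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False  # GTIN-8/12/13/14 are the valid lengths
    digits = [int(d) for d in gtin]
    *data, check = digits
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(data)))
    return (10 - total % 10) % 10 == check

print(gtin_check_digit_ok("4006381333931"))  # True: valid EAN-13
print(gtin_check_digit_ok("4006381333932"))  # False: wrong check digit
```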
Driving the demand for 2D barcodes
Shifting to 2D barcodes—and enabling the data ecosystems behind them—will require investment by business. Consumer engagement, compliance, and sustainability are among the many factors driving this transition.
Real-time consumer engagement: Today’s customers want to feel connected to the brands they interact with and purchase from. Information is a key element of that engagement and empowerment. “When I think about customer satisfaction,” says Leslie Hand, group vice president for IDC Retail Insights, “I’m thinking about how I can provide more information that allows them to make better decisions about their own lives and the things they buy.”
2D barcodes can help by connecting consumers to online content in real time. “If, by using a 2D barcode, you have the capability to connect to a consumer in a specific region, or a specific store, and you have the ability to provide information to that consumer about the specific product in their hand, that can be a really powerful consumer engagement tool,” says Dan Hardy, director of customer operations for HanesBrands, Inc. “2D barcodes can bring brand and product connectivity directly to an individual consumer, and create an interaction that supports your brand message at an individual consumer/product level.”
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Life-seeking, ice-melting robots could punch through Europa’s icy shell
At long last, NASA’s Europa Clipper mission is on its way. It launched on October 14 and is now en route to its target: Jupiter’s ice-covered moon Europa, whose frozen shell almost certainly conceals a warm saltwater ocean. When the spacecraft gets there, it will conduct dozens of close flybys in order to determine what that ocean is like and, crucially, where it might be hospitable to life.
Europa Clipper is still years away from its destination—it is not slated to reach the Jupiter system until 2030. But that hasn’t stopped engineers and scientists from working on what would come next if the results are promising: a mission capable of finding evidence of life itself. Read the full story.
— Robin George Andrews
GMOs could reboot chestnut trees
Living as long as a thousand years, the American chestnut tree once dominated parts of the Eastern forest canopy, with many Native American nations relying on them for food. But by 1950, the tree had largely succumbed to a fungal blight probably introduced by Japanese chestnuts.
As recently as last year, it seemed the 35-year effort to revive the American chestnut might grind to a halt. Now, American Castanea, a new biotech startup, has created more than 2,500 transgenic chestnut seedlings—likely the first genetically modified trees to be considered for federal regulatory approval as a tool for ecological restoration. Read the full story.
—Anya Kamenetz
This piece is from the latest print issue of MIT Technology Review, which is all about the weird and wonderful world of food. If you don’t already, subscribe to receive future copies once they land.
MIT Technology Review Narrated: Why Congo’s most famous national park is betting big on crypto
In an attempt to protect its forests and famous wildlife, Virunga has become the first national park to run a Bitcoin mine. But some are wondering what crypto has to do with conservation.
This is our latest story to be turned into a MIT Technology Review Narrated podcast. In partnership with News Over Audio, we’ll be making a selection of our stories available, each one read by a professional voice actor. You’ll be able to listen to them on the go or download them to listen to offline.
We’re publishing a new story each week on Spotify and Apple Podcasts, including some taken from our most recent print magazine. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Donald Trump has won the US presidential election
He’s the first president with a criminal conviction and two impeachments under his belt. (WP $)
+ The crypto industry is rejoicing at the news as bitcoin leapt to a record high. (NYT $)
+ In fact, a blockchain entrepreneur won the Ohio Senate race. (CNBC)
+ What comes next is anyone’s guess. (The Atlantic $)
2 Trump’s victory is music to Elon Musk’s ears
He’s been promised a new role as head of a new Department of Government Efficiency. (FT $)
+ Musk is being sued over his $1 million giveaways during the election campaign. (Reuters)
+ The billionaire used X as his own personal megaphone to stir up dissent. (The Atlantic $)
3 Abortion rights are now under further threat
Particularly pills sent by mail. (Vox)
+ Trump’s approach to discussing abortion has been decidedly mixed. (Bloomberg $)
4 Trump could be TikTok’s last hope for survival in the US
Now he’s stopped threatening to ban it, that is. (The Information $)
5 Perplexity is approaching a $9 billion valuation
Thanks to the company’s fourth round of funding this year. (WSJ $)
+ Microsoft has reportedly expressed interest in acquiring the AI search startup. (The Information $)
6 The iPhone could be Apple’s last major cash cow
Apple has acknowledged that its other devices may never reach the same heady heights. (FT $)
+ Nvidia has overtaken Apple as the world’s largest company. (Bloomberg $)
7 The Mozilla Foundation is getting rid of its advocacy division
The team prioritized fighting for a free and open web. (TechCrunch)
8 China plans to slam a spacecraft into an asteroid
Following in the footsteps of America’s successful 2022 mission. (Economist $)
+ Watch the moment NASA’s DART spacecraft crashed into an asteroid. (MIT Technology Review)
9 The Vatican’s anime mascot has been co-opted into AI porn
That didn’t take long. (404 Media)
10 Gigantic XXL TVs are the gift of the season
It’s cheaper than ever to fit your home out with a jumbotron screen. (CNN)
Quote of the day
“This is what happens when you mess with the crypto army.”
—Crypto twin Cameron Winklevoss celebrates the victory of blockchain entrepreneur Bernie Moreno, new Senator-elect for Ohio, in a post on X.
The big story
How covid conspiracies led to an alarming resurgence in AIDS denialism
August 2024
Several million people were listening in February when Joe Rogan falsely declared that “party drugs” were an “important factor in AIDS.” His guest on The Joe Rogan Experience, the former evolutionary biology professor turned contrarian podcaster Bret Weinstein, agreed with him.
Speaking to the biggest podcast audience in the world, the two men were promoting dangerous and false ideas—ideas that were in fact debunked and thoroughly disproved decades ago.
These comments and others like them add up to a small but unmistakable resurgence in AIDS denialism—a false collection of theories arguing either that HIV doesn’t cause AIDS or that there’s no such thing as HIV at all.
These claims had largely fallen out of favor until the coronavirus arrived. But, following the pandemic, a renewed suspicion of public health figures and agencies is giving new life to ideas that had long ago been pushed to the margins. Read the full story.
—Anna Merlan
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)
+ Full Moon Matinee is an amazing crime drama resource on YouTube: complete with some excellent acting courtesy of its host.
+ This is your sign to pick a name and cheer on random strangers during a marathon. I guarantee you’ll make their day!
+ There’s no wrong way to bake a sweet potato, but some ways are better than others.
+ Are you a screen creeper? I know I am.
What do jumping spiders find sexy? How DIY tech is offering insights into the animal mind.
Studying the minds of other animals comes with a challenge that human psychologists don’t usually face: Your subjects can’t tell you what they’re thinking.
To get answers from animals, scientists need to come up with creative experiments to learn why they behave the way they do. Sometimes this requires designing and building experimental equipment from scratch.
How ChatGPT search paves the way for AI agents
It’s been a busy few weeks for OpenAI. Alongside updates to its new Realtime API platform, which will allow developers to build apps and voice assistants more quickly, it recently launched ChatGPT search, which allows users to search the internet using the chatbot.
Both developments pave the way for the next big thing in AI: agents. These AI assistants can complete complex chains of tasks, such as booking flights. OpenAI’s strategy is to both build agents itself and allow developers to use its software to build their own agents, and voice will play an important role in what agents will look and feel like.
Melissa Heikkilä, our senior AI reporter, sat down with Olivier Godement, OpenAI’s head of product for its platform, and Romain Huet, head of developer experience, last week to hear more about the two big hurdles that need to be overcome before agents can become a reality. Read the full story.
This story is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
The must-reads
1 America is heading to the polls
Here’s how Harris and Trump will attempt to lead the US to tech supremacy. (The Information $)
+ The ‘Stop the Steal’ election denial movement is preparing to contest the vote. (WP $)
+ The muddy final polls suggest it’s still all to play for. (Vox)
2 Abortion rights are on the 2024 ballot
A lack of access to basic health care has led to the deaths of at least four women. (NY Mag $)
+ Nine states will decide whether to guarantee their residents abortion access. (Fortune)
+ If Trump wins he could ban abortion nationwide, even without Congress. (Politico)
3 Inside New York’s election day wargames
Tech, business and policy leaders gathered to thrash out potential risks. (WSJ $)
+ Violence runs throughout all aspects of this election cycle. (FT $)
4 Elon Musk’s false and misleading X election posts have billions of views
In fact, they’ve been viewed twice as much as all X’s political ads this year. (CNN)
+ Musk’s decision to hitch himself to Trump may end up backfiring, though. (FT $)
5 Meta will permit the US military to use its AI models
It’s a notable update to its previous policy, which explicitly banned its use for military purposes. (NYT $)
+ Facebook has kept a low profile during the election cycle. (The Atlantic $)
+ Inside the messy ethics of making war with machines. (MIT Technology Review)
6 The hidden danger of pirated software
It’s not just viruses you should be worried about. (404 Media)
7 Apple is weighing up expanding into smart glasses
Where Meta leads, Apple may follow. (Bloomberg $)
+ The coolest thing about smart glasses is not the AR. It’s the AI. (MIT Technology Review)
8 India’s lithium plans may have been a bit too ambitious
Reports of a major lithium reserve appear to have been massively overblown. (Rest of World)
+ Some countries are ending support for EVs. Is it too soon? (MIT Technology Review)
9 Your air fryer could be surveilling you
Household appliances are now mostly smart, and stuffed with trackers. (The Guardian)
10 How to stay sane during election week
Focus on what you can control, and try to let go of what you can’t. (WP $)
+ Here’s how election gurus are planning to cope in the days ahead. (The Atlantic $)
+ How to log off. (MIT Technology Review)
Quote of the day
“We’re in kind of the ‘throw spaghetti at the wall’ moment of politics and AI, where this intersection allows people to try new things for propaganda.”
—Rachel Tobac, chief executive of ethical hacking company SocialProof Security, tells the Washington Post why a deepfake video of Martin Luther King endorsing Donald Trump is being shared online in the closing hours of the presidential race.
The big story
The hunter-gatherer groups at the heart of a microbiome gold rush
December 2023
Over the last couple of decades, scientists have come to realize just how important the microbes that crawl all over us are to our health. But some believe our microbiomes are in crisis—casualties of an increasingly sanitized way of life. Disturbances in the collections of microbes we host have been associated with a whole host of diseases, ranging from arthritis to Alzheimer’s.
Some might not be completely gone, though. Scientists believe many might still be hiding inside the intestines of people who don’t live in the polluted, processed environment that most of the rest of us share.
They’ve been studying the feces of people like the Yanomami, an Indigenous group in the Amazon, who appear to still have some of the microbes that other people have lost. But they’re having to navigate an ethical minefield in order to do so. Read the full story.
—Jessica Hamzelou
We can still have nice things
+ Move over Moo Deng—Haggis the baby pygmy hippo is the latest internet star!
+ To celebrate the life of the late, great Quincy Jones, check out this sensational interview in which he spills the beans on everything from the Beatles’ musical shortcomings to who shot Kennedy. Thank you for the music, Quincy.
+ The color of the season? Sage green, apparently.
+ Dinosaurs are everywhere, you just need to look for them.
ChatGPT can now search the web for up-to-date answers to a user’s queries, OpenAI announced today.
Until now, ChatGPT was mostly restricted to generating answers from its training data, which is current up to October 2023 for GPT-4o, and had limited web search capabilities. Queries about general topics will still draw on this information from the model itself, but now ChatGPT will automatically search the web in response to queries about recent information such as sports, stocks, or news of the day, and can deliver rich multimedia results. Users can also manually trigger a web search, but for the most part, the chatbot will make its own decision about when an answer would benefit from information taken from the web, says Adam Fry, OpenAI’s product lead for search.
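OpenAI has not published how the model makes that routing decision, so any implementation detail is guesswork. Conceptually, though, it is a classifier over the incoming query: answer from the model, or go to the web first. A deliberately naive keyword sketch of the idea (a real system would likely have the model itself make the call):

```python
# Toy sketch of search routing: decide whether a query needs fresh
# web data or can be answered from the model's training data alone.
# This keyword heuristic is purely illustrative — not OpenAI's method.

FRESHNESS_CUES = ("today", "latest", "current", "score",
                  "stock", "news", "weather", "price")

def route(query: str) -> str:
    q = query.lower()
    if any(cue in q for cue in FRESHNESS_CUES):
        return "web_search"   # recent info: sports, stocks, news of the day
    return "model"            # general topic: answer from training data

print(route("latest news on the election"))       # web_search
print(route("explain how photosynthesis works"))  # model
```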
“Our goal is to make ChatGPT the smartest assistant, and now we’re really enhancing its capabilities in terms of what it has access to from the web,” Fry tells MIT Technology Review. The feature is available today for the chatbot’s paying users.
While ChatGPT search, as it is known, is initially available to paying customers, OpenAI intends to make it available for free later, even when people are logged out. The company also plans to combine search with its voice features and Canvas, its interactive platform for coding and writing, although these capabilities will not be available in today’s initial launch.
The company unveiled a standalone prototype of web search in July. Those capabilities are now built directly into the chatbot. OpenAI says it has “brought the best of the SearchGPT experience into ChatGPT.”
OpenAI is the latest tech company to debut an AI-powered search assistant, challenging similar tools from competitors such as Google, Microsoft, and startup Perplexity. Meta, too, is reportedly developing its own AI search engine. As with Perplexity’s interface, users of ChatGPT search can interact with the chatbot in natural language, and it will offer an AI-generated answer with sources and links to further reading. In contrast, Google’s AI Overviews offer a short AI-generated summary at the top of the website, as well as a traditional list of indexed links.
These new tools could eventually challenge Google’s 90% market share in online search. AI search is a very important way to draw more users, says Chirag Shah, a professor at the University of Washington, who specializes in online search. But he says it is unlikely to chip away at Google’s search dominance. Microsoft’s high-profile attempt with Bing barely made a dent in the market, Shah says.
Instead, OpenAI is trying to create a new market for more powerful and interactive AI agents, which can take complex actions in the real world, Shah says.
The new search function in ChatGPT is a step toward these agents.
It can also deliver highly contextualized responses that take advantage of chat histories, allowing users to go deeper in a search. Currently, ChatGPT search is able to recall conversation histories and continue the conversation with questions on the same topic.
ChatGPT itself can also remember things about users that it can use later—sometimes it does this automatically, or you can ask it to remember something. Those “long-term” memories affect how it responds to chats. Search doesn’t have this yet—a new web search starts from scratch—but it should get this capability in the “next couple of quarters,” says Fry. When it does, OpenAI says the chatbot will be able to deliver far more personalized results based on what it knows.
“Those might be persistent memories, like ‘I’m a vegetarian,’ or it might be contextual, like ‘I’m going to New York in the next few days,’” says Fry. “If you say ‘I’m going to New York in four days,’ it can remember that fact and the nuance of that point,” he adds.
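Fry’s distinction suggests two kinds of entries: persistent facts that live indefinitely and contextual ones that should expire on their own. How OpenAI actually stores or applies memories is not public; a hypothetical sketch of the split might look like this:

```python
# Hypothetical sketch of the two memory kinds Fry describes:
# persistent facts ("I'm a vegetarian") live indefinitely, while
# contextual ones ("I'm going to New York in four days") expire.
# How OpenAI actually implements memory is not public.

import time

class MemoryStore:
    def __init__(self):
        self.entries = []  # list of (text, expires_at-or-None)

    def remember(self, text, ttl_seconds=None):
        # ttl_seconds=None marks a persistent memory
        expires = time.time() + ttl_seconds if ttl_seconds else None
        self.entries.append((text, expires))

    def active(self):
        # Only unexpired memories should shape a personalized search
        now = time.time()
        return [t for t, exp in self.entries if exp is None or exp > now]

store = MemoryStore()
store.remember("I'm a vegetarian")                              # persistent
store.remember("I'm going to New York", ttl_seconds=4 * 86400)  # contextual
# A personalized search could prepend store.active() to the query context.
print(store.active())
```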
To help develop ChatGPT’s web search, OpenAI says it leveraged its partnerships with news organizations such as Reuters, the Atlantic, Le Monde, the Financial Times, Axel Springer, Condé Nast, and Time. However, its results include information not only from these publishers, but any other source online that does not actively block its search crawler.
It’s a positive development that ChatGPT will now be able to retrieve information from these reputable online sources and generate answers based on them, says Suzan Verberne, a professor of natural-language processing at Leiden University, who has studied information retrieval. It also allows users to ask follow-up questions.
But despite the enhanced ability to search the web and cross-check sources, the tool is not immune from the persistent tendency of AI language models to make things up or get it wrong. When MIT Technology Review tested the new search function and asked it for vacation destination ideas, ChatGPT suggested “luxury European destinations” such as Japan, Dubai, the Caribbean islands, Bali, the Seychelles, and Thailand. It offered as a source an article from the Times, a British newspaper, which listed these locations as well as those in Europe as luxury holiday options.
“Especially when you ask about untrue facts or events that never happened, the engine might still try to formulate a plausible response that is not necessarily correct,” says Verberne. There is also a risk that misinformation might seep into ChatGPT’s answers from the internet if the company has not filtered its sources well enough, she adds.
Another risk is that the current push to access the web through AI search will disrupt the internet’s digital economy, argues Benjamin Brooks, a fellow at Harvard University’s Berkman Klein Center, who previously led public policy for Stability AI, in an op-ed published by MIT Technology Review today.
“By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and ‘eyeballs’ they need to survive,” Brooks writes.