
Ray Kurzweil: Technology will let us fully realize our humanity

By the end of this decade, AI will likely surpass humans at all cognitive tasks, igniting the scientific revolution that futurists have long imagined. Digital scientists will have perfect memory of every research paper ever published and think a million times faster than we can. Our plodding progress in fields like robotics, nanotechnology, and genomics will become a sprint. Within the lifetimes of most people alive today, society will achieve radical material abundance, and medicine will conquer aging itself. But our destiny isn’t a hollow Jetsons future of gadgetry and pampered boredom. By freeing us from the struggle to meet the most basic needs, technology will serve our deepest human aspirations to learn, create, and connect.

This sounds fantastically utopian, but humans have made such a leap before. Our hunter-gatherer ancestors lived on a razor’s edge of precarity. Every winter was a battle with starvation. Violence or infection likely killed most people before age 30. The constant struggle to survive left little opportunity for invention or philosophy. But the discovery of agriculture afforded just enough stability to create a feedback loop. Material surplus meant some could develop skills that created even larger surpluses. In a blink of evolutionary time, civilization appeared—literature, law, science, engineering. While modern life can often feel like a rat race, to our paleolithic ancestors, we would seem to enjoy impossible abundance and freedom.

What will the next leap look like? One of the first shifts will be in learning. From the ancient Greeks through the Enlightenment, education’s primary goal was to nourish the mind and cultivate virtue. But the Industrial Revolution reframed education as training for economic success in an increasingly technical society. Today, kids are told from an early age that they have to study for instrumental reasons—to get into a good college, to get into a good grad school, to get a good job. All too often, this deadens their natural curiosity and love of learning.

As superhuman AI makes most goods and services so abundant as to be almost free, the need to structure our lives around jobs will fade away. We’ll then be free to learn for its own sake—nurturing the knowledge and wisdom that define our humanity. And learning itself will become vastly richer. Instead of just reading about Rome in dry text, you’ll be able to explore the Forum in virtual reality and debate an AI Cicero trained on the original’s speeches. Instead of getting lost in a crowded lecture hall, you’ll work one on one with a supremely patient digital tutor that’s been trained by the greatest teachers on Earth and knows exactly how you learn best. 

AI tools will also supercharge your creativity. Today, expressing your artistic impulses requires both technical skill and resources—for films and games, sometimes hundreds of millions of dollars. These bottlenecks keep countless brilliant ideas trapped in people’s heads, and we are all poorer for it. But systems like Midjourney and Sora let us glimpse a different future. You’ll be able to speak a painting into being like a muse in Rembrandt’s ear. Or you’ll hum a tune and then work with a digital Wagner to orchestrate it into a symphony.

Thanks to this creative revolution, the coming medical breakthroughs won’t just offer longer lives but fuller ones, enriched by all the art, music, literature, film, and games created by humanity during those extra years. Most important, you’ll share all this with the people you love most. Imagine being healthy as you watch your great-grandchildren grow into adults! And material abundance will ease economic pressures and afford families the quality time together they’ve long yearned for. 

This is the profound leap that awaits us—a future where our technological wonders don’t diminish our humanity but allow it to flourish.


Ray Kurzweil is a technologist and futurist and the author, most recently, of The Singularity Is Nearer: When We Merge with AI. The views represented here are his own.

What will AI mean for economic inequality?

Prominent AI researchers expect the arrival of artificial general intelligence anywhere between “the next couple of years” and “possibly never.” At the same time, leading economists disagree about the potential impact of AI: Some anticipate a future of perpetually accelerating productivity, while others project more modest gains. But most experts agree that technological advancement, however buoyant, is no guarantee that everyone benefits. 

And unfortunately, even though some of the most notable AI R&D efforts declare that making sure everyone benefits is a key goal or guiding principle, ensuring that AI helps create a more inclusive future remains one of the least invested-in areas of AI governance. This might seem natural given the state of the field: The impact AI will have on labor and inequality is still highly uncertain, making it difficult to design interventions. But we know at least some of the factors that will influence the interplay between AI and inequality over the next few decades. Paying attention to those can help us make the idea that AI will benefit everyone into more than just a pipe dream.

Because they’re largely driven by the private sector, AI development and use are heavily influenced by the incentive structures of the world’s economies. And if there is something important that can be predicted with reasonable certainty about those economies, it is their future demographic composition. There is a stark divide between higher-income countries, whose populations are aging rapidly and will shrink without migration, and low- and lower-middle income countries, which will continue to grow for the rest of the century thanks to the excess of births over deaths.

What does this have to do with AI? AI development is concentrated in the aging countries, and thus it will follow the path set by the realities, needs, and incentives in those places. Aging countries are seeing the ratio of working-age people to retirees collapse, making it more difficult to sustain pension schemes and contain health-care costs. Countries looking to maintain their retirees’ living standards and their overall economic dynamism will seek ways to expand their effective labor force, be that with humans or with artificial agents. Limited (and likely highly unpopular) gains could come from increasing the retirement age. More sizable gains could come from immigration. But keeping the ratio of the working-age to retiree populations constant would require a significant increase in immigration to the higher-income countries. Widespread anti-immigration sentiment makes that seem unlikely, though opinions could change relatively quickly when people are faced with the prospect of diminishing pensions and rising health-care costs.  
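The arithmetic behind that squeeze is simple to sketch. The population figures below are invented for illustration, not demographic projections; they just show how a falling worker-to-retiree ratio translates into the scale of extra workers (human or artificial) needed to hold it steady:

```python
# Toy old-age support ratio: working-age people per retiree.
# All population figures are made up for illustration.

def support_ratio(working_age: int, retirees: int) -> float:
    return working_age / retirees

today = support_ratio(40_000_000, 10_000_000)   # 4.0 workers per retiree
later = support_ratio(36_000_000, 15_000_000)   # 2.4 after aging

# Extra workers needed later to restore the original 4:1 ratio:
extra_needed = 4.0 * 15_000_000 - 36_000_000
print(today, later, extra_needed)  # 4.0 2.4 24000000.0
```

Even in this toy case, restoring the old ratio requires growing the workforce by two-thirds, which is the gap that either migration or automation ends up filling.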

If overly restrictive immigration policies do not relax in rich countries, we will likely see the economic incentives to fill labor gaps with AI go into overdrive over the next few decades. It might seem on the surface that this won’t exacerbate inequality if there are fewer people than available jobs. But if the trend is associated with an uneven distribution of gains and losses, increasingly precarious employment, excessive surveillance of workers, and digitization of their know-how without adequate compensation, we should expect a spike in inequality. 

And even if the efforts to replace labor with AI unfold incredibly well for the populations of rich countries, they might dramatically deepen inequality between countries. For the rest of the 21st century, lower-income countries will continue to have young, growing populations in need not of labor-replacing tech, but of gainful employment. The problem is that machines invented to fill in for missing workers in countries with labor shortages often quickly spread even to countries where unemployment is in the double digits and the majority of the working population is employed by unregistered informal businesses. That is how we find self-service kiosks in South African restaurants and Indian airports, replacing formal-sector jobs in these and many more countries struggling to create enough of them. 

In such a world, many beneficial applications of AI could remain relatively underdeveloped compared with the merely labor-saving ones. For example, efforts to develop AI for climate-change resilience, early prediction of natural disasters, or affordable personalized tutoring might end up taking a back seat to projects geared to cutting labor costs in retail, hospitality, and transportation. Deliberate, large-scale efforts by governments, development banks, and philanthropies will be needed to make sure AI is used to help address the needs of poorer countries, not only richer ones. The budgets for such efforts are currently quite small, leaving AI on its default path—which is far from inclusive. 

But default is not destiny. We could choose to channel more public R&D efforts toward pressing global challenges like accelerating the green transition and improving educational outcomes. We could invest more in creating and supporting AI development hubs in lower-income countries. Policy choices that allow for greater labor mobility would help create a more balanced distribution of the working-age population between countries and relieve the economic pressures that would drive commercial AI to displace jobs. If we do none of that, distorted incentives will continue to shape this powerful technology, leading to profound negative consequences not only for lower-income countries but for everyone. 


Katya Klinova is the head of data and AI at UN Global Pulse, the secretary-general’s innovation lab. The views represented here are her own.

Maybe you will be able to live past 122

The UK’s Office for National Statistics has an online life expectancy calculator. Enter your age and sex, and the website will, using national averages, spit out the age at which you can expect to pop your clogs. For me, that figure comes out at 88 years old.
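Under the hood, a tool like this is essentially a lookup in a period life table: average remaining years for each age and sex. A minimal sketch of the idea, with invented numbers rather than real ONS data:

```python
# Hypothetical life-expectancy lookup (values are invented, not ONS data).
# LIFE_TABLE maps (sex, current_age) -> average remaining years.

LIFE_TABLE = {
    ("F", 30): 55.5, ("F", 40): 45.5,
    ("M", 30): 51.5, ("M", 40): 42.5,
}

def expected_age_at_death(sex: str, age: int) -> float:
    """Age a person of this sex and age can 'expect' to reach, on average."""
    return age + LIFE_TABLE[(sex, age)]

print(expected_age_at_death("F", 40))  # 85.5
```

The real calculator also reports the spread around that average, which is where the one-in-four and one-in-ten figures later in this piece come from.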

That’s not too bad, I figure, given that globally, life expectancy is around 73. But I’m also aware that this is a lowball figure for many in the longevity movement, which has surged in recent years. When I interview a scientist, doctor, or investor in the field, I always like to ask about personal goals. I’ve heard all sorts. Some have told me they want an extra decade of healthy life. Many want to get to 120, close to the current known limit of human age. Others have told me they want to stick around until they’re 200. And some have told me they don’t want to put a number on it; they just want to live for as long as they possibly can—potentially indefinitely.

How far can they go? This is a good time to ask the question. The longevity scene is having a moment, thanks to a combination of scientific advances, public interest, and an unprecedented level of investment. A few key areas of research suggest that we might be able to push human life spans further, and potentially reverse at least some signs of aging.

Take, for example, the concept of cellular reprogramming. Nobel Prize–winning research has shown it is possible to return adult cells to a “younger” state more like that of a stem cell. Billions of dollars have been poured into trying to transform this discovery into a therapy that could wind back the age of a person’s cells and tissues, potentially restoring some elements of youth.

Many other avenues are being explored, including a diabetes drug that could have broad health benefits; drugs based on a potential anti-aging compound discovered in the soil of Rapa Nui (Easter Island); attempts to rejuvenate the immune system; gene therapies designed to boost muscle or extend the number of times our cells can divide; and many, many more. Other researchers are pursuing ways to clear out the aged, worn-out cells in our bodies. These senescent cells appear to pump out chemicals that harm the surrounding tissues. Around eight years ago, scientists found that mice cleared of senescent cells lived 25% longer than untreated ones. They also had healthier hearts and took much longer to develop age-related diseases like cancer and cataracts. They even looked younger.

Unfortunately, human trials of senolytics—drugs that target senescent cells—haven’t been quite as successful. Unity Biotechnology, a company cofounded by leading researchers in the field, tested such a drug in people with osteoarthritis. In 2020, the company officially abandoned that drug after it was found to be no better than a placebo in treating the condition.

That doesn’t mean we won’t one day figure out how to treat age-related diseases, or even aging itself, by targeting senescent cells. But it does illustrate how complicated the biology of aging is. Researchers can’t even agree on what the exact mechanisms of aging are and which they should be targeting. Debates continue to rage over how long it’s possible for humans to live—and whether there is a limit at all.

Still, we are getting better at testing potential therapies in more humanlike models. We’re finding new and improved ways to measure the aging process itself. The X Prize is offering $101 million to researchers who find a way to restore at least 10 years of “muscle, cognitive, and immune function” in 65- to 80-year-olds with a treatment that takes one year or less to administer. Given that the competition runs for seven years, it’s a tall order; Jamie Justice, executive director of the X Prize’s health-span domain, told me she initially pushed back on the challenging goal and told the organization’s founder, Peter Diamandis, there was “no way” researchers could achieve it. But we’ve seen stranger things in science. 

Some people are banking on this kind of progress. Not just the billionaires who have already spent millions of dollars and a significant chunk of their time on strategies that might help them defy aging, but also the people who have opted for cryopreservation. There are hundreds of bodies in storage—bodies of people who believed they might one day be reanimated. For them, the hopes are slim. I asked Justice whether she thought they stood a chance at a second life. “Honest answer?” she said. “No.”

It looks likely that something will be developed in the coming decades that will help us live longer, in better health. Not an elixir for eternal life, but perhaps something—or a few somethings—that can help us stave off some of the age-related diseases that tend to kill a lot of us. Such therapies may well push life expectancy up. I don’t feel we need a massive increase, but perhaps I’ll feel differently when I’m approaching 88.

The ONS website gives me a one in four chance of making it to 96, and a one in 10 chance of seeing my 100th birthday. To me, that sounds like an impressive number—as long as I get there in semi-decent health.

I’d still be a long way from the current record of 122 years. But it might just be that there are some limitations we must simply come to terms with—as individuals and in society at large. In a 2017 paper making the case for a limit to the human life span, scientists Jan Vijg and Eric Le Bourg wrote something that has stuck with me—and is worth bearing in mind when considering the future of human longevity: “A species does not need to live for eternity to thrive.” 

AI and the future of sex

The power of pornography doesn’t lie in arousal but in questions. What is obscene? What is ethical or safe to watch? 

We don’t have to consume or even support it, but porn will still demand answers. The question now is: What is “real” porn? 

Anti-porn crusades have been at the heart of the US culture wars for generations, but by the start of the 2000s, the issue had lost its hold. Smartphones made porn too easy to spread and too hard to muzzle. Porn became a politically sticky issue, too entangled with free speech and evolving tech. An uneasy truce was made: As long as the imagery was created by consenting adults and stayed on the other side of paywalls and age verification systems, it was to be left alone. 

But today, as AI porn infiltrates dinner tables, PTA meetings, and courtrooms, that truce may not endure much longer. The issue is already making its way back into the national discourse; Project 2025, the Heritage Foundation–backed policy plan for a future Republican administration, proposes the criminalization of porn and the arrest of its creators.

But what if porn is wholly created by an algorithm? In that case, whether it’s obscene, ethical, or safe becomes secondary to a deeper question: What does it mean for porn to be “real”—and what will the answer demand from all of us? 

During my time as a filmmaker in adult entertainment, I witnessed seismic shifts: the evolution from tape to digital, the introduction of new HIV preventions, and the disruption of the industry by free streaming and social media. An early tech adopter, porn was an industry built on desires, greed, and fantasy, propped up by performances and pharmaceuticals. Its methods and media varied widely, but the one constant was its messy humanity. Until now.

When AI-generated pornography first emerged, it was easy to keep a forensic distance from the early images and dismiss them as a parlor trick. They were laughable and creepy: cheerleaders with seven fingers and dead, wonky eyes. Then, seemingly overnight, they reached uncanny photorealism. Synthetic erotica, like hentai and CGI, has existed for decades, but I had never seen porn like this. These were the hallucinations of a machine trained on a million pornographic images, both the creation of porn and a distillation of it. Femmes fatales with psychedelic genitalia, straight male celebrities in same-sex scenes, naked girls in crowded grocery stores—posted not in the dark corners of the internet but on social media. The images were glistening and warm, raising fresh questions about consent and privacy. What would these new images turn us into?

In September of 2023, the small Spanish town of Almendralejo was forced to confront this question. Twenty girls returned from summer break to find naked selfies they’d never taken being passed around at school. Boys had rendered the images using an AI “nudify” app with just a few euros and a yearbook photo. The girls were bullied and blackmailed, suffered panic attacks and depression. The youngest was 11. The school and parents were at a loss. The tools had arrived faster than the speed of conversation, and they did not discriminate. By the end of the school year, similar cases had spread to Australia, Quebec, London, and Mexico. Then explicit AI images of Taylor Swift flooded social media. If she couldn’t stop this, a 15-year-old from Michigan stood no chance.

The technology behind pornography never slows down, regardless of controversies. When students return to school this fall, it will be in the shadow of AI video engines like Sora and Runway’s Gen-3, which produce realistic video from text prompts and photographs. If still images have caused so much global havoc, imagine what video could do and where the footage could end up. 

As porn becomes more personal, it’s also becoming more personalized. Users can now check boxes on a list of options as long as the Cheesecake Factory menu to create their ideal scenes: categories like male, female, and trans; ages from 18 to 90; breast and penis size; details like tan lines and underwear color; backdrops like grocery stores, churches, the Eiffel Tower, and Stonehenge; even weather, like tornadoes. It may be 1s and 0s, but AI holds no binary; it holds no judgment or beauty standards. It can render seldom-represented bodies, like those of mature, transgender, and disabled people, in all pairings. Hyper-customizable porn will no longer require performers—only selections and an answer to the question “What is it that I really like?” While Hollywood grapples with the ethics of AI, artificial porn films will become a reality. Celebrities may boost their careers by promoting their synthetic sex tapes on late-night shows.

The progress of AI porn may shift our memories, too. AI is already used to extend home movies and turn vintage photos into live-action scenes. What happens when we apply this to sex? Early sexual images etch themselves on us: glimpses of flesh from our first crush, a lost lover, a stranger on the bus. These erotic memories depend on the specific details for their power: a trail of hair, panties in a specific color, sunlight on wet lips, my PE teacher’s red gym shorts. They are ideal for AI prompts. 

Porn and real-life sex affect each other in a loop. If people become accustomed to getting exactly what they want from erotic media, this could further affect their expectations of relationships. A first date may have another layer of awkwardness if each party has already seen an idealized, naked digital doppelganger of the other. 

Despite (or because of) this blurring of lines, we may actually start to see a genre of “ethical porn.” Without the need for sets, shoots, or even performers, future porn studios might not deal with humans at all. This may be appealing for some viewers, who can be sure that new actors are not underage, trafficked, or under the influence.

A synergy has been brewing since the ’90s, when CD-ROM games, life-size silicone dolls, and websites introduced “interactivity” to adult entertainment. Thirty years later, AI chatbot “partners” and cheaper, lifelike sex dolls are more accessible than ever. Porn tends to merge all available tech toward complete erotic immersion. The realism of AI models has already broken the dam to the uncanny valley. Soon, these avatars will be powered by chatbots and embodied in three-dimensional prosthetics, all existing in virtual-reality worlds. What follows will be the fabled sex robot. 

So what happens when we’ve removed the “messy humanity” from sex itself? Porn is defined by the needs of its era. Ours has been marked by increasing isolation. The pandemic further conditioned us to digitize our most intimate moments, bringing us FaceTime hospital visits and weddings, and caused a deep discharge of our social batteries. Adult entertainment may step into that void. The rise of AI-generated porn may be a symptom of a new synthetic sexuality, not the cause. In the near future, we may find this porn arousing because of its artificiality, not in spite of it.

Leo Herrera is a writer and artist. He explores how tech intersects with sex and culture on Substack at Herrera Words.

Move over, text: Video is the new medium of our lives

The other day I idly opened TikTok to find a video of a young woman refinishing an old hollow-bodied electric guitar.

It was a montage of close-up shots—looking over her shoulder as she sanded and scraped the wood, peeled away the frets, expertly patched the cracks with filler, and then spray-painted it a radiant purple. She compressed days of work into a tight 30-second clip. It was mesmerizing.

Of course, that wasn’t the only video I saw that day. In barely another five minutes of swiping around, I saw a historian discussing the songs Tolkien wrote in The Lord of the Rings; a sailor puzzling over a capsized boat he’d found deep at sea; a tearful mother talking about parenting a child with ADHD; a Latino man laconically describing a dustup with his racist neighbor; and a linguist discussing how Gen Z uses video-game metaphors in everyday life.

I could go on. I will! And so, probably, will you. This is what the internet looks like now. It used to be a preserve of text and photos—but increasingly, it is a forest of video.

This is one of the most profound technology shifts that will define our future: We are entering the age of the moving image.

For centuries, when everyday people had to communicate at a distance, they really had only two options. They could write something down; they could send a picture. The moving image was too expensive to shoot, edit, and disseminate. Only pros could wield it.

The smartphone, the internet, and social networks like TikTok have rapidly and utterly transformed this situation. It’s now common, when someone wants to hurl an idea into the world, not to pull out a keyboard and type but to turn on a camera and talk. For many young people, video might be the prime way to express ideas.

As media thinkers like Marshall McLuhan have intoned, a new medium changes us. It changes the way we learn, the way we think—and what we think about. When mass printing emerged, it helped create a culture of news, mass literacy, and bureaucracy, and—some argue—the very idea of scientific evidence. So how will mass video shift our culture?

For starters, I’d argue, it is helping us share knowledge that used to be damnably hard to capture in text. I’m a long-distance cyclist, for example, and if I need to fix my bike, I don’t bother reading a guide. I look for a video explainer. If you’re looking to express—or absorb—knowledge that’s visual, physical, or proprioceptive, the moving image nearly always wins. Athletes don’t read a textual description of what they did wrong in the last game; they watch the clips. Hence the wild popularity, on video platforms, of instructional video—makeup tutorials, cooking demonstrations. (Or even learn-to-code material: I learned Python by watching coders do it.)

Video also is no longer about mere broadcast, but about conversation—it’s a way to respond to others, notes Raven Maragh-Lloyd, the author of Black Networked Resistance and a professor of film and media studies at Washington University. “We’re seeing a rise of audience participation,” she says, including people doing “duets” on TikTok or response videos on YouTube. Everyday creators see video platforms as ways to talk back to power.

There’s also an increasingly sophisticated lexicon of visual styles. Today’s video creators riff on older film aesthetics to make their points. Brianna Wiens, an assistant professor of digital media and rhetoric at the University of Waterloo, says she admired how a neuroscientist used stop-motion video, a technique from the early days of film, to produce TikTok discussions of vaccines during the height of the covid-19 pandemic. Or consider the animated GIF, which channels the “zoetrope” of the 1800s, looping a short moment in time to examine over and over.

Indeed, as video becomes more woven into the vernacular of daily life, it’s both expanding and contracting in size. There are streams on Twitch where you can watch someone for hours—and viral videos where someone compresses an idea into mere seconds. Those latter ones have a particular rhetorical power because they’re so ingestible. “I was teaching a class called Digital Lives, and my students were like, ‘If there’s a video over seven seconds, we’re not watching it,’” Wiens says, laughing.

Are there dangers ahead as use of the moving image grows? Possibly. Maybe it will too powerfully reward people with the right visual and physical charisma. (Not necessarily a novel danger: Text and radio had their own versions.) More subtly, video is technologically still adolescent. It’s not yet easy to search, or to clip and paste and annotate and collate—to use video for quietly organizing our thoughts, the way we do with text. Until those tool sets emerge (and you can see that beginning), its power will be limited. Lastly, maybe the moving image will become so common and so go-to that it’ll kill off print culture.

Media scholars are not terribly stressed about this final danger. New forms of media rarely kill off older ones. Indeed, as the late priest and scholar Walter Ong pointed out, creating television and radio requires writing plenty of text—all those scripts. Today’s moving-media culture is possibly even more saturated with writing. Videos on Instagram and TikTok often include artfully arranged captions, “diegetic” text commenting on the action, or data visualizations. You read while you watch; write while you shoot.

“We’re getting into all kinds of interesting hybrids and relationships,” notes Lev Manovich, a professor at the City University of New York. The tool sets for sculpting and editing video will undoubtedly improve too, perhaps using AI to help auto-edit, redact, summarize. 

One firm, Reduct, already offers a clever trick: You alter a video by editing the transcript. Snip out a sentence, and it snips out the related visuals. Public defenders use it to parse and edit police videos. They’re often knee-deep in the stuff—the advent of body cameras worn by officers has produced an ocean of footage, as Reduct’s CEO, Robert Ochshorn, tells me. 
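The mechanics behind that trick are worth a sketch: a transcript whose words carry timestamps doubles as an edit list for the footage. This is my own toy illustration of the idea, not Reduct's actual API:

```python
# Transcript-driven video editing, sketched: each transcript word carries
# its start/end timestamps, so deleting words from the transcript yields
# the list of video time ranges to keep.

def keep_segments(words, deleted_indices):
    """Return (start, end) video ranges covering the words NOT deleted.

    `words` is a list of (text, start_sec, end_sec) tuples.
    Adjacent surviving ranges are merged into one segment.
    """
    segments = []
    for i, (text, start, end) in enumerate(words):
        if i in deleted_indices:
            continue
        if segments and abs(segments[-1][1] - start) < 1e-6:
            segments[-1] = (segments[-1][0], end)  # extend the previous range
        else:
            segments.append((start, end))
    return segments

transcript = [("I", 0.0, 0.2), ("never", 0.2, 0.5),
              ("said", 0.5, 0.8), ("that", 0.8, 1.0)]
print(keep_segments(transcript, deleted_indices={1}))
# [(0.0, 0.2), (0.5, 1.0)]
```

Deleting word 1 ("never") splits the keep-list into two ranges; a renderer then plays only those ranges, and the spoken sentence changes accordingly—which is exactly why careful provenance matters when such tools meet evidentiary footage.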

Meanwhile, generative AI will make it easier to create a film out of pure imagination. This means, of course, that we’ll see a new flood of visual misinformation. We’ll need to develop a sharper culture of finding the useful amid the garbage. It took print a couple of centuries to do that, as scholars of the book will tell you—centuries during which the printing press helped spark untold war and upheaval. We’ll be living through the same process with the moving image.

So strap yourselves in. Whatever else happens, it’ll be interesting. 

Clive Thompson is the author of Coders: The Making of a New Tribe and the Remaking of the World.

Beyond gene-edited babies: the possible paths for tinkering with human evolution

In 2016, I attended a large meeting of journalists in Washington, DC. The keynote speaker was Jennifer Doudna, who just a few years before had co-invented CRISPR, a revolutionary method of changing genes that was sweeping across biology labs because it was so easy to use. With its discovery, Doudna explained, humanity had achieved the ability to change its own fundamental molecular nature. And that capability came with both possibility and danger. One of her biggest fears, she said, was “waking up one morning and reading about the first CRISPR baby”—a child with deliberately altered genes baked in from the start.  

As a journalist specializing in genetic engineering—the weirder the better—I had a different fear. A CRISPR baby would be a story of the century, and I worried some other journalist would get the scoop. Gene editing had become the biggest subject on the biotech beat, and once a team in China had altered the DNA of a monkey to introduce customized mutations, it seemed obvious that further envelope-pushing wasn’t far off. 

If anyone did create an edited baby, it would raise moral and ethical issues, among the profoundest of which, Doudna had told me, was that doing so would be “changing human evolution.” Any gene alterations made to an embryo that successfully developed into a baby would get passed on to any children of its own, via what’s known as the germline. What kind of scientist would be bold enough to try that? 

Two years and nearly 8,000 miles in an airplane seat later, I found the answer. At a hotel in Guangzhou, China, I joined a documentary film crew for a meeting with a biophysicist named He Jiankui, who appeared with a retinue of advisors. During the meeting, He was immensely gregarious and spoke excitedly about his research on embryos of mice, monkeys, and humans, and about his eventual plans to improve human health by adding beneficial genes to people’s bodies from birth. Still imagining that such a step must lie at least some way off, I asked if the technology was truly ready for such an undertaking. 

“Ready,” He said. Then, after a laden pause: “Almost ready.”

Four weeks later, I learned that he’d already done it, when I found data that He had placed online describing the genetic profiles of two gene-edited human fetuses—that is, “CRISPR babies” in gestation—as well as an explanation of his plan, which was to create humans immune to HIV. He had targeted a gene called CCR5, which in some people has a variation known to protect against HIV infection. It’s rare for numbers in a spreadsheet to make the hair on your arms stand up, although maybe some climatologists feel the same way seeing the latest Arctic temperatures. It appeared that something historic—and frightening—had already happened. In our story breaking the news that same day, I ventured that the birth of genetically tailored humans would be something between a medical breakthrough and the start of a slippery slope of human enhancement.

For his actions, He was later sentenced to three years in prison, and his scientific practices were roundly excoriated. The edits he made, on what proved to be twin girls (and a third baby, revealed later), had in fact been carelessly imposed, almost in an out-of-control fashion, according to his own data. And I was among a flock of critics—in the media and academia—who would subject He and his circle of advisors to Promethean-level torment via a daily stream of articles and exposés. Just this spring, Fyodor Urnov, a gene-editing specialist at the University of California, Berkeley, lashed out on X, calling He a scientific “pyromaniac” and comparing him to a Balrog, a demon from J.R.R. Tolkien’s The Lord of the Rings. It could seem as if He’s crime wasn’t just medical wrongdoing but daring to take the wheel of the very processes that brought you, me, and him into being. 

Futurists who write about the destiny of humankind have imagined all sorts of changes. We’ll all be given auxiliary chromosomes loaded with genetic goodies, or maybe we’ll march through life as members of pods of identical clones. Perhaps sex will become outdated as we reproduce exclusively through our stem cells. Or human colonists on another planet will be isolated so long that they become their own species. The thing about He’s idea, though, is that he drew it from scientific realities close at hand. Just as some gene mutations cause awful, rare diseases, others are being discovered that lend a few people the ability to resist common ones, like diabetes, heart disease, Alzheimer’s—and HIV. Such beneficial, superpower-like traits might spread to the rest of humanity, given enough time. But why wait 100,000 years for natural selection to do its job? For a few hundred dollars in chemicals, you could try to install these changes in an embryo in 10 minutes. That is, in theory, the easiest way to go about making such changes—it’s just one cell to start with.

Editing human embryos is restricted in much of the world—and making an edited baby is flatly illegal in most countries surveyed by legal scholars. But advancing technology could render the embryo issue moot. New ways of adding CRISPR to the bodies of people already born—children and adults—could let them easily receive changes as well. Indeed, if you are curious what the human genome could look like in 125 years, it’s possible that many people will be the beneficiaries of multiple rare, but useful, gene mutations currently found in only small segments of the population. These could protect us against common diseases and infections, but eventually they could also yield frank improvements in other traits, such as height, metabolism, or even cognition. These changes would not be passed on genetically to people’s offspring, but if they were widely distributed, they too would become a form of human-directed self-evolution—easily as big a deal as the emergence of computer intelligence or the engineering of the physical world around us.

I was surprised to learn that even as He’s critics take issue with his methods, they see the basic stratagem as inevitable. When I asked Urnov, who helped coin the term “genome editing” in 2005, what the human genome could be like in, say, a century, he readily agreed that improvements using superpower genes will probably be widely introduced into adults—and embryos—as the technology to do so improves. But he warned that he doesn’t necessarily trust humanity to do things the right way. Some groups will probably obtain the health benefits before others. And commercial interests could eventually take the trend in unhelpful directions—much as algorithms keep his students’ noses pasted, unnaturally, to the screens of their mobile phones. “I would say my enthusiasm for what the human genome is going to be in 100 years is tempered by our history of a lack of moderation and wisdom,” he said. “You don’t need to be Aldous Huxley to start writing dystopias.”

Editing early

At around 10 p.m. Beijing time, He’s face flicked into view over the Tencent videoconferencing app. It was May 2024, nearly six years after I had first interviewed him, and he appeared in a loftlike space with a soaring ceiling and a wide-screen TV on a wall. Urnov had warned me not to speak with He, since it would be like asking “Bernie Madoff to opine about ethical investing.” But I wanted to speak to him, because he’s still one of the few scientists willing to promote the idea of broad improvements to humanity’s genes. 

Of course, it’s his fault everyone is so down on the idea. After his experiment, China formally made “implantation” of gene-edited human embryos into the uterus a crime. Funding sources evaporated. “He created this blowback, and it brought to a halt many people’s research. And there were not many to begin with,” says Paula Amato, a fertility doctor at Oregon Health and Science University who co-leads one of only two US teams that have ever reported editing human embryos in a lab.  “And the publicity—nobody wants to be associated with something that is considered scandalous or eugenic.”

After leaving prison in 2022, the Chinese biophysicist surprised nearly everyone by seeking to make a scientific comeback. At first, he floated ideas for DNA-based data storage and “affordable” cures for children who have muscular dystrophy. But then, in summer 2023, he posted to social media that he intended to return to research on how to change embryos with gene editing, with the caveat that “no human embryo will be implanted for pregnancy.” His new interest was a gene called APP, or amyloid precursor protein. It’s known that people who possess a very rare version, or “allele,” of this gene almost never develop Alzheimer’s disease.

In our video call, He said the APP gene is the main focus of his research now and that he is determining how to change it. The work, he says, is not being conducted on human embryos, but rather on mice and on kidney cells, using an updated form of CRISPR called base editing, which can flip individual letters of DNA without breaking the molecule. 

“We just want to expand the protective allele from small amounts of lucky people to maybe most people,” He told me. And if you made the adjustment at the moment an egg is fertilized, you would only have to change one cell in order for the change to take hold in the embryo and, eventually, everywhere in a person’s brain. Trying to edit an individual’s brain after birth “is as hard as delivering a person to the moon,” He said. “But if you deliver gene editing to an embryo, it’s as easy as driving home.”

In the future, He said, human embryos will “obviously” be corrected for all severe genetic diseases. But they will also receive “a panel” of “perhaps 20 or 30” edits to improve health. (If you’ve seen the sci-fi film Gattaca, it takes place in a world where such touch-ups are routine—leading to stigmatization of the movie’s hero, a would-be space pilot who lacks them.) One of these would be to install the APP variant, which involves changing a single letter of DNA. Others would protect against diabetes, and maybe cancer and heart disease. He calls these proposed edits “genetic vaccines” and believes people in the future “won’t have to worry” about many of the things most likely to kill them today.  

Is He the person who will bring about this future? Last year, in what seemed to be a step toward his rehabilitation, he got a job heading a gene center at Wuchang University of Technology, a third-tier institution in Wuhan. But He said during our call that he had already left the position. He didn’t say what had caused the split but mentioned that a flurry of press coverage had “made people feel pressured.” One item, in a French financial paper, Les Echos, was titled “GMO babies: The secrets of a Chinese Frankenstein.” Now he carries out research at his own private lab, he says, with funding from Chinese and American supporters. He has early plans for a startup company. Could he tell me names and locations? “Of course not,” he said with a chuckle. 


It could be there is no lab, just a concept. But it’s a concept that is hard to dismiss. Would you give your child a gene tweak—a swap of a single genetic letter among the 3 billion that run the length of the genome—to prevent Alzheimer’s, the mind thief that’s the seventh-leading cause of death in the US? Polls find that the American public is about evenly split on the ethics of adding disease-resistance traits to embryos. A sizable minority, though, would go further. A 2023 survey published in Science found that nearly 30% of people would edit an embryo if it enhanced the resulting child’s chance of attending a top-ranked college.

The benefits of the genetic variant He claims to be working with were discovered by the Icelandic gene-hunting company deCode Genetics. Twenty-six years ago, in 1998, its founder, a doctor named Kári Stefánsson, got the green light to obtain medical records and DNA from Iceland’s citizens, allowing deCode to amass one of the first large national gene databases. Several similar large biobanks now operate, including one in the United Kingdom, which recently finished sequencing the genomes of 500,000 volunteers. These biobanks make it possible to do computerized searches to find relationships between people’s genetic makeup and real-life differences like how long they live, what diseases they get, and even how much beer they drink. The result is a statistical index of how strongly every possible difference in human DNA affects every trait that can be measured. 
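In its simplest form, the kind of biobank search described above comes down to testing whether carriers of a given DNA variant show a trait more or less often than non-carriers. A toy sketch of that comparison, using an odds ratio over invented counts (these numbers are illustrative, not real deCode or UK Biobank data):

```python
def odds_ratio(carriers_with, carriers_without,
               noncarriers_with, noncarriers_without):
    """Odds ratio from a 2x2 table: variant carriage vs. trait status.

    Values below 1 mean carriers show the trait *less* often than
    non-carriers, i.e. the variant looks protective.
    """
    return (carriers_with / carriers_without) / (
        noncarriers_with / noncarriers_without
    )

# Hypothetical counts: 5 of 1,000 carriers develop a disease,
# versus 200 of 10,000 non-carriers.
result = odds_ratio(5, 995, 200, 9800)
print(result)  # well below 1: the variant is associated with less disease
```

Real biobank analyses run millions of such tests at once, with statistical corrections for multiple comparisons and for confounders like age and ancestry, but the underlying question per variant is the same.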

In 2012, deCode’s geneticists used the technique to study a tiny change in the APP gene and determined that the individuals who had it rarely developed Alzheimer’s. They otherwise seemed healthy. In fact, they seemed particularly sharp in old age and appeared to live longer, too. Lab tests confirmed that the change reduces the production of brain plaques, the abnormal clumps of protein that are a hallmark of the disease. 


One way evolution works is when a small change or error appears in one baby’s DNA. If the change helps that person survive and reproduce, it will tend to become more common in the species—eventually, over many generations, even universal. This process is slow, but it’s visible to science. In 2018, for example, researchers determined that the Bajau, a group indigenous to Indonesia whose members collect food by diving, possess genetic changes associated with bigger spleens. This allows them to store more oxygenated red blood cells—an advantage in their lives. 

Even though the variation in the APP gene seems hugely beneficial, it’s a change that benefits old people, way past their reproductive years. So it’s not the kind of advantage natural selection can readily act on. But we could act on it. That is what technology-assisted evolution would look like—seizing on a variation we think is useful and spreading it. “The way, probably, that enhancement will be done will be to look at the population, look at people who have enhanced capabilities—whatever those might be,” the Israeli medical geneticist Ephrat Levy-Lahad said during a gene-editing summit last year. “You are going to be using variations that already exist in the population that you already have information on.”

One advantage of zeroing in on advantageous DNA changes that already exist in the population is that their effects are pretested. The people located by deCode were in their 80s and 90s. There didn’t seem to be anything different about them—except their unusually clear minds. Their lives—as seen from the computer screens of deCode’s biobank—served as a kind of long-term natural experiment. Yet scientists could not be fully confident placing this variant into an embryo, since the benefits or downsides might differ depending on what other genetic factors are already present, especially other Alzheimer’s risk genes. And it would be difficult to run a study to see what happens. In the case of APP, it would take 70 years for the final evidence to emerge. By that time, the scientists involved would all be dead. 

When I spoke with Stefánsson last year, he made the case both for and against altering genomes with “rare variants of large effect,” like the change in APP. “All of us would like to keep our marbles until we die. There is no question about it. And if you could, by pushing a button, install the kind of protection people with this mutation have, that would be desirable,” he said. But even if the technology to make this edit before birth exists, he says, the risks of doing so seem almost impossible to gauge: “You are not just affecting the person, but all their descendants forever. These are mutations that would allow for further selection and further evolution, so this is beginning to be about the essence of who we are as a species.”

Editing everyone

Some genetic engineers believe that editing embryos, though in theory easy to do, will always be held back by these grave uncertainties. Instead, they say, DNA editing in living adults could become easy enough to be used not only to correct rare diseases but to add enhanced capabilities to those who seek them. If that happens, editing for improvement could spread just as quickly as any consumer technology or medical fad. “I don’t think it’s going to be germline,” says George Church, a Harvard geneticist often sought out for his prognostications. “The 8 billion of us who are alive kind of constitute the marketplace.” For several years, Church has been circulating what he calls “my famous, or infamous, table of enhancements.” It’s a tally of gene variants that lend people superpowers, including APP and another that leads to extra-hard bones, which was found in a family that complained of not being able to stay afloat in swimming pools. The table is infamous because some believe Church’s inclusion of the HIV-protective CCR5 variant inspired He’s effort to edit it into the CRISPR babies.

Church believes novel gene treatments for very serious diseases, once proven, will start leading the way toward enhancements and improvements to people already born. “You’d constantly be tweaking and getting feedback,” he says—something that’s hard to do with the germline, since humans take so long to grow up. Changes to adult bodies would not be passed down, but Church thinks they could easily count as a form of heredity. He notes that railroads, eyeglasses, cell phones—and the knowledge of how to make and use all these technologies—are already all transmitted between generations. “We’re clearly inheriting even things that are inorganic,” he says. 

The biotechnology industry is already finding ways to emulate the effects of rare, beneficial variants. A new category of heart drugs, for instance, mimics the effect of a rare variation in a gene, called PCSK9, that helps maintain cholesterol levels. The variation, initially discovered in a few people in the US and Zimbabwe, blocks the gene’s activity and gives them ultra-low cholesterol levels for life. The drugs, taken every few weeks or months, work by blocking the PCSK9 protein. One biotech company, though, has started trying to edit the DNA of people’s liver cells (the site of cholesterol metabolism) to introduce the same effect permanently. 

For now, gene editing of adult bodies is still challenging and is held back by the difficulty of “delivering” the CRISPR instructions to thousands, or even billions, of cells—often using viruses to carry the payloads. Organs like the brain and muscles are hard to access, and the treatments can be ordeals. Fatalities in studies aren’t unheard-of. But biotech companies are pouring dollars into new, sleeker ways to deliver CRISPR to hard-to-reach places. Some are designing special viruses that can home in on specific types of cells. Others are adopting nanoparticles similar to those used in the covid-19 vaccines, with the idea of introducing editors easily, and cheaply, via a shot in the arm.

At the Innovative Genomics Institute, a center established by Doudna in Berkeley, California, researchers anticipate that as delivery improves, they will be able to create a kind of CRISPR conveyor belt that, with a few clicks of a mouse, allows doctors to design gene-editing treatments for any serious inherited condition that afflicts children, including immune deficiencies so uncommon that no company will take them on. “This is the trend in my field. We can capitalize on human genetics quite quickly, and the scope of the editable human will rapidly expand,” says Urnov, who works at the institute. “We know that already, today—and forget 2124, this is in 2024—we can build enough CRISPR for the entire planet. I really, really think that [this idea of] gene editing in a syringe will grow. And as it does, we’re going to start to face very clearly the question of how we equitably distribute these resources.” 

For now, gene-editing interventions are so complex and costly that only people in wealthy countries are receiving them. The first such therapy to get FDA approval, a treatment for sickle-cell disease, is priced at over $2 million and requires a lengthy hospital stay. Because it’s so difficult to administer, it’s not yet being offered in most of Africa, even though that is where sickle-cell disease is most common. Such disparities are now propelling efforts to greatly simplify gene editing, including a project jointly paid for by the Gates Foundation and the National Institutes of Health that aims to design “shot in the arm” CRISPR, potentially making cures scalable and “accessible to all.” A gene editor built along the lines of the covid-19 vaccine might cost only $1,000. The Gates Foundation sees the technology as a way to widely cure both sickle-cell and HIV—an “unmet need” in Africa, it says. To do that, the foundation is considering introducing into people’s bone marrow the exact HIV-defeating genetic change that He tried to install in embryos. 


Scientists can foresee great benefits ahead—even a “final frontier of molecular liberty,” as Christopher Mason, a “space geneticist” at Weill Cornell Medicine in New York, characterizes it. Mason works with newer types of gene editors that can turn genes on or off temporarily. He is using these in his lab to make cells resistant to radiation damage. The technology could be helpful to astronauts or, he says, for a weekend of “recreational genomics”—say, boosting your repair genes in preparation to visit the site of the Chernobyl power plant. The technique is “getting to be, I actually think it is, a euphoric application of genetic technologies,” says Mason. “We can say, hey, find a spot on the genome and flip a light switch on or off on any given gene to control its expression at a whim.”  

Easy delivery of gene editors to adult bodies could give rise to policy questions just as urgent as the ones raised by the CRISPR babies. Whether we encourage genetic enhancement—in particular, free-market genome upgrades—is one of them. Several online health influencers have already been touting an unsanctioned gene therapy, offered in Honduras, that its creators claim increases muscle mass. Another risk: If changing people’s DNA gets easy enough, gene terrorists or governments could do it without their permission or knowledge. One genetic treatment for a skin disease, approved in the US last year, is formulated as a cream—the first rub-on gene therapy (though not a gene editor). 

Some scientists believe new delivery tools should be kept purposefully complex and cumbersome, so that only experts can use them—a biological version of “security through obscurity.” But that’s not likely to happen. “Building a gene editor to make these changes is no longer, you know, the kind of technology that’s in the realm of 100 people who can do it. This is out there,” says Urnov. “And as delivery improves, I don’t know how we will be able to regulate that.”


In our conversation, Urnov frequently returned to that list of superpowers—genetic variants that make some people outliers in one way or another. There is a mutation that allows people to get by on five hours of sleep a night, with no ill effects. There is a woman in Scotland whose genetic peculiarity means she feels no pain and is perpetually happy, though also forgetful. Then there is Eero Mäntyranta, the cross-country ski champion who won three medals at the 1964 Winter Olympics and who turned out to have an inordinate number of red blood cells thanks to an alteration in a gene called the EPO receptor. It’s basically a blueprint for anyone seeking to join the Enhanced Games, the libertarian plan for a pro-doping international sports competition that critics call “borderline criminal” but which has the backing of billionaire Peter Thiel, among others. 

All these are possibilities for the future of the human genome, and we won’t even necessarily need to change embryos to get there. Some researchers even expect that with some yet-to-be-conceived technology, updating a person’s DNA could become as simple as sending a document via Wi-Fi, with today’s viruses or nanoparticles becoming anachronisms like floppy disks. I asked Church for his prediction about where gene-editing technology is going in the long term. “Eventually you’d get shot up with a whole bunch of things when you’re born, or it could even be introduced during pregnancy,” he said. “You’d have all the advantages without the disadvantages of being stuck with heritable changes.” 

And that will be evolution too.

This rare earth metal shows us the future of our planet’s resources

Leaving aside meteorites that strike Earth’s surface and spacecraft that get flung out of its orbit, the quantity of materials available on this planet isn’t really changing all that much.

That simple fact of our finite resources becomes clearer and more daunting as the pace of technological change advances and our society requires an ever wider array of material inputs to sustain it. So for nearly as long as we’ve systematically extracted these substances, we’ve been trying to predict how long they will be able to meet our demand. How much can we pump from a well, or wrest from a mine, before we need to reconsider what we’re building and how? 

Those predictions have grown increasingly complicated. And now it’s also a matter of how much we can pull from manufactured and discarded objects. Can we recycle parts of that iPhone, or the guts of that massive wind turbine? How much of any given object can we recirculate into our churning technological economy? 

Estimates of how much material we’ll have access to in the future tend to have a tricky, often implicit assumption at their center: that we’ll be making roughly the same products with the same materials as today. But technology moves quickly, and by the time we understand what we might need next, or develop a specialized system to mine or recycle it, the next generation of tech might render all our assumptions obsolete. 

We’re in the middle of a potentially transformative moment. The materials we need to power our world are beginning to shift from fossil fuels to energy sources that don’t produce the greenhouse-gas emissions changing our climate. Metals discovered barely more than a century ago now underpin the technologies we’re relying on for cleaner energy, and not having enough of them could slow progress. 

Take neodymium, one of the rare earth metals. While far from a household name, it’s a metal that humans have relied on for generations. Since the early 20th century, neodymium has been used to give decorative glass a purplish hue. Today, it’s used in cryogenic coolers to reach ultra-low temperatures needed for devices like superconductors and in high-powered magnets that power everything from smartphones to wind turbines. 

Demand for neodymium-based magnets could outstrip supply in the coming decade. The longer-term prospects for the metal’s supply aren’t as dire, but a careful look at neodymium’s potential future reveals many of the challenges we’ll likely face across the supply chain for materials in the coming century and beyond. 

Peak panic

Before we get into our material future, it’s important to point out just how hard it’s always been to make accurate predictions of this kind. Just look at our continuous theorizing about the supply of fossil fuels. 

One version of the story, told frequently in economics classes, goes something like this: Given that there’s a limited supply of oil, at some point the world will run out of it. Before then, we should reach some maximum amount of oil extraction, and then production will start an irreversible decline. That high point is known as “peak oil.”

This idea has been traced back as far as the early 1900s, but one of the most famous analyses came from M. King Hubbert, who was a geologist at Shell. In a 1956 paper, Hubbert considered the total amount of oil (and other fossil fuels, like coal and natural gas) that geologists had identified on the planet. From the estimated supply and the amount the world had burned through, he predicted that oil production in the US would peak and begin declining between 1965 and 1970. The peak of world oil production, he predicted, would come a bit later, in 2000. 
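Hubbert’s approach treated cumulative extraction as following a logistic (S-shaped) curve, which makes the annual production rate a bell-shaped curve that peaks when roughly half the recoverable total has been extracted. A minimal sketch of that rate, with illustrative round numbers rather than Hubbert’s actual 1956 fit:

```python
import math

def hubbert_rate(t, q_max, k, t_peak):
    """Annual production under a Hubbert-style logistic model.

    Cumulative production is Q(t) = q_max / (1 + exp(-k * (t - t_peak)));
    this function returns its derivative dQ/dt, the yearly production
    rate, which is bell-shaped and maximal at t_peak (value q_max*k/4).
    """
    e = math.exp(-k * (t - t_peak))
    return q_max * k * e / (1 + e) ** 2

# Illustrative parameters only: ~200 billion barrels ultimately
# recoverable, a 1970 peak, growth constant k = 0.07 per year.
at_peak = hubbert_rate(1970, 200e9, 0.07, 1970)
later = hubbert_rate(1990, 200e9, 0.07, 1970)
assert later < at_peak  # the model mandates decline after the peak
```

The model’s weakness is visible in its own assumptions: `q_max` is treated as fixed, so anything that enlarges the recoverable total after the fact, as fracking did, pushes the real curve off the predicted one.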

For a while, it looked as if Hubbert was right. US oil production increased until 1970, when it reached a dramatic peak. It then declined for decades afterward, until about 2010. But then advances in drilling and fracking techniques unlocked hard-to-reach reserves. Oil production skyrocketed in the US through the 2010s, and as of 2023, the country was producing more oil than ever before.

Peak-oil panic has long outlived Hubbert, but every time economists and geologists have predicted that we’ve reached, or are about to reach, the peak of oil production, they’ve missed the mark (so far).

Now there’s a new reason we might see fossil-fuel production actually peak and eventually fall off: the energy transition. That’s shorthand for the grand effort to shift away from energy sources that produce greenhouse gases and toward renewables and other low-carbon options. 

Hubbert’s theory suggested that a fixed supply would force production to decline from a peak. But as the world wakes up to the dangers of climate change, and as low-carbon energy sources like wind, solar, and nuclear take off, we may wind up leaving some coal, oil, and natural gas in the ground. Simply put, production might head back down because of a lack of demand, not a lack of supply. 

Those newly ascendant energy sources, though, are ironically a new source of “peak” panic. Solar panels, wind turbines, and batteries may not require fuel, but they do require a host of metals, including lithium, copper, steel, and rare earths like neodymium. 

Neodymium is crucial for powering many of our devices. And we could be facing a supply crunch.

If we extract, process, use, and discard these metals, conceptually there must be some point in the future when we run out of them. And as the energy transition has gotten underway, plenty of forecasts have attempted to understand which metals we should worry about and when they might start to be depleted. But experts say that understanding the availability of resources in this sector is much more complicated than picking out a single future peak. 

“The peak modeling thing is something that doesn’t really apply to metals,” says Simon Jowitt, director of the Center for Research in Economic Geology at the University of Nevada, Reno. It’s nearly impossible to understand whether we’ve reached a peak in production for any given material, or even whether those peaks can be predicted, as Jowitt said in a 2020 paper. 

Let’s take a closer look at neodymium. Reserves of the metal—the amount we know about that’s economically feasible to extract—have been estimated at 12.8 million tons. To keep the world from warming more than 1.5 °C over preindustrial levels, we might need as much as 121,000 tons every year just for wind turbines, according to a 2023 study on the material demands of the energy transition. Depending on how much material we assume makes it from the mine into final products, we could burn through those reserves in roughly a century.
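The “roughly a century” figure falls out of simple division of the quoted reserve and demand estimates. The yield fraction below is a hypothetical knob for the assumption mentioned above, namely how much mined material actually makes it into final products:

```python
RESERVES_TONS = 12.8e6        # estimated neodymium reserves (from above)
ANNUAL_DEMAND_TONS = 121_000  # upper-end yearly demand for wind turbines alone

def years_until_depletion(reserves, annual_demand, yield_fraction=1.0):
    """Years the reserves last, assuming flat demand and that only
    `yield_fraction` of mined material reaches final products."""
    effective_demand = annual_demand / yield_fraction
    return reserves / effective_demand

print(round(years_until_depletion(RESERVES_TONS, ANNUAL_DEMAND_TONS)))       # 106
print(round(years_until_depletion(RESERVES_TONS, ANNUAL_DEMAND_TONS, 0.9)))  # 95
```

Flat demand and fixed reserves are both, as the next paragraphs note, the weakest parts of this kind of estimate.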


The problem with this thinking, though, is that reserves and resources are far from fixed. Geologists discover new deposits all the time, for one thing. And what was considered too expensive and difficult to mine a few decades ago might be possible to extract with today’s technology. So instead of being slowly depleted, those material supplies have roughly kept up with production. 

“We are currently producing more metals than ever before and have more metal resources and reserves than ever before,” as Jowitt put it in his paper.  

And the question, he says, isn’t whether we’ll blow through what’s theoretically available on the planet, or even whether we’ll soon run out of material we can access and mine. It’s whether we’re willing to accept the social, ecological, and geopolitical consequences of how we mine today, and whether we might be able to change those for the better. Because we may be mining a lot more of some materials in the near future. 

Big digs

Demand for rare earths is expected to explode in the coming decades, driven largely by the increased need for neodymium-based magnets. These magnets, commonly made from a mixture of neodymium, iron, and boron with other elements sprinkled in, produce a stronger magnetic field with less material than other magnets available today. 

While demand for neo magnets will likely triple in the coming decade, global production of neodymium will only double, according to Adamas Intelligence, a consulting firm specializing in strategic metals and minerals. It can take close to a decade to build new mines, and those long lead times could contribute to a supply crunch, says Seaver Wang, climate co-director at the Breakthrough Institute, an environmental think tank.

Short periods when demand outstrips supply can lead to volatility, high prices, and slower deployment of new technologies. In a time as fast-moving as our current energy transition, those challenging economic conditions could have far-reaching effects, potentially entrenching old technologies and stalling progress. 

But despite these expected challenges and the resulting potential for volatility, there is, in theory, plenty of neodymium to go around. Despite their name, most rare earth metals aren’t terribly rare. Many are about as abundant in Earth’s crust as copper, and neodymium is roughly 1,000 times more common in the crust than platinum or gold.

However, unlike those metals, rare earths aren’t often found in concentrated deposits. Getting one ton of metal concentrate can require moving a thousand tons of rock.

This mining and refining process can be technically complicated and environmentally damaging, in part because rare earth metals are chemically similar to each other and difficult to separate without using harsh chemicals, says Julie Klinger, an associate professor at the University of Delaware who studies the global market for these materials.

Extraction often relies on dissolving crushed-up ore in strong acid. Mines that don’t carefully contain the waste material and the used chemicals risk polluting local waterways. Rare earth mines also often need to handle radioactive waste, since elements like thorium and uranium are common in and around the minerals that are mined to extract rare earths.

There are efforts underway to mine without producing dangerous waste, and new sites are attempting to squeeze as much finished product out of their initial mined material as possible, reintroducing scraps back into the refining process so less ends up in the waste. Others are taking another look at waste from previous mining efforts. 

But some experts hope to entirely rethink material supply. Instead of extracting new materials, what if we look to what’s already been dug out of the ground? 

Around and around

Follow the path of many commonly used metals, and you’ll likely trace a straight line that leads from the mine to a product and, eventually, to some version of a trash can. In an effort to ease supply concerns and environmental damage, some experts are calling for a new way of using materials, one that focuses on reducing waste or eliminating it altogether. 

Such a system would bend the line that goes from mine to trash into a new shape, so extracted materials are in use for as long as possible—maybe even forever. A whole host of strategies can extend the lifetime of materials, from repairing and refurbishing products to disassembling them and recycling the metals in them once the products are beyond repair.

This can start well before products even get to consumers, by making the most of materials as they’re taken out of the ground. Where recycling really gets difficult is the point at which the materials have left a company and gone into devices, says Ikenna Nlebedim, a research scientist at Ames National Laboratory.

Today, a small but difficult-to-quantify fraction of rare earth elements are recycled from products that have reached the end of their useful life. (Many in the industry put the figure at roughly 1%, though there’s little data available on rare earth collection, Nlebedim says.) With the looming increase in expected demand, several companies, including Noveon, REEcycle, and Cyclic Materials, are working to increase that amount, setting up the beginning of a recycling industry.

A major challenge for rising magnet recyclers is that magnets tend to make up a tiny fraction of a product’s total weight. Picking through heaps of products to recover them is an imperfect system, and magnet recyclers are left with other valuable materials that they have no interest in—and no effective process for isolating.

Neodymium nitrate photographed under polarized light. GETTY IMAGES

In the future, economical recycling of rare earths might require a broader infrastructure for recycling the rest of a device, Nlebedim says. A centralized dismantling system would allow the recovery of materials like copper, gold, and platinum group metals that are often found in the same products as rare earths. This setup would allow more of the material in waste products to be reused than is possible now, when a company will go after the highest-value, easiest-to-extract materials and toss the rest into a shredder.

Casting a wider net to recover more materials could help create a more stable supply of metals. That could be a major help if the materials considered valuable in the future are different from the ones with the most value today.

Quick shifts

Technology moves quickly, and many of the materials that are critical to us today weren’t even in use a century ago.

Just look at the history of Mountain Pass Mine, a rare earth mine in California. The mine’s critical product has changed every 20 years or so since production started in 1952, says Michael Rosenthal, cofounder and chief operating officer of MP Materials, the site’s owner.

In the 1960s, Mountain Pass produced the europium used in color television screens of the time. In the following decades the target was cerium, which was useful for the glass used in televisions with cathode ray tubes. Since CRTs have been replaced with new technology like LED screens, demand for cerium has decreased. Now the mine focuses on neodymium and praseodymium, another ingredient sometimes used in magnets.

Yet even as geologists are scouting new mines and companies are springing up to start building recycling systems, researchers are working to make rare earth magnets less central to our technological future, or maybe even obsolete. 

Today, neodymium is necessary in these powerful magnets to wrangle the electrons in iron so that they spin consistently in the same direction, producing a strong magnetic field. There aren’t any alternatives that can match their performance. 

However, there could be options on the way. Niron Magnetics is working to build iron nitride magnets, which produce a powerful magnetic field without the need for any rare earth metals. The company opened its first manufacturing facility in early 2024, and while its products can’t sub in for high-quality neo magnets just yet, there’s no fundamental reason they won’t be able to in the future. If Niron or other companies are able to develop new magnets, it could mean a shift in the rare earth market that quickly makes the current magnet recycling systems irrelevant. 

In a perfectly sustainable world, we would use and reuse materials dug out of the ground indefinitely. But as our technology shifts and our lives change, it can be difficult to close the loop where it began. Instead, our material economy may morph into the shape of a spiral. Resources may not end up quite where they started—rather, the system we’ve set up to extract and use them will continue to chase technological progress, maybe endlessly.

Fighting for a future beyond the climate crisis

When it comes to climate breakdown and the extinction crisis, the question I get most often is: How can we have hope? 

People ask me this in a range of contexts—in Q&A sessions, in emails, and on podcasts and radio shows, whether I’m doing outreach for my novels, like A Children’s Bible or Dinosaurs, or for nonfiction like We Loved It All, my new memoir. I see numerous iterations of it in the media and my social feeds and hear accounts of its ubiquity from writer friends, scientist and lawyer colleagues, activists and community organizers.

I’ve thought about the impulse behind the asking and am left with the lingering sense that many of us tend, in this cultural moment, to privilege our feelings on these existential threats over reason, say, or moral virtue, or apparently antiquated notions of civic and collective duty. Feelings are the beacon we entrust with shining a path through the fog to guide us home—anger and aggrievement, maybe, on the right of the political spectrum, and on the left something akin to defensive self-righteousness. 

It’s almost as though we lay our fate at the feet of feelings and wait for deliverance.

In the realm of emotion, hope guards against despair, whose rationalized intellectual output is cynicism—a free pass out of the tension of grappling with our responsibility to the future, with the difficulty and possible unpleasantness of engagement and resistance. But like cynicism, hope is its own free pass, filling the space of subjectivity with a passive expectation of relief. For the most part “hope” functions as a unit of rhetoric, as amorphous as “happiness” or “freedom”: a shredded flag in the discourse around climate doomsaying and denial that can only droop over a citadel under relentless siege. If we rely on hope, we give up agency. And that may be seductive, but it’s also surrender.

It’s possible that feelings aren’t our most useful gift. Other animals have feelings too, yet they haven’t radically modified the planet toward unlivability; we’ve done so by pairing our feelings with the unique combination of capabilities that were our species’ answers to the pressures of evolution. These include communication and collaboration, the sophisticated languages we share, our ability to conceptualize a distant past and future and make tools with our opposable thumbs—capacities that, together, have allowed us to construct empires and complex machines and cast our intelligence into the deep sea and the far-off thermosphere. Even beyond the sun. 

Yet the mission we chose to undertake has been one guided by desire and by a framework of ideas we’ve built to justify projecting that desire into the appropriation and liquidation of our resource base. The result has been voracious production and reproduction. Over the course of just a handful of fast-moving centuries, that hysterical vector of taking and making has landed us in a state of emergency that suddenly appears, with a high degree of credibility, poised to bury us under the sea or burn us off the land: in effect to steam open our small envelope of life and peel our paper-thin atmosphere, forests and rivers, grasslands and tundra and reefs and polar icescapes and the creatures they sustain, right off the surface of the world.

To fathom the danger of our situation, to let its immediacy dawn on us and drive us to act, it’s true that emotion is required. But in the stable of emotions to which we have ready access, hope is a pale horse. To spark an understanding of our history of error and push us to reconceive and heal as passionately as we now lay waste, we need to embrace a more extraordinary recognition.

We need shock and awe in the face of the majesty and fragility of nature, humility in the face of the vastness of the transformations our kind has set in motion—a bristling realization of imminent peril, a visceral apprehension of the nonfungibility of our zone of life. Of this marvelous place, infinitesimal in the solar system if not the galaxy, that has given us, on the thin skin of a solitary planet, the combination of flowing water and breathable air that are the preconditions for life. 

More than ordinary emotions, we need an encounter with the shock of our finitude, a sensation of awe, reverence, and astonishment before the richness and precariousness of being. 

Ordinary emotions let us blunder through the onslaught of information in the slow befuddlement of a stubborn belief that the familiar is bound to persist. But without a swift, far-reaching, and cooperative global effort, the familiar will not persist. Social and political stability will vanish along with biological and geophysical vanishments—the disappearance of coral reefs, for instance, whose absence will denude the oceans of diversity, or the collapse of the AMOC, the Atlantic meridional overturning circulation, under the influx of fresh water from melting ice, which could render Northern Europe inhospitably cold, raise sea levels along the US Eastern Seaboard, and overheat the tropics. 

In the realm of emotion, awe is the prerequisite to action. Not hope. Only awe can drive us to work as frenziedly from fear as, it might be argued, we’ve worked from greed until now. And whether it’s music or nature or art or religion that leaves us awestruck or just a simple decision to suddenly, deeply notice the world beyond ourselves, each of these requires the suspension of chatter—a willingness to halt and stand still within the rushing momentum of daily life. 

If we wish to thrive beyond it, the next century will have to be a time of unmaking and remaking: unmaking the technologies and culture of fossil fuels and their massive, entrenched infrastructure and remaking our template for prosperity from one based on limitless growth into one aimed at accommodation to a delicate biosphere. This means, among other key policy steps, defending and funding reproductive rights, equity, and education both at home and abroad—chiefly for women, since women’s access to education is a central driver of the lower birth rates that will be crucial to living within our means. 

To champion makerdom alone as the answer is to add willful ignorance to hubris. It’s a fact that we need to manufacture and rapidly propagate better tools—energy and food delivery systems that don’t disintegrate our life support to fuel our daily activities—and, equally, it’s a lie that better making by itself can save us or the other life forms we depend on.

Less making and unmaking are also the solution—less making of what we do not need and more unmaking of harmful machines and ideas. The sprawling patrimony of bad ideas—that Homo sapiens reigns supreme over nature and so is miraculously independent of it, in defiance of ecology and physics; that market capitalism is the unassailable apogee of civilization and ongoing expansion the correct communal goal, including endless human procreation cheered on by neoliberal economists who whinge over declining birth rates in industrialized nations—should be dismantled as steadily as the destructive machines. 

Neither the United States nor the world community has mechanisms in place to adequately curb potentially catastrophic enterprise, either when that enterprise is demonstrably causing climate chaos or when it purports to meet the demand for fixes. Treaties made under international law have been famously toothless to date, while the US legal system, which does possess sharp teeth, defers to the legislative bounds established by a Congress deeply beholden to fossil fuels and related industries bent on maintaining the status quo. And that legal system, far from being disposed to address the exceptionally high public health and security risks posed by climate change and extinction, is clearly, through the recent stacking of courts with antigovernment and antiscience jurists, in the business of radically increasing its deference to private actors as it erodes the rights of the dispossessed and the power of federal oversight. 

If we in this country can’t rely on the legislative or judicial branches of our central government to tackle the crises of their own volition, while the executive branch directs, at best, movement toward renewables without movement away from fossils; if we can’t rely on the myopic and nihilistic companies dominating the energy sector to pivot anytime soon; then who remains to help us? To whom can we turn, we who exist, always and only, here and nowhere else, in this walled city of the Earth under such terrible siege? 

The answer may be, for now, only ourselves. Those of us who have language and believe in the wisdom science can offer. Who know the surpassing vulnerability of the rivers and prairies, the jungles and wetlands, the cypress swamps of South Florida, the Cape Floristic Region of South Africa, the Siberian taiga, the Tropical Andes, Madagascar, the island Caribbean. Who can gaze into the future and, beholding the prospect of a frightening and emptier world for our descendants, feel compelled to fight on behalf of the one we have. 

Lydia Millet is the author of more than a dozen novels, including A Children’s Bible; her most recent book, We Loved It All: A Memory of Life, is her first work of nonfiction.

The race to save our online lives from a digital dark age

There is a photo of my daughter that I love. She is sitting, smiling, in our old back garden, chubby hands grabbing at the cool grass. It was taken in 2013, when she was almost one, on an aging Samsung digital camera. I originally stored it on a laptop before transferring it to a chunky external hard drive.

A few years later, I uploaded it to Google Photos. When I search for the word “grass,” Google’s algorithm pulls it up. It always makes me smile.

I pay Google £1.79 a month to keep my memories safe. That’s a lot of trust I’m putting in a company that’s existed for only 26 years. But the hassle it removes seems worth it. There’s just so much stuff nowadays. The admin required to keep it updated and stored safely is just too onerous.

My parents didn’t have this problem. They took occasional photos of me on a film camera and periodically printed them out on paper and put them in a photo album. These pictures are still viewable now, 40-odd years later, on faded yellowing photo paper—a few frames per year. 

Many of my memories from the following decades are also fixed on paper. The letters I received from my friends when traveling abroad in my 20s were handwritten on lined paper. I still have them crammed in a shoebox, an amusing but relatively small archive of an offline time.

We no longer have such space limitations. My iPhone takes thousands of photos a year. Our Instagram and TikTok feeds are constantly updated. We collectively send billions of WhatsApp messages and texts and emails and tweets.

But while all this data is plentiful, it’s also more ephemeral. One day in the maybe-not-so-distant future, YouTube won’t exist and its videos may be lost forever. Facebook—and your uncle’s holiday posts—will vanish. There is precedent for this. MySpace, the first largish-scale social network, deleted every photo, video, and audio file uploaded to it before 2016, seemingly inadvertently. Entire tranches of Usenet newsgroups, home to some of the internet’s earliest conversations, have gone offline forever and vanished from history. And in June this year, more than 20 years of music journalism disappeared when the MTV News archives were taken offline.

For many archivists, alarm bells are ringing. Across the world, they are scraping up defunct websites or at-risk data collections to save as much of our digital lives as possible. Others are working on ways to store that data in formats that will last hundreds, perhaps even thousands, of years. 

The endeavor raises complex questions. What is important to us? How and why do we decide what to keep—and what do we let go? 

And how will future generations make sense of what we’re able to save?

“Welcome to the challenge of every historian, archaeologist, novelist,” says Genevieve Bell, a cultural anthropologist. “How do you make sense of what’s left? And then how do you avoid reading it through the lens of the now?”

Last-chance saloon

There is more stuff being created now than at any time in history. At Google’s I/O conference this year, the firm’s CEO, Sundar Pichai, said that 6 billion photos and videos are uploaded to Google Photos every day. More than 40 million WhatsApp messages are sent every minute.

Even with so much more of it, though, our data is more fragile than ever. Books could burn in a freak library fire, but data is much easier to wipe forever. We’ve seen it happen—not only in incidents like the accidental deletion of MySpace data but also, sometimes, with intent. 

In 2009, Yahoo announced it was going to pull the plug on the web-hosting platform GeoCities, putting millions of carefully created web pages on the chopping block. While most of these pages might seem inconsequential—GeoCities was famous for its amateurish, early-web aesthetic and its pages dedicated to various collections, obsessions, or fandoms—they represented an early chapter of the web, and one that was about to be lost forever.

And it would have been, if a ragtag group of volunteer archivists led by Jason Scott hadn’t stepped in. 

“We sprang into action, and part of the fury and confusion of the time was we were going from downloading a handful of interesting sites to suddenly taking on an anchoring website of the early web,” Scott recalls.

His group, called Archive Team, quickly mobilized and downloaded as many GeoCities pages as possible before it closed for good. He and the team ended up being able to save most of the site, archiving millions of pages between April and October 2009. He estimates that they managed to download and store around a terabyte, but he notes that the size of GeoCities waxed and waned and was around nine terabytes at its peak. Much was likely gone for good. “It contained 100% user-generated works, folk art, and honest examples of human beings writing information and histories that were nowhere else,” he says.

Known for his top hat and cyberpunk-infused sense of style, Scott has made it his life’s mission to help save parts of the web that are at risk of being lost. “It is becoming more understood that archives, archiving, and preservation are a choice, a duty, and not something that just happens like the tides,” he says.

Scott now works as “free-range archivist and software curator” with the Internet Archive, an online library started in 1996 by the internet pioneer Brewster Kahle to save and store information that would otherwise be lost. 

Over the past two decades, the Internet Archive has amassed a gigantic library of material scraped from around the web, including that GeoCities content. It doesn’t just save purely digital artifacts, either; it also has a vast collection of digitized books that it has scanned and rescued. Since it began, the Internet Archive has collected more than 145 petabytes of data, including more than 95 million public media files such as movies, images, and texts. It has managed to save almost half a million MTV News pages.

Its Wayback Machine, which lets users rewind to see how certain websites looked at any point in time, has more than 800 billion web pages stored and captures a further 650 million each day. It also records and stores TV channels from around the world and even saves TikToks and YouTube videos. They are all stored across multiple data centers that the Internet Archive owns itself.

It’s a Sisyphean task. As a society, we’re creating so much new stuff that we must always delete more things than we did the year before, says Jack Cushman, director at Harvard’s Library Innovation Lab, where he helps libraries and technologists learn from one another. We “have to figure out what gets saved and what doesn’t,” he says. “And how do we decide?”  

Archivists have to make such decisions constantly. Which TikToks should we save for posterity, for example?

We shouldn’t try too hard to imagine what future historians would find interesting about us, says Niels Brügger, an internet researcher at Aarhus University in Denmark. “We cannot imagine what historians in 30 years’ time would like to study about today, because we don’t have a clue,” he says. “So we shouldn’t try to anticipate and sort of constrain the possible questions that future historians would ask.”

Instead, Brügger says, we should just save as much stuff as possible and let them figure it out later. “As a historian, I would definitely go for: Get it all, and then historians will find out what the hell they’re going to do with it,” he says.

At the Internet Archive, it’s the stuff most at risk of being lost that gets prioritized, says Jefferson Bailey, who works there helping develop archiving software for libraries and institutions. “Material that is ephemeral or at risk or has not yet been digitized and therefore is more easily destroyed, because it’s in analog or print format—those do get priority,” he says. 

People can request that pages be archived. Libraries and institutions also make nominations. And the staff sorts out the rest. Across open social media like TikTok and YouTube, archive teams at libraries around the world select certain accounts, copy what they want to save, and share those copies with the Internet Archive. It could be snapshots of what was trending each day, as well as tweets or videos from accounts run by notable individuals such as the US president.

The process can’t capture everything, but it offers a pretty good slice of what has preoccupied us in the early decades of the 21st century. While historical records have typically relied upon the private letters and belongings of society’s richest, an archive process that scrapes tweets is always going to be a bit more egalitarian.

“You can get a very interesting and diverse snapshot of our cultural moments of the last 30, 40 years,” says Bailey. “That is very different from what a traditional archive looked like 100 years ago.” 

As citizens, we could also help future historians. Brügger suggests people could make “data donations” of their personal correspondence to archives. “One week per year, invite everyone to donate the emails from that week,” he says. “If you had these time slices of email correspondence from thousands of people, year by year, that would be really great.”

Scott imagines future historians eventually using AI to query these archives to gain a unique insight into how we lived. “You’ll be able to ask a machine: ‘Could you show me images of people enjoying themselves at amusement parks with their families from the ’60s?’ and it will go, ‘Here you go,’” he says. “The work we did up to here was done in faith that something like this might exist.”

The past guides the future

Human knowledge doesn’t always disappear with a dramatic flourish like GeoCities; sometimes it is erased gradually. You don’t know something’s gone until you go back to check it. One example of this is “link rot,” where hyperlinks on the web no longer direct you to the right target, leaving you with broken pages and dead ends. A Pew Research Center study from May 2024 found that 38% of web pages that existed in 2013 are no longer accessible.
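
Link rot is easy to measure for yourself. The sketch below is a minimal Python illustration, not the Pew Center’s methodology: it issues a HEAD request per URL and counts a link as “rotten” when it is unreachable or returns a 4xx/5xx status. The sample URLs are placeholders.

```python
from urllib import request, error

def status_of(url: str, timeout: float = 10.0):
    """Return the HTTP status code for url, or None if the host is unreachable."""
    try:
        with request.urlopen(request.Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status
    except error.HTTPError as e:
        return e.code          # server answered, but with an error status
    except (error.URLError, OSError):
        return None            # DNS failure, timeout, dead server, etc.

def rot_rate(statuses) -> float:
    """Fraction of links considered rotten: unreachable or status >= 400."""
    rotten = [s for s in statuses if s is None or s >= 400]
    return len(rotten) / len(statuses)

if __name__ == "__main__":
    # Two live links, one 404, one dead server:
    print(rot_rate([200, 301, 404, None]))  # 0.5
```

In a real survey you would feed `rot_rate` the results of `status_of` over an archived list of URLs; redirects (3xx) count as alive here, which is itself a judgment call.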

It’s not just web links that die without constant curation and care. Unlike paper, the formats that now store most of our data require particular software or hardware to run, and those tools can become obsolete quickly. Many of our files can no longer be read, for example, because the applications that opened them are gone or the data itself has become corrupted.

One way to mitigate this problem is to transfer important data to the latest medium on a regular basis, before the programs required to read it are lost forever. At the Internet Archive and other libraries, the way information is stored is refreshed every few years. But for data that is not being actively looked after, it may be only a few years before the hardware required to access it is no longer available. Think of once-ubiquitous storage media like Zip disks or CompactFlash cards.

Some researchers are looking into ways to make sure we can always access old digital formats, even if the kit required to read them has become a museum piece. The Olive project, run by Mahadev Satyanarayanan at Carnegie Mellon University, aims to make it possible for anyone to use any application, however old, “with just a click.” His team has been working since 2012 to create a huge, decentralized network that supports “virtual machines”—emulators for old or defunct operating systems and all the software that they run.

Keeping old data alive like this is a way to protect against what the computer scientist Danny Hillis once dubbed the “digital dark age,” a nod to the early medieval period when a lack of written material left future historians little to go on.

Hillis, an MIT alum who pioneered parallel computing, thinks the rapid technological upheaval of our time will leave much of what we’re living through a mystery to scholars. 

“When people look back at this period, they’ll say, ‘Oh, well, you know, here was this sort of incomprehensibly fast technological change, and a lot of history got lost during that change,’” he says.

Hillis was one of the founders (along with Brian Eno and Stewart Brand) of the Long Now Foundation, a San Francisco–based organization known for eye-catching art/science projects such as the Clock of the Long Now, a gigantic mechanical clock, funded by Jeff Bezos and currently under construction inside a mountain in West Texas, that is designed to keep accurate time for 10,000 years. It also created the Rosetta Disc, a circle of nickel etched at microscopic scale with documentation for around 1,500 of the world’s languages. In February, a copy of the disc touched down on the moon aboard the Odysseus lander. Part of the Long Now’s focus is to help people think about how we protect our history for future generations. It’s not just about making life easier for historians. It’s about helping us be “better ancestors,” according to the organization’s mission statement.

It’s a sentiment that chimes with Vint Cerf, one of the internet’s founders. “As I get older, I keep thinking, how can I be a good ancestor?” he says.

“An understanding of what has happened in the past is helpful for anticipating or interpreting what’s happening in the present and what might happen in the future,” says Cerf. There are “all kinds of scenarios where the absence of knowledge of the past is a debilitating weakness for a society.” 

“If we don’t remember, we can’t think, and the way that society remembers is by writing things down and putting them in libraries,” agrees Kahle. Without such repositories, he says, “people will be confused as to what’s true and not true.”

Kahle started the Internet Archive as a way to make sure all knowledge is free for anyone, but he feels the balance of power has tilted away from libraries and toward corporations. And that is likely to be a problem for keeping things accessible in the long term.

“If it’s left up to the corporations, it’s all gone,” he says. “Not only are we talking about classic published works—like your magazine, or books—but we’re talking about Facebook pages, Twitter pages, your personal blogs. All of those in general are on corporate platforms now. And those will all disappear.”

Losing our long-term digital archives has real implications for how society runs, says Harvard’s Cushman, who points out that our legal decisions and paperwork are largely stored digitally. Without a permanent, unalterable record, we can no longer rely on past judgments to inform the present. His team has created ways to let courts and law journals put copies of web pages on file at the Harvard Law Library, where they are stored indefinitely as a record of legal precedent. It’s also creating tools to let people interact with these archives by scrolling through historical versions of a site, or by using a custom GPT to interact with collections.

Many other groups are working on similar solutions. The US Library of Congress has suggested standards for storing video, audio, and web files so they are accessible for future generations. It urges archivists to think about issues such as whether the data includes instructions on how to access it, or how widely adopted the format has been (the idea being that a more prevalent one is less likely to become obsolete quickly).

But ultimately, digital archives are harder to keep than physical archives, says Cushman. “If you run out of budget and leave books in a quiet, dark room for 10 years, they’re happy,” he says. “If you fail to pay your AWS bill for a month, your files are gone forever.”

Storage for impossible time scales

Even the physical way we store digital data is impermanent. Most long-term storage in data centers—for use in disaster recovery, among other applications—is on magnetic hard drives or tape. Hard drives wear out after a few years. Tape is a little better, but it still doesn’t get you much beyond a decade or so of storage use before it begins to fail. 

Companies make new backups all the time, so this is less of a problem for the short-to-medium term. But when you want to store important cultural, legal, or historical information for the ages, you need to think differently. You need something that can store huge amounts of data but can also withstand the test of time and doesn’t need constant care. 

DNA has often been touted as a long-term storage option. It can store astonishing amounts of information and is incredibly long-lasting. Pieces of bone contain readable DNA from many hundreds of thousands of years ago. But encoding information in DNA is currently expensive and slow, and specialized equipment is required to “read” the information back later. That makes it impractical as a serious long-term backup for our world’s knowledge, at least for now.
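To get a feel for what "encoding information in DNA" means, here is a toy sketch of the idea usually used to explain it: each nucleotide (A, C, G, T) can represent two bits, so one byte becomes four bases. This is only an illustration of the concept; real DNA-storage pipelines add redundancy, error correction, and constraints on base runs, none of which are modeled here.

```python
# Toy 2-bits-per-base mapping often used to illustrate DNA data storage.
# Real systems are far more elaborate; this just shows the core idea.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Encode each byte as four nucleotides, most significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """The 'sequencer' side: turn four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i : i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"Hi")  # → "CAGACGGC"
```

At two bits per base, even this naive scheme hints at DNA's density: a gram of DNA holds on the order of 10^21 bases, which is why the medium is so appealing despite the cost of writing and reading it.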

""
MIKE MCQUADE

Luckily, there are already a handful of compelling alternatives. One of the most advanced ideas is Project Silica, currently under development at Microsoft Research in Cambridge, UK, where Richard Black and his team are creating a new form of long-term storage on glass squares that can last hundreds or even thousands of years.

Each one is created using a precise, powerful laser, which writes nanoscale deformations into the glass beneath the surface that can encode bits of information. These tiny imperfections are layered up on top of one another in the glass and are then read using a powerful microscope that can detect the way light is refracted and polarized. Machine learning is used to decode the bits, and each square has enough training data to let future historians retrain a model from scratch if required, says Black. 

When I hold one of the Silica squares in my hand, it feels pleasingly sci-fi, as if I’ve just pulled it out to shut down HAL in 2001: A Space Odyssey. The encoded data is visible as a faint blue where the light hits the imperfections and scatters. A video shared by Microsoft shows these squares being microwaved, boiled, baked in an oven, and zapped with a high-powered magnet, all with no apparent ill effects.

Black imagines Silica being used to store long-term scientific archives, such as medical information or weather data, over decades. Crucially, the technology can create archives that can be air-gapped (cut off from the internet) and need no power or special care. They can just be locked away in a silo and should work fine and be readable centuries from now. “Humanity has never stopped building microscopes,” says Black. In 2019, Warner Bros. archived some of its back catalogue on Silica glass, including the 1978 classic Superman.

Black’s team has also designed a library storage system for Silica. Shelves packed with thousands of the glass squares line a small room at the Cambridge office. Handbag-size robots attached to the shelves whiz along them and occasionally stop, unclip themselves from one shelf, and clamber up or down to another before shooting off again down the line. When they reach a specific spot, they stop and pluck one of the squares, no bigger than a CD, from the shelf. Its contents are read and the robot zips back into position.

Meanwhile, deep in the vaults of an abandoned mine in Svalbard, Norway, GitHub is storing some of history’s most important software (including the source code for Linux, Android, and Python) on special film its creators claim can last for more than 500 years. The film, made by the firm Piql, is coated in microscopic silver halide crystals that permanently darken when exposed to light. A high-powered light source is used to create dark pixels just six micrometers across, which encode binary data. A scanner then reads the data back. Instructions for how to access the information are written in English on each roll, in case there is no longer anyone around to explain how it works. 
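The principle behind the film is simple enough to sketch: bits become dark or light pixels laid out in a grid, and a scanner reads them back. The snippet below is a hypothetical simplification, not the actual Piql format, which adds framing, error correction, and far denser pixel layouts.

```python
# Simplified sketch of film-style storage: bits become dark (1) or
# light (0) pixels in a fixed-width grid. Not the real Piql format.
def to_frame(data: bytes, width: int = 8) -> list[list[int]]:
    """Lay out each byte's bits, MSB first, into rows of `width` pixels."""
    bits = [(byte >> shift) & 1 for byte in data for shift in range(7, -1, -1)]
    return [bits[i : i + width] for i in range(0, len(bits), width)]

def from_frame(frame: list[list[int]]) -> bytes:
    """The 'scanner': read pixels back row by row and repack them into bytes."""
    bits = [px for row in frame for px in row]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )

frame = to_frame(b"OK")
restored = from_frame(frame)  # round-trips back to b"OK"
```

The appeal of such a scheme for century-scale storage is that the decoder can be rebuilt from scratch: anyone with a light source, a lens, and the written instructions on the reel can recover the bits.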

In addition to GitHub’s collection, the storage facility, known as the Arctic World Archive, also includes data supplied by the Vatican and the European Space Agency, as well as various artworks and images from governments and institutions around the world. Yale University, for example, has stored a collection of software, including Microsoft Office and Adobe, as Piql data. Just a few hundred meters down the road you find the Svalbard Global Seed Vault, a storage facility preserving a selection of the world’s biodiversity for future generations. Data about what each seed container holds is also stored on Piql film.

Making sure this information is stored in formats that can be decoded hundreds of years from now will be crucial. As Cushman points out, we still argue over the proper way to play Charlie Chaplin films because the intended playback speed was never recorded. “When researchers are trying to access these materials decades in the future, how expensive will it be to build tools to display them, and what will be the chances that we get it wrong?” he asks.

Ultimately, the motivation for all these projects is the idea that they will act as humanity’s backup: a long-term medium that could withstand an apocalypse, an electromagnetic pulse from the sun, or the end of civilization itself, and let us start again.

Something to let people know we were here.

Happy accidents

Sometime in the first century, a Roman woman called Claudia Severa was planning a big birthday party at a fort in northern England. She asked her servant to write out an invitation to one of her best friends on a wooden tablet and then signed it with a flourish. 

Claudia could never have suspected that, almost 2,000 years on, the Vindolanda Tablets (of which her invitation is the most famous) would be used to give us a unique insight into the daily lives of Romans in England at that time.

That’s always the way. Throughout history, the oddest, most random things survived to act as a guide for historians. The same will go for us. Despite the efforts of archivists, librarians, and storage researchers, it’s impossible to know for sure what data will still be accessible when we’re long gone. And we might be surprised at what they find interesting when they come across it. Which batch of archived emails or TikToks will be the key to unlocking our era for future historians and anthropologists? And what will they think of us?

Historians foraging through our digital detritus may be left with a series of unanswerable questions, and they’ll just have to make their best guesses.

“You’d need to ask about who had digital technology,” says Bell. “And how did they power it? And who got to make choices about it? And how was it stored and circulated? And who saw it?”

We don’t know what will still be running 20, 50, or 100 years from now. Perhaps Google Photos’ cloud storage will have been abandoned, a giant garbage pile of old hard drives buried in the ground. Or maybe, with luck, one of the spiritual heirs to Scott’s archivists will have saved it before it went down. 

Maybe someone downloaded it onto some sort of glass disc and stashed it in a vault somewhere.

Maybe some future anthropologist will one day find it, dust it off, and find that it’s still readable. 

Maybe they’ll select a file at random, spin up some sort of software emulator, and find a billion photos from 2013. 

And see a chubby, happy girl sitting in the grass.

baby sitting in grass
NIALL FIRTH

Happy birthday, baby! What the future holds for those born today

Happy birthday, baby.

You have been born into an era of intelligent machines. They have watched over you almost since your conception. They let your parents listen in on your tiny heartbeat, track your gestation on an app, and post your sonogram on social media. Well before you were born, you were known to the algorithm. 

Your arrival coincided with the 125th anniversary of this magazine. With a bit of luck and the right genes, you might see the next 125 years. How will you and the next generation of machines grow up together? We asked more than a dozen experts to imagine your joint future. We explained that this would be a thought experiment. What I mean is: We asked them to get weird. 

Just about all of them agreed on how to frame the past: Computing shrank from giant shared industrial mainframes to personal desktop devices to electronic shrapnel so small it’s ambient in the environment. Previously controlled at arm’s length through punch card, keyboard, or mouse, computing became wearable, moving onto—and very recently into—the body. In our time, eye or brain implants are only for medical aid; in your time, who knows? 

In the future, everyone thinks, computers will get smaller and more plentiful still. But the biggest change in your lifetime will be the rise of intelligent agents. Computing will be more responsive, more intimate, less confined to any one platform. It will be less like a tool, and more like a companion. It will learn from you and also be your guide.

What they mean, baby, is that it’s going to be your friend.

Present day to 2034 
Age 0 to 10

When you were born, your family surrounded you with “smart” things: rockers, monitors, lamps that play lullabies.  

DAVID BISKUP

But not a single expert name-checked those as your first exposure to technology. Instead, they mentioned your parents’ phone or smart watch. And why not? As your loved ones cradle you, that deliciously blinky thing is right there. Babies learn by trial and error, by touching objects to see what happens. You tap it; it lights up or makes noise. Fascinating!

Cognitively, you won’t get much out of that interaction between birth and age two, says Jason Yip, an associate professor of digital youth at the University of Washington. But it helps introduce you to a world of animate objects, says Sean Follmer, director of the SHAPE Lab in Stanford’s mechanical engineering department, which explores haptics in robotics and computing. If you touch something, how does it respond?

You are the child of millennials and Gen Z—digital natives, the first influencers. So as you grow, cameras are ubiquitous. You see yourself onscreen and learn to smile or wave to the people on the other side. Your grandparents read to you on FaceTime; you photobomb Zoom meetings. As you get older, you’ll realize that images of yourself are a kind of social currency. 

Your primary school will certainly have computers, though we’re not sure how educators will balance real-world and onscreen instruction, a pedagogical debate today. But baby, school is where our experts think you will meet your first intelligent agent, in the form of a tutor or coach. Your AI tutor might guide you through activities that combine physical tasks with augmented-reality instruction—a sort of middle ground.

Some school libraries are becoming more like makerspaces, teaching critical thinking along with building skills, says Nesra Yannier, a faculty member in the Human-Computer Interaction Institute at Carnegie Mellon University. She is developing NoRILLA, an educational system that uses mixed reality—a combination of physical and virtual reality—to teach science and engineering concepts. For example, kids build wood-block structures and predict, with feedback from a cartoon AI gorilla, how they will fall. 

Learning will be increasingly self-directed, says Liz Gerber, co-director of the Center for Human-Computer Interaction and Design at Northwestern University. The future classroom is “going to be hyper-personalized.” AI tutors could help with one-on-one instruction or repetitive sports drills.

All of this is pretty novel, so our experts had to guess at future form factors. Maybe while you’re learning, an unobtrusive bracelet or smart watch tracks your performance and then syncs data with a tablet, so your tutor can help you practice. 

What will that agent be like? Follmer, who has worked with blind and low-vision students, thinks it might just be a voice. Yannier is partial to an animated character. Gerber thinks a digital avatar could be paired with a physical version, like a stuffed animal—in whatever guise you like. “It’s an imaginary friend,” says Gerber. “You get to decide who it is.” 

Not everybody is sold on the AI tutor. In Yip’s research, kids often tell him AI-enabled technologies are … creepy. They feel unpredictable or scary, or like they seem to be watching.

Kids learn through social interactions, so he’s also worried about technologies that isolate. And while he thinks AI can handle the cognitive aspects of tutoring, he’s not sure about its social side. Good teachers know how to motivate, how to deal with human moods and biology. Can a machine tell when a child is being sarcastic, or redirect a kid who is goofing off in the bathroom? When confronted with a meltdown, he asks, “is the AI going to know this kid is hungry and needs a snack?”

2040
Age 16

By the time you turn 16, you’ll likely still live in a world shaped by cars: highways, suburbs, climate change. But some parts of car culture may be changing. Electric chargers might be supplanting gas stations. And just as an intelligent agent assisted in your schooling, now one will drive with you—and probably for you.  

Paola Meraz, a creative director of interaction design at BMW’s Designworks, describes that agent as “your friend on the road.” William Chergosky, chief designer at Calty Design Research, Toyota’s North American design studio, calls it “exactly like a friend in the car.”

While you are young, Chergosky says, it’s your chaperone, restricting your speed or routing you home at curfew. It tells you when you’re near In-N-Out, knowing your penchant for their animal fries. And because you want to keep up with your friends online and in the real world, the agent can comb your social media feeds to see where they are and suggest a meetup. 

Cars have long been spots for teen hangouts, but as driving becomes more autonomous, their interiors can become more like living rooms. (You’ll no longer need to face the road and an instrument panel full of knobs.) Meraz anticipates seats that reposition so passengers can talk face to face, or game. “Imagine playing a game that interacts with the world that you are driving through,” she says, or “a movie that was designed where speed, time of day, and geographical elements could influence the storyline.” 

people riding on top of a smart car
DAVID BISKUP

Without an instrument panel, how do you control the car? Today’s minimalist interiors feature a dash-mounted tablet, but digging through endless onscreen menus is not terribly intuitive. The next step is probably gestural or voice control—ideally, through natural language. The tipping point, says Chergosky, will come when instead of giving detailed commands, you can just say: “Man, it is hot in here. Can you make it cooler?”

An agent that listens in and tracks your every move raises some strange questions. Will it change personalities for each driver? (Sure.) Can it keep a secret? (“Dad said he went to Taco Bell, but did he?” jokes Chergosky.) Does it even have to stay in the car? 

Our experts say nope. Meraz imagines it being integrated with other kinds of agents—the future versions of Alexa or Google Home. “It’s all connected,” she says. And when your car dies, Chergosky says, the agent does not. “You can actually take the soul of it from vehicle to vehicle. So as you upgrade, it’s not like you cut off that relationship,” he says. “It moves with you. Because it’s grown with you.”

2049
Age 25

By your mid-20s, the agents in your life know an awful lot about you. Maybe they are, indeed, a single entity that follows you across devices and offers help where you need it. At this point, the place where you need the most help is your social life. 

Kathryn Coduto, an assistant professor of media science at Boston University who studies online dating, says everyone’s big worry is the opening line. To her, AI could be a disembodied Cyrano that whips up 10 options or workshops your own attempts. Or maybe it’s a dating coach. You agree to meet up with a (real) person online, and “you have the AI in a corner saying ‘Hey, maybe you should say this,’ or ‘Don’t forget this.’ Almost like a little nudge.”

Virtual first dates might solve one of our present-day conundrums: Apps make searching for matches easier, but you get sparse—and perhaps inaccurate—info about those people. How do you know who’s worth meeting in real life? Building virtual dating into the app, Coduto says, could be “an appealing feature for a lot of daters who want to meet people but aren’t sure about a large initial time investment.”

T. Makana Chock, who directs the Extended Reality Lab at Syracuse University, thinks things could go a step further: first dates where both parties send an AI version of themselves in their place. “That would tell both of you that this is working—or this is definitely not going to work,” Chock says. If the date is a dud—well, at least you weren’t on it.

Or maybe you will just date an entirely virtual being, says Sun Joo (Grace) Ahn, who directs the Center for Advanced Computer-Human Ecosystems at the University of Georgia. Or you’ll go to a virtual party, have an amazing time, “and then later on you realize that you were the only real human in that entire room. Everybody else was AI.”

This might sound odd, says Ahn, but “humans are really good at building relationships with nonhuman entities.” It’s why you pour your heart out to your dog—or treat ChatGPT like a therapist. 

There is a problem, though, when virtual relationships become too accommodating, says Chock: If you get used to agents that are tailored to please you, you get less skilled at dealing with real people and risking awkwardness or rejection. “You still need to have human interaction,” she says. “And there is some concern that we are going to see some people who are just like, ‘Nope, this is all I want. Why go out and do that when I can stay home with my partner, my virtual buddy?’”

By now, social media, online dating, and livestreaming have likely intertwined and become more immersive. Engineers have shrunk the obstacles to true telepresence: internet lag time, the uncanny valley, and clunky headsets, which may now be replaced by something more like glasses or smart contact lenses. 

Online experiences may be less like observing someone else’s life and more like living it. Imagine, says Follmer: A basketball star wears clothing and skin sensors that track body position, motion, and forces, plus super-thin gloves that sense the texture of the ball. You, watching from your couch, wear a jersey and gloves made of smart textiles, woven with actuators that transmit whatever the player feels. When the athlete gets shoved, Follmer says, “your fan gear can really shove you right back.”

Gaming is another obvious application. But it’s not the likely first mover in this space. Nobody else wants to say this on the record, so I will: It’s porn. (Baby, ask your parents and/or AI tutor when you’re older.)

DAVID BISKUP

By your 20s, you are probably wrestling with the dilemmas of a life spent online and on camera. Coduto thinks you might rebel, opting out of social media because your parents documented your first 18 years without permission. As an adult, you’ll want tighter rules for privacy and consent, better ways to verify authenticity, and more control over sensitive materials, like a button that could nuke your old sexts.

But maybe it’s the opposite: Now you are an influencer yourself. If so, your body can be your display space. Today, wearables are basically boxes of electronics strapped onto limbs. Tomorrow, hopes Cindy Hsin-Liu Kao, who runs the Hybrid Body Lab at Cornell University, they will be more like your own skin. Kao develops wearables like color-changing eyeshadow stickers and mini nail trackpads that can control a phone or open a car door. In the not-too-distant future, she imagines, “you might be able to rent out each of your fingernails as an ad for social media.” Or maybe your hair: Weaving in super-thin programmable LED strands could make it a kind of screen. 

What if those smart lenses could be display spaces too? “That would be really creepy,” she muses. “Just looking into someone’s eyes and it’s, like, CNN.”

2059
Age 35

By now, you’ve probably settled into domestic life—but it might not look much like the home you grew up in. Keith Evan Green, a professor of human-centered design at Cornell, doesn’t think we should imagine a home of the future. “I would call it a room of the future,” he says, because it will be the place for everything—work, school, play. This trend was hastened by the covid pandemic.

Your place will probably be small if you live in a big city. The uncertainties of climate change and transportation costs mean we can’t build cities infinitely outward. So he imagines a reconfigurable architectural robotic space: Walls move, objects inflate or unfold, furniture appears or dissolves into surfaces or recombines. Any necessary computing power is embedded. The home will finally be what Le Corbusier imagined: a machine for living in.

Green pictures this space as spartan but beautiful, like a temple—a place, he says, to think and be. “I would characterize it as this capacious monastic cell that is empty of most things but us,” he says.

Our experts think your home, like your car, will respond to voice or gestural control. But it will make some decisions autonomously, learning by observing you: your motion, location, temperature. 

Ivan Poupyrev, CEO and cofounder of Archetype AI, says we’ll no longer control each smart appliance through its own app. Instead, he says, think of the home as a stage and you as the director. “You don’t interact with the air conditioner. You don’t interact with a TV,” he says. “You interact with the home as a total.” Instead of telling the TV to play a specific program, you make high-level demands of the entire space: “Turn on something interesting for me; I’m tired.” Or: “What is the plan for tomorrow?”

Stanford’s Follmer says that just as computing went from industrial to personal to ubiquitous, so will robotics. Your great-grandparents envisioned futuristic homes cared for by a single humanoid robot—like Rosie from The Jetsons. He envisions swarms of maybe 100 bots the size of quarters that materialize to clean, take out the trash, or bring you a cold drink. (“They know ahead of time, even before you do, that you’re thirsty,” he says.)

DAVID BISKUP

Baby, perhaps now you have your own baby. The technologies of reproduction have changed since you were born. For one thing, says Gerber, fertility tracking will be way more accurate: “It is going to be like weather prediction.” Maybe, Kao says, flexible fabric-like sensors could be embedded in panty liners to track menstrual health. Or, once the baby arrives, in nipple stickers that nursing parents could apply to track biofluid exchange. If the baby has trouble latching, maybe the sticker’s capacitive touch sensors could help the parent find a better position.

Also, goodbye to sleep deprivation. Gerber envisions a device that, for lack of an existing term, she’s calling a “baby handler”—picture an exoskeleton crossed with a car seat. It’s a late-night soothing machine that rocks, supplies pre-pumped breast milk, and maybe offers a bidet-like “cleaning and drying situation.” For your children, perhaps, this is their first experience of being close to a machine.

2074
Age 50

Now you are at the peak of your career. For professions heading toward AI automation, you may be the “human in the loop” who oversees a machine doing its tasks. The 9-to-5 workday, which is crumbling in our time, might be totally atomized into work-from-home fluidity or earn-as-you-go gig work.

Ahn thinks you might start the workday by lying in bed and checking your messages—on an implanted contact lens. Everyone loves a big screen, and putting it in your eye effectively gives you “the largest monitor in the world,” she says. 

You’ve already dabbled with AI selves for dating. But now virtual agents are more photorealistic, and they can mimic your voice and mannerisms. Why not make one go to meetings for you?

DAVID BISKUP

Kori Inkpen, who studies human-computer interaction at Microsoft Research, calls this your “ditto”—more formally, an embodied mimetic agent, meaning it represents a specific person. “My ditto looks like me, acts like me, sounds like me, knows sort of what I know,” she says. You can instruct it to raise certain points and recap the conversation for you later. Your colleagues feel as if you were there, and you get the benefit of an exchange that’s not quite real time, but not as asynchronous as email. “A ditto starts to blend this reality,” Inkpen says.

In our time, augmented reality is slowly catching on as a tool for workers whose jobs require physical presence and tangible objects. But experts worry that once the last baby boomers retire, their technical expertise will go with them. Perhaps they can leave behind a legacy of training simulations.

Inkpen sees DIY opportunities. Say your fridge breaks. Instead of calling a repair person, you boot up an AR tutorial on glasses, a tablet, or a projection that overlays digital instructions atop the appliance. Follmer wonders if haptic sensors woven into gloves or clothing would let people training for highly specialized jobs—like surgery—literally feel the hand motions of experienced professionals.

For Poupyrev, the implications are much bigger. One way to think about AI is “as a storage medium,” he says. “It’s a preservation of human knowledge.” A large language model like ChatGPT is basically a compendium of all the text information people have put online. Next, if we feed models not only text but real-world sensor data that describes motion and behavior, “it becomes a very compressed presentation not of just knowledge, but also of how people do things.” AI can capture how to dance, or fix a car, or play ice hockey—all the skills you cannot learn from words alone—and preserve this knowledge for the future.

2099
Age 75

By the time you retire, families may be smaller, with more older people living solo. 

Well, sort of. Chaiwoo Lee, a research scientist at the MIT AgeLab, thinks that in 75 years, your home will be a kind of roommate—“someone who cohabitates that space with you,” she says. “It reacts to your feelings, maybe understands you.” 

By now, a home’s AI could be so good at deciphering body language that if you’re spending a lot of time on the couch, or seem rushed or irritated, it could try to lighten your mood. “If it’s a conversational agent, it can talk to you,” says Lee. Or it might suggest calling a loved one. “Maybe it changes the ambiance of the home to be more pleasant.”

The home is also collecting your health data, because it’s where you eat, shower, and use the bathroom. Passive data collection has advantages over wearable sensors: You don’t have to remember to put anything on. It doesn’t carry the stigma of sickness or frailty. And in general, Lee says, people don’t start wearing health trackers until they are ill, so they don’t have a comparative baseline. Perhaps it’s better to let the toilet or the mirror do the tracking continuously. 

Green says interactive homes could help people with mobility and cognitive challenges live independently for longer. Robotic furnishings could help with lifting, fetching, or cleaning. By this time, they might be sophisticated enough to offer support when you need it and back off when you don’t.  

Kao, of course, imagines the robotics embedded in fabric: garments that stiffen around the waist to help you stand, a glove that reinforces your grip.

DAVID BISKUP

If getting from point A to point B is becoming difficult, maybe you can travel without going anywhere. Green, who favors a blank-slate room, wonders if you’ll have a brain-machine interface that lets you change your surroundings at will. You think about, say, a jungle, and the wallpaper display morphs. The robotic furniture adjusts its topography. “We want to be able to sit on the boulder or lie down on the hammock,” he says.

Anne Marie Piper, an associate professor of informatics at UC Irvine who studies older adults, imagines something similar—minus the brain chip—in the context of a care home, where spaces could change to evoke special memories, like your honeymoon in Paris. “What if the space transforms into a café for you that has the smells and the music and the ambience, and that is just a really calming place for you to go?” she asks. 

Gerber is all for virtual travel: It’s cheaper, faster, and better for the environment than the real thing. But she thinks that for a truly immersive Parisian experience, we’ll need engineers to invent … well, remote bread. Something that lets you chew on a boring-yet-nutritious source of calories while stimulating your senses so you get the crunch, scent, and taste of the perfect baguette.

2149
Age 125

We hope that your final years will not be lonely or painful. 

Faraway loved ones can visit by digital double, or send love through smart textiles: Piper imagines a scarf that glows or warms when someone is thinking of you, Kao an on-skin device that simulates the touch of their hand. If you are very ill, you can escape into a soothing virtual world. Judith Amores, a senior researcher at Microsoft Research, is working on VR that responds to physiological signals. Today, she immerses hospital patients in an underwater world of jellyfish that pulse at half of an average person’s heart rate for a calming effect. In the future, she imagines, VR will detect anxiety without requiring a user to wear sensors—maybe by smell.

You might be pondering virtual immortality. Tim Recuber, a sociologist at Smith College and author of The Digital Departed, notes that today people create memorial websites and chatbots, or sign up for post-mortem messaging services. These offer some end-of-life comfort, but they can’t preserve your memory indefinitely. Companies go bust. Websites break. People move on; that’s how mourning works.

What about uploading your consciousness to the cloud? The idea has a fervent fan base, says Recuber. People hope to resurrect themselves into human or robotic bodies, or spend eternity as part of a hive mind or “a beam of laser light that can travel the cosmos.” But he’s skeptical that it’ll work, especially within 125 years. Plus, what if being a ghost in the machine is dreadful? “Embodiment is, as far as we know, a pretty key component to existence. And it might be pretty upsetting to actually be a full version of yourself in a computer,” he says. 

DAVID BISKUP

There is perhaps one last thing to try. It’s another AI. You curate this one yourself, using a lifetime of digital ephemera: your videos, texts, social media posts. It’s a hologram, and it hangs out with your loved ones to comfort them when you’re gone. Perhaps it even serves as your burial marker. “It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms,” Recuber says.

It won’t exist forever. Nothing does. But by now, maybe the agent is no longer your friend.

Maybe, at last, it is you.

Baby, we have caveats.

We imagine a world that has overcome the worst threats of our time: a creeping climate disaster; a deepening digital divide; our persistent flirtation with nuclear war; the possibility that a pandemic will kill us quickly, that overly convenient lifestyles will kill us slowly, or that intelligent machines will turn out to be too smart.

We hope that democracy survives and these technologies will be the opt-in gadgetry of a thriving society, not the surveillance tools of dystopia. If you have a digital twin, we hope it’s not a deepfake. 

You might see these sketches from 2024 as a blithe promise, a warning, or a fever dream. The important thing is: Our present is just the starting point for infinite futures. 

What happens next, kid, depends on you. 


Kara Platoni is a science reporter and editor in Oakland, California.
