
Why virologists are getting increasingly nervous about bird flu

Bird flu has been spreading in dairy cows in the US—and the scale of the spread is likely to be far worse than it looks. In addition, 14 human cases have been reported in the US since March. Both are worrying developments, say virologists, who fear that the country’s meager response to the virus is putting the entire world at risk of another pandemic.

The form of bird flu that has been spreading over the last few years has been responsible for the deaths of millions of birds and tens of thousands of marine and land mammals. But infections in dairy cattle, first reported back in March, brought us a step closer to human spread. Since then, the situation has only deteriorated. The virus appears to have passed from cattle to poultry on multiple occasions. “If that virus sustains in dairy cattle, they will have a problem in their poultry forever,” says Thomas Peacock, a virologist at the Pirbright Institute in Woking, UK.

Worse, this form of bird flu that is now spreading among cattle could find its way back into migrating birds. It might have happened already. If that’s the case, we can expect these birds to take the virus around the world.

“It’s really troubling that we’re not doing enough right now,” says Seema Lakdawala, a virologist at the Emory University School of Medicine in Atlanta, Georgia. “I am normally very moderate in terms of my pandemic-scaredness, but the introduction of this virus into cattle is really troubling.”

Not just a flu for birds

Bird flu is so named because it spreads stably in birds. The type of H5N1 that has been decimating bird populations for the last few years was first discovered in the late 1990s. But in 2020, H5N1 began to circulate in Europe “in a big way,” says Peacock. The virus spread globally, via migrating ducks, geese, and other waterfowl. In a process that took months and years, the virus made it to the Americas, Africa, Asia, and eventually even Antarctica, where it was detected earlier this year.

And while many ducks and geese seem to be able to survive being infected with the virus, other bird species are much more vulnerable. H5N1 is especially deadly for chickens, for example—their heads swell, they struggle to breathe, and they experience extreme diarrhea. Seabirds like puffins and guillemots also seem to be especially susceptible to the virus, although it’s not clear why. Over the last few years, we’ve seen the worst ever outbreak of bird flu in birds. Millions of farmed birds have died, and an unknown number of wild birds—in the tens of thousands at the very least—have also succumbed. “We have no idea how many just fell into the sea and were never seen again,” says Peacock.

Alarmingly, animals that hunt and scavenge affected birds have also become infected with the virus. The list of affected mammals includes bears, foxes, skunks, otters, dolphins, whales, sea lions, and many more. Some of these animals appear to be able to pass the virus to other members of their species. In 2022, an outbreak of H5N1 in sea lions that started in Chile spread to Argentina and eventually to Uruguay and Brazil. At least 30,000 died. The sea lions may also have passed the virus to nearby elephant seals in Argentina, around 17,000 of which have succumbed to the virus.

This is bad news—not just for the affected animals, but for people, too. It’s not just a bird flu anymore. And when a virus can spread in other mammals, it’s a step closer to being able to spread in humans. That is even more likely when the virus spreads in an animal that people tend to spend a lot of time interacting with.

This is partly why the virus’s spread in dairy cattle is so troubling. The form of the virus that is spreading in cows is slightly different from the one that had been circulating in migrating birds, says Lakdawala. The mutations in this virus have likely enabled it to spread more easily among the animals.

Evidence suggests that the virus is spreading through the use of shared milking machinery within cattle herds. Infected milk can contaminate the equipment, allowing the virus to infect the udder of another cow. The virus is also spreading between herds, possibly by hitching a ride on people who work on multiple farms, or via other animals, or potentially via airborne droplets.

Milk from infected cows can look thickened and yogurt-like, and farmers tend to pour it down drains. This ends up irrigating farms, says Lakdawala. “Unless the virus is inactivated, it just remains infectious in the environment,” she says. Other animals could be exposed to the virus this way.

Hidden infections

So far, 14 states have reported a total of 208 infected cattle herds. Some states have reported only one or two cases among their cattle. But this is extremely unlikely to represent the full picture, given how rapidly the virus is spreading among herds in states that are doing more testing, says Peacock. In Colorado, where state-licensed dairy farms that sell pasteurized milk are required to submit milk samples for weekly testing, 64 herds have been reported to be affected. Neighboring Wyoming, which does not have the same requirements, has reported only one affected herd.

We don’t have a good idea of how many people have been infected either, says Lakdawala. The official count from the CDC is 14 people since April 2024, but testing is not routine, and because symptoms are currently fairly mild in people, we’re likely to be missing a lot of cases.

“It’s very frustrating, because there are just huge gaps in the data that’s coming out,” says Peacock. “I don’t think it’s unfair to say that a lot of outside observers don’t think this outbreak is being taken particularly seriously.”

And the virus is already spreading from cows back into wild birds and poultry, says Lakdawala: “There is definitely a concern that the virus is going to [become more widespread] in birds and cattle … but also other animals that ruminate, like goats.”

It may already be too late to rid America’s cattle herds of the bird flu virus. If it continues to circulate, it could become stable in the population. This is what has happened with flu in pigs around the world. That could also spell disaster—not only would the virus represent a constant risk to humans and other animals that come into contact with the cows, but it could also evolve over time. We can’t predict how this evolution might take shape, but there’s a chance the result could be a form of the virus that is better at spreading in people or causing fatal infections.

So far, the virus has clearly mutated, but it hasn’t yet acquired any mutations of that more dangerous kind, says Michael Tisza, a bioinformatics scientist at Baylor College of Medicine in Houston. That being said, Tisza and his colleagues have been looking for the virus in wastewater from 10 cities in Texas—and they have found H5N1 in all of them.

Tisza and his colleagues don’t know where this virus is coming from—whether it’s coming from birds, milk, or infected people, for example. But the team didn’t find any signal of the virus in wastewater during 2022 or 2023, when there were outbreaks in migratory birds and poultry. “In 2024, it’s been a different story,” says Tisza. “We’ve seen it a lot.”

Together, the evidence that the virus is evolving and spreading among mammals, and specifically cattle, has put virologists on high alert. “This virus is not causing a human pandemic right now, which is great,” says Tisza. “But it is a virus of pandemic potential.”

Neuroscientists and architects are using this enormous laboratory to make buildings better

Have you ever found yourself lost in a building that felt impossible to navigate? Thoughtful building design should center on the people who will be using those buildings. But that’s no mean feat.

It’s not just about navigation, either. Just think of an office that left you feeling sleepy or unproductive, or perhaps a health center that had a less-than-reviving atmosphere. A design that works for some people might not work for others. People have different minds and bodies, and varying wants and needs. So how can we factor them all in?

To answer that question, neuroscientists and architects are joining forces at an enormous laboratory in East London—one that allows researchers to build simulated worlds. In this lab, scientists can control light, temperature, and sound. They can create the illusion of a foggy night, or the tinkle of morning birdsong.

And they can study how volunteers respond to these environments, whether they be simulations of grocery stores, hospitals, pedestrian crossings, or schools. That’s how I found myself wandering around a fake art gallery, wearing a modified baseball cap with a sensor that tracked my movements.

I first visited the Person-Environment-Activity Research Lab, referred to as PEARL, back in July. I’d been chatting to Hugo Spiers, a neuroscientist based at University College London, about the use of video games to study how people navigate. Spiers had told me he was working on another project: exploring how people navigate a lifelike environment, and how they respond during evacuations (which, depending on the situation, could be a matter of life or death).

For their research, Spiers and his colleagues set up what they call a “mocked-up art gallery” within PEARL. The center in its entirety is pretty huge as labs go, measuring around 100 meters in length and 40 meters across, with 10-meter-high ceilings in places. There’s no other research center in the world like this, Spiers told me.

The gallery setup looked a little like a maze from above, with a pathway created out of hanging black sheets. The exhibits themselves were videos of dramatic artworks that had been created by UCL students.

When I visited in July, Spiers and his colleagues were running a small pilot study to trial their setup. As a volunteer participant, I was handed a numbered black cap with a square board on top, marked with a large QR code. This code would be tracked by cameras above and around the gallery. The cap also carried a sensor, transmitting radio signals to devices around the maze that could pinpoint my location within a range of 15 centimeters.
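The article doesn’t describe how the lab’s positioning system works internally, but the general idea—fixed receivers estimating a moving tag’s location from its distance to each of them—can be sketched as a simple trilateration calculation. Everything below (the anchor layout, the distances, the least-squares approach) is an invented illustration, not the lab’s actual system.

```python
# Hypothetical sketch: estimate a tag's 2D position from its distances
# to fixed "anchor" receivers, by linearizing the circle equations and
# solving with least squares. All coordinates here are made up.
import numpy as np

def locate(anchors, distances):
    """Estimate (x, y) from distances to known anchor points."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # |p - a_i|^2 = d_i^2; subtracting the first anchor's equation
    # removes the quadratic |p|^2 term and leaves a linear system.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three receivers at the corners of a 10 m x 10 m area, tag at (3, 4).
anchors = np.array([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(locate(anchors, dists))  # recovers approximately [3. 4.]
```

Real systems work with noisy distance estimates from many receivers, which is why a least-squares fit (rather than intersecting exact circles) is the natural formulation.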

At first, all the volunteers (most of whom seemed to be students) were asked to explore the gallery as we would any other. I meandered around, watching the videos, and eavesdropping on the other volunteers, who were chatting about their research and upcoming dissertation deadlines. It all felt pretty pleasant and calm.

That feeling dissipated in the second part of the experiment, when we were each given a list of numbers, told that each one referred to a numbered screen, and informed that we had to visit all the screens in the order in which they appeared on our lists. “Good luck, everybody,” Spiers said.

Suddenly everyone seemed to be rushing around, slipping past each other and trying to move quickly while avoiding collisions. “It’s all got a bit frantic, hasn’t it?” I heard one volunteer comment as I accidentally bumped into another. I hadn’t managed to complete the task by the time Spiers told us the experiment was over. As I walked to the exit, I noticed that some people were visibly out of breath.

The full study took place on Wednesday, September 11. This time, there were around 100 volunteers (I wasn’t one of them). And while almost everyone was wearing a modified baseball cap, some had more complicated gear, including EEG caps to measure brainwaves, or caps that use near-infrared spectroscopy to measure blood flow in the brain. Some people were even wearing eye-tracking devices that monitored which direction they were looking.

“We will do something quite remarkable today,” Spiers told the volunteers, staff, and observers as the experiment started. Taking such detailed measurements from so many individuals in such a setting represented “a world first,” he said.

I have to say that being an observer was much more fun than being a participant. Gone was the stress of remembering instructions and speeding around a maze. Here in my seat, I could watch as the data collected from the cameras and sensors was projected onto a screen. The volunteers, represented as squiggly colored lines, made their way through the gallery in a way that reminded me of the game Snake.

The study itself was similar to the pilot study, although this time the volunteers were given additional tasks. At one point, they were given an envelope with the name of a town or city in it, and asked to find others in the group who had been given the same one. It was fascinating to see the groups form. Some had the names of destination cities like Bangkok, while others had been assigned fairly nondescript English towns like Slough, made famous as the setting of the British television series The Office. At another point, the volunteers were asked to evacuate the gallery from the nearest exit.

The data collected in this study represents something of a treasure trove for researchers like Spiers and his colleagues. The team is hoping to learn more about how people navigate a space, and whether they move differently if they are alone or in a group. How do friends and strangers interact, and does this depend on whether they have certain types of material to bond over? How do people respond to evacuations—will they take the nearest exit as directed, or will they run on autopilot to the exit they used to enter the space in the first place?

All this information is valuable to neuroscientists like Spiers, but it’s also useful to architects like his colleague Fiona Zisch, who is based at UCL’s Bartlett School of Architecture. “We do really care about how people feel about the places we design for them,” Zisch tells me. The findings can guide not only the construction of new buildings, but also efforts to modify and redesign existing ones.

PEARL was built in 2021 and has already been used to help engineers, scientists, and architects explore how neurodivergent people use grocery stores, and the ideal lighting to use for pedestrian crossings, for example. Zisch herself is passionate about creating equitable spaces—particularly for health and education—that everyone can make use of in the best possible way.

In the past, models used in architecture have been developed with typically built, able-bodied men in mind. “But not everyone is a 6’2″ male with a briefcase,” Zisch tells me. Age, gender, height, and a range of physical and psychological factors can all influence how a person will use a building. “We want to improve not just the space, but the experience of the space,” says Zisch. Good architecture isn’t just about creating stunning features; it’s about subtle adaptations that might not even be noticeable to most people, she says.

The art gallery study is just the first step for researchers like Zisch and Spiers, who plan to explore other aspects of neuroscience and architecture in more simulated environments at PEARL. The team won’t have results for a while yet. But it’s a fascinating start. Watch this space.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Brain-monitoring technology has come a long way, and tech designed to read our minds and probe our memories is already being used. Futurist and legal ethicist Nita Farahany explained why we need laws to protect our cognitive liberty in a previous edition of The Checkup.

Listening in on the brain can reveal surprising insights into how this mysterious organ works. One team of neuroscientists found that our brains seem to oscillate between states of order and chaos.

Last year, MIT Technology Review published our design issue of the magazine. If you’re curious, this piece on the history and future of the word “design,” by Nicholas de Monchaux, head of architecture at MIT, might be a good place to start.

Design covers much more than buildings, of course. Designers are creating new ways for users of prosthetic devices to feel more comfortable in their own skin—some of which have third thumbs, spikes, or “superhero skins.”

Achim Menges is an architect creating what he calls “self-shaping” structures with wood, which can twist and curve with changes in humidity. His approach is a low-energy way to make complex curved architectures, Menges told John Wiegand.

From around the web

Scientists are meant to destroy research samples of the poliovirus, as part of efforts to eradicate the disease it causes. But lab leaks of the virus may be more common than we’d like to think. (Science)

Neurofeedback allows people to watch their own brain activity in real time, and learn to control it. It could be a useful way to combat the impacts of stress. (Trends in Neurosciences)

Microbes, some of which cause disease in people, can travel over a thousand miles on wind, researchers have shown. Some appear to be able to survive their journey. (The Guardian)

Is the X chromosome involved in Alzheimer’s disease? A study of over a million people suggests so. (JAMA Neurology)

A growing number of men are paying thousands of dollars a year for testosterone therapies that are meant to improve their physical performance. But some are left with enlarged breasts, shrunken testicles, blood clots, and infertility. (The Wall Street Journal)

Maybe you will be able to live past 122

The UK’s Office for National Statistics has an online life expectancy calculator. Enter your age and sex, and the website will, using national averages, spit out the age at which you can expect to pop your clogs. For me, that figure comes out at 88 years old.

That’s not too bad, I figure, given that globally, life expectancy is around 73. But I’m also aware that this is a lowball figure for many in the longevity movement, which has surged in recent years. When I interview a scientist, doctor, or investor in the field, I always like to ask about personal goals. I’ve heard all sorts. Some have told me they want an extra decade of healthy life. Many want to get to 120, close to the current known limit of human age. Others have told me they want to stick around until they’re 200. And some have told me they don’t want to put a number on it; they just want to live for as long as they possibly can—potentially indefinitely.

How far can they go? This is a good time to ask the question. The longevity scene is having a moment, thanks to a combination of scientific advances, public interest, and an unprecedented level of investment. A few key areas of research suggest that we might be able to push human life spans further, and potentially reverse at least some signs of aging.

Take, for example, the concept of cellular reprogramming. Nobel Prize–winning research has shown it is possible to return adult cells to a “younger” state more like that of a stem cell. Billions of dollars have been poured into trying to transform this discovery into a therapy that could wind back the age of a person’s cells and tissues, potentially restoring some elements of youth.

Many other avenues are being explored, including a diabetes drug that could have broad health benefits; drugs based on a potential anti-aging compound discovered in the soil of Rapa Nui (Easter Island); attempts to rejuvenate the immune system; gene therapies designed to boost muscle or extend the number of times our cells can divide; and many, many more. Other researchers are pursuing ways to clear out the aged, worn-out cells in our bodies. These senescent cells appear to pump out chemicals that harm the surrounding tissues. Around eight years ago, scientists found that mice cleared of senescent cells lived 25% longer than untreated ones. They also had healthier hearts and took much longer to develop age-related diseases like cancer and cataracts. They even looked younger.

Unfortunately, human trials of senolytics—drugs that target senescent cells—haven’t been quite as successful. Unity Biotechnology, a company cofounded by leading researchers in the field, tested such a drug in people with osteoarthritis. In 2020, the company officially abandoned that drug after it was found to be no better than a placebo in treating the condition.

That doesn’t mean we won’t one day figure out how to treat age-related diseases, or even aging itself, by targeting senescent cells. But it does illustrate how complicated the biology of aging is. Researchers can’t even agree on what the exact mechanisms of aging are and which they should be targeting. Debates continue to rage over how long it’s possible for humans to live—and whether there is a limit at all.

Still, we are getting better at testing potential therapies in more humanlike models. We’re finding new and improved ways to measure the aging process itself. The X Prize is offering $101 million to researchers who find a way to restore at least 10 years of “muscle, cognitive, and immune function” in 65- to 80-year-olds with a treatment that takes one year or less to administer. Given that the competition runs for seven years, it’s a tall order; Jamie Justice, executive director of the X Prize’s health-span domain, told me she initially pushed back on the challenging goal and told the organization’s founder, Peter Diamandis, there was “no way” researchers could achieve it. But we’ve seen stranger things in science.

Some people are banking on this kind of progress. Not just the billionaires who have already spent millions of dollars and a significant chunk of their time on strategies that might help them defy aging, but also the people who have opted for cryopreservation. There are hundreds of bodies in storage—bodies of people who believed they might one day be reanimated. For them, the hopes are slim. I asked Justice whether she thought they stood a chance at a second life. “Honest answer?” she said. “No.”

It looks likely that something will be developed in the coming decades that will help us live longer, in better health. Not an elixir for eternal life, but perhaps something—or a few somethings—that can help us stave off some of the age-related diseases that tend to kill a lot of us. Such therapies may well push life expectancy up. I don’t feel we need a massive increase, but perhaps I’ll feel differently when I’m approaching 88.

The ONS website gives me a one in four chance of making it to 96, and a one in 10 chance of seeing my 100th birthday. To me, that sounds like an impressive number—as long as I get there in semi-decent health.
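Figures like these come from survival curves: the calculator divides the fraction of a cohort expected to be alive at the target age by the fraction alive at your current age. The sketch below shows that arithmetic with a toy survival curve; the numbers are invented for illustration and are not ONS data.

```python
# Toy illustration of how a life-expectancy calculator turns a survival
# curve into odds like "one in four chance of reaching 96". The curve
# below is made up; real calculators use national life tables.

def chance_of_reaching(target_age, current_age, survival):
    """P(live to target_age | alive at current_age)."""
    return survival[target_age] / survival[current_age]

# Fraction of a hypothetical birth cohort still alive at each age.
toy_survival = {35: 0.97, 88: 0.50, 96: 0.24, 100: 0.10}

p96 = chance_of_reaching(96, 35, toy_survival)    # roughly one in four
p100 = chance_of_reaching(100, 35, toy_survival)  # roughly one in ten
```

The key point is that the odds are conditional: having already reached your current age rules out the cohort members who died younger, which is why the chance of hitting 96 looks better at 35 than it did at birth.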

I’d still be a long way from the current record of 122 years. But it might just be that there are some limitations we must simply come to terms with—as individuals and in society at large. In a 2017 paper making the case for a limit to the human life span, scientists Jan Vijg and Eric Le Bourg wrote something that has stuck with me—and is worth bearing in mind when considering the future of human longevity: “A species does not need to live for eternity to thrive.” 

Tech that measures our brainwaves is 100 years old. How will we be using it 100 years from now?

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week, we’re acknowledging a special birthday. It’s 100 years since EEG (electroencephalography) was first used to measure electrical activity in a person’s brain. The finding was revolutionary. It helped people understand that epilepsy was a neurological disorder as opposed to a personality trait, for one thing (yes, really).

The fundamentals of EEG have not changed much over the last century—scientists and doctors still put electrodes on people’s heads to try to work out what’s going on inside their brains. But we’ve been able to do a lot more with the information that’s collected.

We’ve been able to use EEG to learn more about how we think, remember, and solve problems. EEG has been used to diagnose brain and hearing disorders, explore how conscious a person might be, and even allow people to control devices like computers, wheelchairs, and drones.

But an anniversary is a good time to think about the future. You might have noticed that my colleagues and I are currently celebrating 125 years of MIT Technology Review by pondering the technologies the next 125 years might bring. What will EEG allow us to do 100 years from now?

First, a quick overview of what EEG is and how it works. EEG involves placing electrodes on the top of someone’s head, collecting electrical signals from brainwaves, and feeding these to a computer for analysis. Today’s devices often resemble swimming caps. They’re very cheap compared with other types of brain imaging technologies, such as fMRI scanners, and they’re pretty small and portable.
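The “feeding these to a computer for analysis” step usually means looking at how much power the signal carries in particular frequency bands—the 8–12 Hz alpha rhythm being the one Hans Berger first described. Here is a minimal sketch of that analysis on a synthetic trace; the sampling rate, signal, and band edges are illustrative choices, not a clinical recipe.

```python
# Minimal sketch of EEG band-power analysis on a synthetic signal:
# a 10 Hz "alpha rhythm" buried in noise, recovered via the FFT.
import numpy as np

rng = np.random.default_rng(0)
fs = 256                        # sampling rate in Hz (typical for EEG)
t = np.arange(0, 4, 1 / fs)     # 4 seconds of samples
# Synthetic "recording": a 10 Hz sine plus Gaussian noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum from the real-input FFT.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

# Compare average power in the alpha band with the higher-frequency background.
alpha = spectrum[(freqs >= 8) & (freqs <= 12)].mean()
background = spectrum[(freqs > 12) & (freqs <= 40)].mean()
assert alpha > background   # the alpha rhythm dominates this synthetic trace
```

Real EEG analysis adds filtering, artifact rejection, and per-electrode processing on top of this, but the core move—from voltage over time to power per frequency band—is the same.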

The first person to use EEG in people was Hans Berger, a German psychiatrist who was fascinated by the idea of telepathy. Berger developed EEG as a tool to measure “psychic energy,” and he carried out his early research—much of it on his teenage son—in secret, says Faisal Mushtaq, a cognitive neuroscientist at the University of Leeds in the UK. Berger was, and remains, a controversial figure owing to his unclear links with the Nazi regime, Mushtaq tells me.

But EEG went on to take the neuroscience world by storm. It has become a staple of neuroscience labs, where it can be used on people of all ages, even newborns. Neuroscientists use EEG to explore how babies learn and think, and even what makes them laugh. In my own reporting, I’ve covered the use of EEG to understand the phenomenon of lucid dreaming, to reveal how our memories are filed away during sleep, and to allow people to turn on the TV by thought alone.   

EEG can also serve as a portal into the minds of people who are otherwise unable to communicate. It has been used to find signs of consciousness in people with unresponsive wakefulness syndrome (previously called a “vegetative state”). The technology has also allowed people paralyzed with amyotrophic lateral sclerosis (ALS) to communicate by thought and tell their family members they are happy.

So where do we go from here? Mushtaq, along with Pedro Valdes-Sosa at the University of Electronic Science and Technology of China in Chengdu and their colleagues, put the question to 500 people who work with EEG, including neuroscientists, clinical neurophysiologists, and brain surgeons. Specifically, with the help of ChatGPT, the team generated a list of predictions, which ranged from the very likely to the somewhat fanciful. Each of the 500 survey respondents was asked to estimate when, if at all, each prediction might pan out.

Some of the soonest breakthroughs will be in sleep analysis, according to the respondents. EEG is already used to diagnose and monitor sleep disorders—but this is set to become routine practice in the next decade. Consumer EEG is also likely to take off in the near future, potentially giving many of us the opportunity to learn more about our own brain activity, and how it corresponds with our well-being. “Perhaps it’s integrated into a sort of baseball cap that you wear as you walk around, and it’s connected to your smartphone,” says Mushtaq. EEG caps like these have already been trialed on employees in China and used to monitor fatigue in truck drivers and mining workers, for example.

For the time being, EEG communication is limited to the lab or hospital, where studies focus on the technology’s potential to help people who are paralyzed, or who have disorders of consciousness. But that is likely to change in the coming years, once more clinical trials have been completed. Survey respondents think that EEG could become a primary tool of communication for individuals like these in the next 20 years or so.

At the other end of the scale is what Mushtaq calls the “more fanciful” application—the idea of using EEG to read people’s thoughts, memories, and even dreams.

Mushtaq thinks this is a “relatively crazy” prediction—one that’s a long, long way from coming to pass considering we don’t yet have a clear picture of how and where our memories are formed. But it’s not completely science fiction, and some respondents predict the technology could be with us in around 60 years.

Artificial intelligence will probably help neuroscientists squeeze more information from EEG recordings by identifying hidden patterns in brain activity. And it is already being used to turn a person’s thoughts into written words, albeit with limited accuracy. “We’re on the precipice of this AI revolution,” says Mushtaq.

These kinds of advances will raise questions over our right to mental privacy and how we can protect our thoughts. I talked this over with Nita Farahany, a futurist and legal ethicist at Duke University in Durham, North Carolina, last year. She told me that while brain data itself is not thought, it can be used to make inferences about what a person is thinking or feeling. “The only person who has access to your brain data right now is you, and it is only analyzed in the internal software of your mind,” she said. “But once you put a device on your head … you’re immediately sharing that data with whoever the device manufacturer is, and whoever is offering the platform.”

Valdes-Sosa is optimistic about the future of EEG. Its low cost, portability, and ease of use make the technology a prime candidate for use in poor countries with limited resources, he says; he has been using it in his research since 1969. (You can see what his setup looked like in 1970 in the image below!) EEG should be used to monitor and improve brain health around the world, he says: “It’s difficult … but I think it could happen in the future.”

[Photo from the 1970s: two medical professionals at an EEG machine. Credit: Pedro Valdes-Sosa]

Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

You can read the full interview with Nita Farahany, in which she describes some decidedly creepy uses of brain data, here.

Ross Compton’s heart data was used against him when he was accused of burning down his home in Ohio in 2016. Brain data could be used in a similar way. One person has already had to hand over recordings from a brain implant to law enforcement officials after being accused of assaulting a police officer. (It turned out that person was actually having a seizure at the time.) I looked at some of the other ways your brain data could be used against you in a previous edition of The Checkup.

Teeny-tiny versions of EEG caps have been used to measure electrical activity in brain organoids (clumps of neurons that are meant to represent a full brain), as my colleague Rhiannon Williams reported a couple of years ago.

EEG has also been used to create a “brain-to-brain” network that allows three people to collaborate on a game of Tetris by thought alone.

Some neuroscientists are using EEG to search for signs of consciousness in people who seem completely unresponsive. One team found such signs in a 21-year-old woman who had experienced a traumatic brain injury. “Every clinical diagnostic test, experimental and established, showed no signs of consciousness,” her neurophysiologist told MIT Technology Review. After a test that involved EEG found signs of consciousness, the neurophysiologist told rehabilitation staff to “search everywhere and find her!” They did, about a month later. With physical and drug therapy, she learned to move her fingers to answer simple questions.

From around the web

Food waste is a problem. This Japanese company is fermenting it to create sustainable animal feed. In case you were wondering, the food processing plant smells like a smoothie, and the feed itself tastes like sour yogurt. (BBC Future)

The pharmaceutical company Gilead Sciences is accused of “patent hopping”—having dragged its feet to bring a safer HIV treatment to market while thousands of people took a harmful one. The company should be held accountable, argues a cofounder of PrEP4All, an advocacy organization promoting a national HIV prevention plan. (STAT)

Anti-suicide nets under San Francisco’s Golden Gate Bridge are already saving lives, perhaps by acting as a deterrent. (The San Francisco Standard)

Genetic screening of newborn babies could help identify treatable diseases early in life. Should every baby be screened as part of a national program? (Nature Medicine)

Is “race science”—which, it’s worth pointing out, is nothing but pseudoscience—on the rise, again? The far right’s references to race and IQ make it seem that way. (The Atlantic)

As part of our upcoming magazine issue celebrating 125 years of MIT Technology Review and looking ahead to the next 125, my colleague Antonio Regalado explores how the gene-editing tool CRISPR might influence the future of human evolution. (MIT Technology Review)

Aging hits us in our 40s and 60s. But well-being doesn’t have to fall off a cliff.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week I came across research that suggests aging hits us in waves. You might feel like you’re on a slow, gradual decline, but, at the molecular level, you’re likely to be hit by two waves of changes, according to the scientists behind the work. The first one comes in your 40s. Eek.

For the study, Michael Snyder at Stanford University and his colleagues collected a vast amount of biological data from 108 volunteers aged 25 to 75, all of whom were living in California. Their approach was to gather as much information as they could and look for age-related patterns afterward.

This approach can lead to some startling revelations, including the one about the impacts of age on 40-year-olds (who, I was horrified to learn this week, are generally considered “middle-aged”). It can help us answer some big questions about aging, and even potentially help us find drugs to counter some of the most unpleasant aspects of the process.

But it’s not as simple as it sounds. And midlife needn’t involve falling off a cliff in terms of your well-being. Let’s explore why.

First, the study, which was published in the journal Nature Aging on August 14. Snyder and his colleagues collected a real trove of data on their volunteers, including on gene expression, proteins, metabolites, and various other chemical markers. The team also swabbed volunteers’ skin, stool, mouths, and noses to get an idea of the microbial communities that might be living there.

Each volunteer gave up these samples every few months for a median period of 1.7 years, and the team ended up with a total of 5,405 samples, which included over 135,000 biological features. “The idea is to get a very complete picture of people’s health,” says Snyder.

When he and his colleagues analyzed the data, they found that around 7% of the molecules and microbes measured changed gradually over time, in a linear way. On the other hand, 81% of them changed at specific life stages. Two of these stages seem to be particularly important: one at around the age of 44, and another at around the age of 60.

Some of the dramatic changes at age 60 seem to be linked to kidney and heart function, and diseases like atherosclerosis, which narrows the arteries. That makes sense, given that our risks of developing cardiovascular diseases increase dramatically as we age—around 40% of 40- to 59-year-olds have such disorders, and this figure rises to 75% for 60- to 79-year-olds.

But the changes that occur around the age of 40 came as a surprise to Snyder, though he says that, on reflection, they make intuitive sense. Many of us start to feel a bit creakier once we hit 40, and it can take longer to recover from injuries, for example.

Other changes suggest that our ability to metabolize lipids and alcohol shifts when we reach our 40s, though it’s hard to say why, for a few reasons. 

First, it’s not clear if a change in alcohol metabolism, for example, means that we are less able to break down alcohol, or if people are just consuming less of it when they’re older.

This gets us to a central question about aging: Is it an inbuilt program that sets us on a course of deterioration, or is it merely a consequence of living?

We don’t have an answer to that one, yet. It’s probably a combination of both. Our bodies are exposed to various environmental stressors over time. But also, as our cells age, they become less able to divide and to clear out the molecular garbage they accumulate.

It’s also hard to tell what’s happening in this study, because the research team didn’t measure more physiological markers of aging, such as muscle strength or frailty, says Colin Selman, a biogerontologist at the University of Glasgow in Scotland.

There’s another, perhaps less scientific, question that comes to mind. How worried should we be about these kinds of molecular changes? I’m approaching 40—should I panic? I asked Sara Hägg, who studies the molecular epidemiology of aging at the Karolinska Institute in Stockholm, Sweden. “No,” was her immediate answer.

While Snyder’s team collected a vast amount of data, it was from a relatively small number of people over a relatively short period of time. None of them were tracked for the two or three decades you’d need to see the two waves of molecular changes occur in a person.

“This is an observational study, and they compare different people,” Hägg told me. “There is absolutely no evidence that this is going to happen to you.” After all, there’s a lot that can happen in a person’s life over 20 or 30 years. They might take up a sport. They might quit smoking or stop eating meat.  

However, the findings do support the idea that aging is not a linear process.

“People have always suggested that you’re on this decline in your life from [around the age of] 40, depressingly,” says Selman. “But it’s not quite as simple as that.”

Snyder hopes that studies like his will help reveal potential new targets for therapies that help counteract some of the harmful molecular shifts associated with aging. “People’s healthspan is 11 to 15 years shorter than their lifespan,” he says. “Ideally you’d want to live for as long as possible [in good health], and then die.”

We don’t have any such drugs yet. For now, it all comes down to the age-old advice about eating well, sleeping well, getting enough exercise, and avoiding the big no-nos like smoking and alcohol.

I happened to speak to Selman at the end of what had been a particularly difficult day, and I confessed that I was looking forward to enjoying an evening glass of wine. That’s despite the fact that research suggests that there is “no safe level” of alcohol consumption.

“A little bit of alcohol is actually quite nice,” Selman agreed. He told me about an experience he’d had once at a conference on aging. Some of the attendees were members of a society that practiced caloric restriction—the idea being that cutting your calories can boost your lifespan (we don’t yet know if this works for people). “There was a big banquet… and these people all had little scales, and were weighing their salads on the scales,” he told me. “To me, that seems like a rather miserable way to live your life.”

I’m all for finding balance between healthy lifestyle choices and those that bring me joy. And it’s worth remembering that no amount of deprivation is going to radically extend our lifespans. As Selman puts it: “We can do certain things, but ultimately, when your time’s up, your time’s up.”


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

We don’t yet have a drug that targets aging. But that hasn’t stopped a bunch of longevity clinics from cropping up, offering a range of purported healthspan-extending services for the mega-rich. Now, they’re on a quest to legitimize longevity medicine.

Speaking of the uber wealthy, I also tagged along to an event for longevity enthusiasts ready to pump millions of dollars into the search for an anti-aging therapy. It was a fascinating, albeit slightly strange, experience.

There are plenty of potential rejuvenation strategies being explored right now. But the one that has received some of the most attention—and the most investment—is cellular reprogramming. My colleague Antonio Regalado looked at the promise of the field in this feature.

Scientists are working on new ways to measure how old a person is. Not just the number of birthdays they’ve had, but how aged or close to death they are. I took one of these biological aging tests. And I wasn’t all that pleased with the result.

Is there a limit to human life? Is old age a disease? Find out in the Mortality issue of MIT Technology Review’s magazine. 

You can of course read all of these stories and many more on our new app, which can be downloaded here (for Android users) or here (for Apple users).

From around the web

Mpox, the disease that has been surging in the Democratic Republic of the Congo and nearby countries, now constitutes a public health emergency of international concern, according to the World Health Organization. 

“The detection and rapid spread of a new clade [subgroup] of mpox in Eastern DRC, its detection in neighboring countries that had not previously reported mpox, and the potential for further spread within Africa and beyond is very worrying,” WHO director general Tedros Adhanom Ghebreyesus said in a briefing shared on X. “It’s clear that a coordinated international response is essential to stop these outbreaks and save lives.” (WHO)

Prosthetic limbs are often branded with company logos. For users of the technology, it can feel like a tattoo you didn’t ask for. (The Atlantic)

A testing facility in India submitted fraudulent data for more than 400 drugs to the FDA. But these drugs have not been withdrawn from the US market. That needs to be remedied, says the founder and president of a nonprofit focused on researching drug side effects. (STAT)

Antibiotics can impact our gut microbiomes. But the antibiotics given to people who undergo c-sections don’t have much of an impact on the baby’s microbiome. The way the baby is fed seems to be much more influential. (Cell Host & Microbe)

When unexpected infectious diseases show up in people, it’s not just physicians that are crucial. Veterinarian “disease detectives” can play a vital role in tracking how infections pass from animals to people, and the other way around. (New Yorker)

Watch a video showing what happens in our brains when we think


What does a thought look like? Thoughts result from signals shared among some of the billions of neurons in our brains. Various chemicals are involved, but it really comes down to electrical activity. We can measure that activity and watch it back.

Earlier this week, I caught up with Ben Rapoport, the cofounder and chief science officer of Precision Neuroscience, a company doing just that. It is developing brain-computer interfaces that Rapoport hopes will one day help paralyzed people control computers and, as he puts it, “have a desk job.”

Rapoport and his colleagues have developed thin, flexible electrode arrays that can be slipped under the skull through a tiny incision. Once inside, they can sit on a person’s brain, collecting signals from neurons buzzing away beneath. So far, 17 people have had these electrodes placed onto their brains. And Rapoport has been able to capture how their brains form thoughts. He even has videos. (Keep reading to see one for yourself, below.)

Brain electrodes have been around for a while and are often used to treat disorders such as Parkinson’s disease and some severe cases of epilepsy. Those devices tend to involve sticking electrodes deep inside the brain to access regions involved in those disorders.

Brain-machine interfaces are newer. In the last couple of decades, neuroscientists and engineers have made significant progress in developing technologies that allow them to listen in on brain activity and use brain data to allow people to control computers and prosthetic limbs by thought alone.

The technology isn’t commonplace yet, and early versions could only be used in a lab setting. Scientists like Rapoport are working on new devices that are more effective, less invasive, and more practical. He and his colleagues have developed a miniature device that fits 1,024 tiny electrodes onto a sliver of ribbon-like film that’s just 20 microns thick—around a third of the width of a human eyelash.

The vast majority of these electrodes are designed to pick up brain activity. The device itself is designed to be powered by a rechargeable battery implanted under the skin in the chest, like a pacemaker. And from there, data could be transmitted wirelessly to a computer outside the body.

Unlike other needle-like electrodes that penetrate brain tissue, Rapoport says his electrode array “doesn’t damage the brain at all.” Instead of being inserted into brain tissue, the electrode arrays are arranged on a thin, flexible film, fed through a slit in the skull, and placed on the surface of the brain.

From there, they can record what the brain is doing when the person thinks. In one case, Rapoport’s team inserted their electrode array into the skull of a man who was undergoing brain surgery to treat a disease. He was kept awake during his operation so that surgeons could make sure they weren’t damaging any vital regions of his brain. And all the while, the electrodes were picking up the electrical signals from his neurons.

This is what the activity looked like:

“This is basically the brain thinking,” says Rapoport. “You’re seeing the physical manifestation of thought.”

In this video, which I’ve converted to a GIF, you can see the pattern of electrical activity in the man’s brain as he recites numbers. Each dot represents the voltage sensed by an electrode on the array on the man’s brain, over a region involved in speech. The reds and oranges represent higher voltages, while the blues and purples represent lower ones. The video has been slowed down 20-fold, because “thoughts happen faster than the eye can see,” says Rapoport.

This approach allows neuroscientists to visualize what happens in the brain when we speak—and when we plan to speak. “We can decode his intention to say a word even before he says it,” says Rapoport. That’s important—scientists hope such technologies will interpret these kinds of planning signals to help some individuals communicate.

For the time being, Rapoport and his colleagues are only testing their electrodes in volunteers who are already scheduled to have brain surgery. The electrodes are implanted, tested, and removed during a planned operation. The company announced in May that the team had broken a record for the greatest number of electrodes placed on a human brain at any one time—a whopping 4,096.

Rapoport hopes the US Food and Drug Administration will approve his device in the coming months. “That will unlock … what we hope will be a new standard of care,” he says.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

Precision Neuroscience is one of a handful of companies leading the search for a new brain-computer interface. Cassandra Willyard covered the key players in a recent edition of the Checkup.

Brain implants can do more than treat disease or aid communication. They can change a person’s sense of self. This was the case for Rita Leggett, who was devastated when her implant was removed against her will. I explored whether experiences like these should be considered a breach of human rights in a piece published last year.

Ian Burkhart, who was paralyzed as a result of a diving accident, received a brain implant when he was 24 years old. Burkhart learned to use the implant to control a robotic arm and even play Guitar Hero. But funding issues and an infection meant the implant had to be removed. “When I first had my spinal cord injury, everyone said: ‘You’re never going to be able to move anything from your shoulders down again,’” Burkhart told me last year. “I was able to restore that function, and then lose it again. That was really tough.”

A couple of years ago, a brain implant allowed a locked-in man to communicate in full sentences by thought alone—a world first, the researchers claimed. He used it to ask for soup and beer, and to tell his carers “I love my cool son.”

Electrodes that stimulate the brain could be used to improve a person’s memory. The “memory prosthesis,” which has been designed to mimic the way our brains create memories, appears to be most effective in people who have poor memories to begin with.

From around the web

Do you share DNA with Ludwig van Beethoven, or perhaps a Viking? Tests can reveal genetic links, but they are not always clear, and the connections are not always meaningful or informative. (Nature)

This week marks 79 years since the United States dropped atomic bombs on Hiroshima and Nagasaki. Survivors share their stories of what it’s like to live with the trauma, stigma, and survivor’s guilt caused by the bombs—and why weapons like these must never be used again. (New York Times)

At least 19 Olympic athletes have tested positive for covid-19 in the past two weeks. The rules allow them to compete regardless. (Scientific American)

Honey contains a treasure trove of biological information, including details about the plants that supplied the pollen and the animals and insects in the environment. It can even tell you something about the bees’ “micro-bee-ota.” (New Scientist)

A personalized AI tool might help some reach end-of-life decisions—but it won’t suit everyone


This week, I’ve been working on a piece about an AI-based tool that could help guide end-of-life care. We’re talking about the kinds of life-and-death decisions that come up for very unwell people: whether to perform chest compressions, for example, or start grueling therapies, or switch off life support.

Often, the patient isn’t able to make these decisions—instead, the task falls to a surrogate, usually a family member, who is asked to try to imagine what the patient might choose if able. It can be an extremely difficult and distressing experience.  

A group of ethicists have an idea for an AI tool that they believe could help make things easier. The tool would be trained on information about the person, drawn from things like emails, social media activity, and browsing history. And it could predict, from those factors, what the patient might choose. The team describe the tool, which has not yet been built, as a “digital psychological twin.”

There are lots of questions that need to be answered before we introduce anything like this into hospitals or care settings. We don’t know how accurate it would be, or how we can ensure it won’t be misused. But perhaps the biggest question is: Would anyone want to use it?

To answer this question, we first need to address who the tool is being designed for. The researchers behind the personalized patient preference predictor, or P4, had surrogates in mind—they want to make things easier for the people who make weighty decisions about the lives of their loved ones. But the tool is essentially being designed for patients. It will be based on patients’ data and aims to emulate these people and their wishes.

This is important. In the US, patient autonomy is king. Anyone who is making decisions on behalf of another person is asked to use “substituted judgment”—essentially, to make the choices that the patient would make if able. Clinical care is all about focusing on the wishes of the patient.

If that’s your priority, a tool like the P4 makes a lot of sense. Research suggests that even close family members aren’t great at guessing what type of care their loved ones might choose. If an AI tool is more accurate, it might be preferable to the opinions of a surrogate.

But while this line of thinking suits American sensibilities, it might not apply the same way in all cultures. In some cases, families might want to consider the impact of an individual’s end-of-life care on family members, or the family unit as a whole, rather than just the patient.

“I think sometimes accuracy is less important than surrogates,” Bryanna Moore, an ethicist at the University of Rochester in New York, told me. “They’re the ones who have to live with the decision.”

Moore has worked as a clinical ethicist in hospitals in both Australia and the US, and she says she has noticed a difference between the two countries. “In Australia there’s more of a focus on what would benefit the surrogates and the family,” she says. And that’s a distinction between two English-speaking countries that are somewhat culturally similar. We might see greater differences in other places.

Moore says her position is controversial. When I asked Georg Starke at the Swiss Federal Institute of Technology Lausanne for his opinion, he told me that, generally speaking, “the only thing that should matter is the will of the patient.” He worries that caregivers might opt to withdraw life support if the patient becomes too much of a “burden” on them. “That’s certainly something that I would find appalling,” he told me.

The way we weigh a patient’s own wishes and those of their family members might depend on the situation, says Vasiliki Rahimzadeh, a bioethicist at Baylor College of Medicine in Houston, Texas. Perhaps the opinions of surrogates might matter more when the case is more medically complex, or if medical interventions are likely to be futile.

Rahimzadeh has herself acted as a surrogate for two close members of her immediate family. She hadn’t had detailed discussions about end-of-life care with either of them before their crises struck, she told me.

Would a tool like the P4 have helped her through it? Rahimzadeh has her doubts. An AI trained on social media or internet search history couldn’t possibly have captured all the memories, experiences, and intimate relationships she had with her family members, which she felt put her in good stead to make decisions about their medical care.

“There are these lived experiences that are not well captured in these data footprints, but which have incredible and profound bearing on one’s actions and motivations and behaviors in the moment of making a decision like that,” she told me.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

You can read the full article about the P4, and its many potential benefits and flaws, here.

This isn’t the first time anyone has proposed using AI to make life-or-death decisions. Will Douglas Heaven wrote about a different kind of end-of-life AI—a technology that would allow users to end their own lives in a nitrogen-gas-filled pod, should they wish.

AI is infiltrating health care in lots of other ways. We shouldn’t let it make all the decisions—AI paternalism could put patient autonomy at risk, as we explored in a previous edition of The Checkup.

Technology that lets us speak to our dead relatives is already here, as my colleague Charlotte Jee found when she chatted with the digital replicas of her own parents.

What is death, anyway? Recent research suggests that “the line between life and death isn’t as clear as we once thought,” as Rachel Nuwer reported last year.

From around the web

When is someone deemed “too male” or “too female” to compete in the Olympics? A new podcast called Tested dives into the long, fascinating, and infuriating history of testing and excluding athletes on the basis of their gender and sex. (Sequencer)

There’s a dirty secret among Olympic swimmers: Everyone pees in the pool. “I’ve probably peed in every single pool I’ve swam in,” said Lilly King, a three-time Olympian for Team USA. “That’s just how it goes.” (Wall Street Journal)

When saxophonist Joey Berkley developed a movement disorder that made his hands twist into pretzel shapes, he volunteered for an experimental treatment that involved inserting an electrode deep into his brain. That was three years ago. Now he’s releasing a new suite about his experience, including a frenetic piece inspired by the surgery itself. (NPR)

After a case of mononucleosis, Jason Werbeloff started to see the people around him in an entirely new way—literally. He’s one of a small number of people for whom others’ faces morph into monstrous shapes, with bulging sides and stretching teeth, because of a rare condition called prosopometamorphopsia. (The New Yorker)

How young are you feeling today? Your answer might depend on how active you’ve been, and how sunny it is. (Innovation in Aging)

End-of-life decisions are difficult and distressing. Could AI help?

A few months ago, a woman in her mid-50s—let’s call her Sophie—experienced a hemorrhagic stroke. Her brain started to bleed. She underwent brain surgery, but her heart stopped beating.

Sophie’s ordeal left her with significant brain damage. She was unresponsive; she couldn’t squeeze her fingers or open her eyes when asked, and she didn’t flinch when her skin was pinched. She needed a tracheostomy tube in her neck to breathe and a feeding tube to deliver nutrition directly to her stomach, because she couldn’t swallow. Where should her medical care go from there?

This difficult question was left, as it usually is in these kinds of situations, to Sophie’s family members, recalls Holland Kaplan, an internal-medicine physician at Baylor College of Medicine who was involved in Sophie’s care. But the family couldn’t agree. Sophie’s daughter was adamant that her mother would want to stop having medical treatments and be left to die in peace. Another family member vehemently disagreed and insisted that Sophie was “a fighter.” The situation was distressing for everyone involved, including Sophie’s doctors.

End-of-life decisions can be extremely upsetting for surrogates, the people who have to make those calls on behalf of another person, says David Wendler, a bioethicist at the US National Institutes of Health. Wendler and his colleagues have been working on an idea for something that could make things easier: an artificial-intelligence-based tool that can help surrogates predict what patients themselves would want in any given situation.

The tool hasn’t been built yet. But Wendler plans to train it on a person’s own medical data, personal messages, and social media posts. He hopes it could not only be more accurate at working out what the patient would want, but also alleviate the stress and emotional burden of difficult decision-making for family members.

Wendler, along with bioethicist Brian Earp at the University of Oxford and their colleagues, hopes to start building the tool as soon as they secure funding for it, potentially in the coming months. But rolling it out won’t be simple. Critics wonder how such a tool can ethically be trained on a person’s data, and whether life-or-death decisions should ever be entrusted to AI.

Live or die

Around 34% of people in a medical setting are considered to be unable to make decisions about their own care for various reasons. They may be unconscious, for example, or unable to reason or communicate. This figure is higher among older individuals—one study of people over 60 in the US found that 70% of those faced with important decisions about their care lacked the capacity to make those decisions themselves. “It’s not just a lot of decisions—it’s a lot of really important decisions,” says Wendler. “The kinds of decisions that basically decide whether the person is going to live or die in the near future.”

Chest compressions administered to a failing heart might extend a person’s life. But the treatment might lead to a broken sternum and ribs, and by the time the person comes around—if ever—significant brain damage may have developed. Keeping the heart and lungs functioning with a machine might maintain a supply of oxygenated blood to the other organs—but recovery is no guarantee, and the person could develop numerous infections in the meantime. A terminally ill person might want to continue trying hospital-administered medications and procedures that could offer a few more weeks or months. But someone else might want to forgo those interventions and be more comfortable at home.

Only around one in three adults in the US completes any kind of advance directive—a legal document that specifies the end-of-life care they might want to receive. Wendler estimates that over 90% of end-of-life decisions end up being made by someone other than the patient. The role of a surrogate is to make that decision based on beliefs about how the patient would want to be treated. But people are generally not very good at making these kinds of predictions. Studies suggest that surrogates accurately predict a patient’s end-of-life decisions around 68% of the time.

The decisions themselves can also be extremely distressing, Wendler adds. While some surrogates feel a sense of satisfaction from having supported their loved ones, others struggle with the emotional burden and can feel guilty for months or even years afterwards. Some fear they ended the life of their loved ones too early. Others worry they unnecessarily prolonged their suffering. “It’s really bad for a lot of people,” says Wendler. “People will describe this as one of the worst things they’ve ever had to do.”

Wendler has been working on ways to help surrogates make these kinds of decisions. Over 10 years ago, he developed the idea for a tool that would predict a patient’s preferences on the basis of characteristics such as age, gender, and insurance status. That tool would have been based on a computer algorithm trained on survey results from the general population. It may seem crude, but these characteristics do seem to influence how people feel about medical care. A teenager is more likely to opt for aggressive treatment than a 90-year-old, for example. And research suggests that predictions based on averages can be more accurate than the guesses made by family members.

In 2007, Wendler and his colleagues built a “very basic,” preliminary version of this tool based on a small amount of data. That simplistic tool did “at least as well as next-of-kin surrogates” in predicting what kind of care people would want, says Wendler.

Now Wendler, Earp and their colleagues are working on a new idea. Instead of being based on crude characteristics, the new tool the researchers plan to build will be personalized. The team proposes using AI and machine learning to predict a patient’s treatment preferences on the basis of personal data such as medical history, along with emails, personal messages, web browsing history, social media posts, or even Facebook likes. The result would be a “digital psychological twin” of a person—a tool that doctors and family members could consult to guide a person’s medical care. It’s not yet clear what this would look like in practice, but the team hopes to build and test the tool before refining it.

The researchers call their tool a personalized patient preference predictor, or P4 for short. In theory, if it works as they hope, it could be more accurate than the previous version of the tool—and more accurate than human surrogates, says Wendler. It could be more reflective of a patient’s current thinking than an advance directive, which might have been signed a decade beforehand, says Earp.

A better bet?

A tool like the P4 could also help relieve the emotional burden surrogates feel in making such significant life-or-death decisions about their family members, which can sometimes leave people with symptoms of post-traumatic stress disorder, says Jennifer Blumenthal-Barby, a medical ethicist at Baylor College of Medicine in Texas.

Some surrogates experience “decisional paralysis” and might opt to use the tool to help steer them through a decision-making process, says Kaplan. In cases like these, the P4 could help ease some of the burden surrogates might be experiencing, without necessarily giving them a black-and-white answer. It might, for example, suggest that a person was “likely” or “unlikely” to feel a certain way about a treatment, or give a percentage score indicating how confident the prediction is.

Kaplan can imagine a tool like the P4 being helpful in cases like Sophie’s, where various family members might have different opinions on a person’s medical care. In those cases, the tool could be offered to these family members, ideally to help them reach a decision together.

It could also help guide decisions about care for people who don’t have surrogates. Kaplan is an internal-medicine physician at Ben Taub Hospital in Houston, a “safety net” hospital that treats patients whether or not they have health insurance. “A lot of our patients are undocumented, incarcerated, homeless,” she says. “We take care of patients who basically can’t get their care anywhere else.”

These patients are often in dire straits and at the end stages of diseases by the time Kaplan sees them. Many of them aren’t able to discuss their care, and some don’t have family members to speak on their behalf. Kaplan says she could imagine a tool like the P4 being used in situations like these, to give doctors a little more insight into what the patient might want. In such cases, it might be difficult to find the person’s social media profile, for example. But other information might prove useful. “If something turns out to be a predictor, I would want it in the model,” says Wendler. “If it turns out that people’s hair color or where they went to elementary school or the first letter of their last name turns out to [predict a person’s wishes], then I’d want to add them in.”

This approach is backed by preliminary research from Earp and his colleagues, who have started running surveys to find out how individuals might feel about using the P4. This research is ongoing, but early responses suggest that people would be willing to try the model if there were no human surrogates available. Earp says he feels the same way. He also says that if the P4 and a surrogate were to give different predictions, “I’d probably defer to the human that knows me, rather than the model.”

Not a human

Earp’s feelings betray a gut instinct many others will share: that these huge decisions should ideally be made by a human. “The question is: How do we want end-of-life decisions to be made, and by whom?” says Georg Starke, a researcher at the Swiss Federal Institute of Technology Lausanne. He worries about the risk of taking a techno-solutionist approach and turning intimate, complex, personal decisions into “an engineering issue.”

Bryanna Moore, an ethicist at the University of Rochester, says her first reaction to hearing about the P4 was: “Oh, no.” Moore is a clinical ethicist who offers consultations for patients, family members, and hospital staff at two hospitals. “So much of our work is really just sitting with people who are facing terrible decisions … they have no good options,” she says. “What surrogates really need is just for you to sit with them and hear their story and support them through active listening and validating [their] role … I don’t know how much of a need there is for something like this, to be honest.”

Moore accepts that surrogates won’t always get it right when deciding on the care of their loved ones. Even if we were able to ask the patients themselves, their answers would probably change over time. Moore calls this the “then self, now self” problem.

And she doesn’t think a tool like the P4 will necessarily solve it. Even if a person’s wishes were made clear in previous notes, messages, and social media posts, it can be very difficult to know how you’ll feel about a medical situation until you’re in it. Kaplan recalls treating an 80-year-old man with osteoporosis who had been adamant that he wanted to receive chest compressions if his heart were to stop beating. But when the moment arrived, his bones were too thin and brittle to withstand the compressions. Kaplan remembers hearing his bones cracking “like a toothpick,” and the man’s sternum detaching from his ribs. “And then it’s like, what are we doing? Who are we helping? Could anyone really want this?” says Kaplan.

There are other concerns. For a start, an AI trained on a person’s social media posts may not end up being all that much of a “psychological twin.” “Any of us who have a social media presence know that often what we put on our social media profile doesn’t really represent what we truly believe or value or want,” says Blumenthal-Barby. And even if it did, it’s hard to know how those posts would reflect our feelings about end-of-life care—many people find it hard enough to have these discussions with their family members, let alone on public platforms.

As things stand, AI doesn’t always do a great job of coming up with answers to human questions. Even subtly altering the prompt given to an AI model can leave you with an entirely different response. “Imagine this happening for a fine-tuned large language model that’s supposed to tell you what a patient wants at the end of their life,” says Starke. “That’s scary.”

On the other hand, humans are fallible, too. Vasiliki Rahimzadeh, a bioethicist at Baylor College of Medicine, thinks the P4 is a good idea, provided it is rigorously tested. “We shouldn’t hold these technologies to a higher standard than we hold ourselves,” she says.

Earp and Wendler acknowledge the challenges ahead of them. They hope the tool they build can capture useful information that might reflect a person’s wishes without violating privacy. They want it to be a helpful guide that patients and surrogates can choose to use, but not a default way to give black-and-white final answers on a person’s care.

Even if they do succeed on those fronts, they might not be able to control how such a tool is ultimately used. Take a case like Sophie’s, for example. If the P4 were used, its prediction might only serve to further fracture family relationships that are already under pressure. And if it is presented as the closest indicator of a patient’s own wishes, there’s a chance that a patient’s doctors might feel legally obliged to follow the output of the P4 over the opinions of family members, says Blumenthal-Barby. “That could just be very messy, and also very distressing, for the family members,” she says.

“What I’m most worried about is who controls it,” says Wendler. He fears that hospitals could misuse tools like the P4 to avoid undertaking costly procedures, for example. “There could be all kinds of financial incentives,” he says.

Everyone contacted by MIT Technology Review agrees that the use of a tool like the P4 should be optional, and that it won’t appeal to everyone. “I think it has the potential to be helpful for some people,” says Earp. “I think there are lots of people who will be uncomfortable with the idea that an artificial system should be involved in any way with their decision making with the stakes being what they are.”

How our genome is like a generative AI model

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

What does the genome do? You might have heard that it is a blueprint for an organism. Or that it’s a bit like a recipe. But building an organism is much more complex than constructing a house or baking a cake.

This week I came across an idea for a new way to think about the genome—one that borrows from the field of artificial intelligence. Two researchers are arguing that we should think about it as being more like a generative model, a form of AI that can generate new things.

You might be familiar with such AI tools—they’re the ones that can create text, images, or even films from various prompts. Do our genomes really work in the same way? It’s a fascinating idea. Let’s explore.

When I was at school, I was taught that the genome is essentially a code for an organism. It contains the instructions needed to make the various proteins we need to build our cells and tissues and keep them working. It made sense to me to think of the human genome as being something like a program for a human being.

But this metaphor falls apart once you start to poke at it, says Kevin Mitchell, a neurogeneticist at Trinity College in Dublin, Ireland, who has spent a lot of time thinking about how the genome works.

A computer program is essentially a sequence of steps, each of which controls a specific part of the process. If the genome worked like one, it would be as if we had a set of instructions to start by building a brain, then a head, then a neck, and so on. That’s just not how things work.

Another popular metaphor likens the genome to a blueprint for the body. But a blueprint is essentially a plan for what a structure should look like when it is fully built, with each part of the diagram representing a bit of the final product. Our genomes don’t work this way either.

It’s not as if you’ve got a gene for an elbow and a gene for an eyebrow. Multiple genes are involved in the development of multiple body parts. The functions of genes can overlap, and the same genes can work differently depending on when and where they are active. It’s far more complicated than a blueprint.

Then there’s the recipe metaphor. In some ways, this is more accurate than the analogy of a blueprint or program. It might be helpful to think about our genes as a set of ingredients and instructions, and to bear in mind that the final product is also at the mercy of variations in the temperature of the oven or the type of baking dish used, for example. Identical twins are born with the same DNA, after all, but they are often quite different by the time they’re adults.

But the recipe metaphor is too vague, says Mitchell. Instead, he and his colleague Nick Cheney at the University of Vermont are borrowing concepts from AI to capture what the genome does. Mitchell points to generative AI models like Midjourney and DALL-E, both of which can generate images from text prompts. These models work by capturing elements of existing images to create new ones.

Say you write a prompt for an image of a horse. The models have been trained on a huge number of images of horses, and these images are essentially compressed to allow the models to capture certain elements of what you might call “horsiness.” The AI can then construct a new image that contains these elements.

We can think about genetic data in a similar way. According to this model, we might consider evolution to be the training data. The genome is the compressed data—the set of information that can be used to create the new organism. It contains the elements we need, but there’s plenty of scope for variation. (There are lots more details about the various aspects of the model in the paper, which has not yet been peer-reviewed.)

Mitchell thinks it’s important to get our metaphors in order when we think about the genome. New technologies are allowing scientists to probe ever deeper into our genes and the roles they play. They can now study how all the genes are expressed in a single cell, for example, and how this varies across every cell in an embryo.

“We need to have a conceptual framework that will allow us to make sense of that,” says Mitchell. He hopes that the concept will aid the development of mathematical models that might help us better understand the intricate relationships between genes and the organisms they end up being part of—in other words, exactly how components of our genome contribute to our development.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

Last year, researchers built a new human genome reference designed to capture the diversity among us. They called it the “pangenome,” as Antonio Regalado reported.

Generative AI has taken the world by storm. Will Douglas Heaven explored six big questions that will determine the future of the technology.

A Disney director tried to use AI to generate a soundtrack in the style of Hans Zimmer. It wasn’t as good as the real thing, as Melissa Heikkilä found.

Melissa has also reported on how much energy it takes to create an image using generative AI. Turns out it’s about the same as charging your phone. 

What is AI? No one can agree, as Will found in his recent deep dive on the topic.

From around the web

Evidence from more than 1,400 rape cases in Maryland, some dating back to 1977, is set to be processed by the end of the year, thanks to a new law. The state still has more than 6,000 untested rape kits. (ProPublica)

How well is your brain aging? A new tool estimates a person’s brain age from an MRI scan and accounts for the possible effects of traumatic brain injuries. (NeuroImage)

Iran has reported the country’s first locally acquired cases of dengue, a viral infection spread by mosquitoes. There are concerns it could spread. (WHO)

IVF is expensive, and add-ons like endometrial scratching (which literally involves scratching the lining of the uterus) are not supported by strong evidence. Is the fertility industry profiting from vulnerability? (The Lancet)

Up to 2 million Americans are getting their supply of weight loss drugs like Wegovy or Zepbound from compounding pharmacies. They’re a fraction of the price of brand-name Big Pharma drugs, but there are some safety concerns. (KFF Health News)

Why we need safeguards against genetic discrimination

A couple of years ago, I spat into a little plastic tube, stuck it in the post, and waited for a company to analyze markers on my DNA to estimate how biologically old I am. It’s not the first time I’ve shared my genetic data for a story. Over a decade ago, I shared a DNA sample with a company that promised to tell me about my ancestry.

Of course, I’m not the only one. Tens of millions of people have shipped their DNA off to companies offering to reveal clues about their customers’ health or ancestry, or even to generate tailored diet or exercise advice. And then there are all the people who have had genetic tests as part of their clinical care, under a doctor’s supervision. Add it all together, and there’s a hell of a lot of genetic data out there.

It isn’t always clear how secure this data is, or who might end up getting their hands on it—and how that information might affect people’s lives. I don’t want my insurance provider or my employer to make decisions about my future on the basis of my genetic test results, for example. Scientists, ethicists, and legal scholars aren’t clear on the matter either: they are still getting to grips with what genetic discrimination entails—and how we can defend against it.

If we’re going to protect ourselves from genetic discrimination, we first have to figure out what it is. Unfortunately, no one has a good handle on how widespread it is, says Yann Joly, director of the Centre of Genomics and Policy at McGill University in Quebec. And that’s partly because scientists keep defining it in different ways. In a paper published last month, Joly and his colleagues listed 12 different definitions that have been used in various studies since the 1990s. So what is it?

“I see genetic discrimination as a child of eugenics practices,” says Joly. Modern eugenics, which took off in the late 19th century, was all about limiting the ability of some people to pass on their genes to future generations. Those who were considered “feeble minded” or “mentally defective” could be flung into institutions, isolated from the rest of the population, and forced or coerced into having procedures that left them unable to have children. Disturbingly, some of these practices have endured. Between the fiscal years 2005-2006 and 2012-2013, 144 women in California’s prisons were sterilized—many without informed consent.

These cases are thankfully rare. In recent years, ethicists and policymakers have been more worried about the potential misuse of genetic data by health-care and insurance providers. There have been instances in which people have been refused health insurance or life insurance on the basis of a genetic result, such as one that predicts the onset of Huntington’s disease. (In the UK, where I live, life insurance providers are not meant to ask for a genetic test or use the results of one—unless the person has tested positive for Huntington’s.)

Joly is collecting reports of suspected discrimination in his role at the Genetic Discrimination Observatory, a network of researchers working on the issue. He tells me that in one recent report, a woman wrote about her experience after she had been referred to a new doctor. This woman had previously taken a genetic test that revealed she would not respond well to certain medicines. Her new doctor told her he would only take her on as a patient if she first signed a waiver releasing him of any responsibility over her welfare if she didn’t follow the advice generated by her genetic test.

“It’s unacceptable,” says Joly. “Why would you sign a waiver because of a genetic predisposition? We’re not asking people with cancer to [do so]. As soon as you start treating people differently because of genetic factors … that’s genetic discrimination.”

Many countries have established laws to protect people from these kinds of discrimination. But these laws, too, can vary hugely both when it comes to defining what genetic discrimination is and to how they safeguard against it. The law in Canada focuses on DNA, RNA, and chromosome tests, for example. But you don’t always need such a test to know if you’re at risk for a genetic disease. A person might have a family history of a disease or already be showing symptoms of it.

And then there are the newer technologies. Take, for example, the kind of test that I took to measure my biological age. Many aging tests measure either chemical biomarkers in the body or epigenetic markers on the DNA—not necessarily the DNA itself. These tests are meant to indicate how close a person is to death. You might not want your life insurance provider to know or act on the results of those, either.

Joly and his colleagues have come up with a new definition. And they’ve kept it broad. “The narrower the definition, the easier it is to get around it,” he says. He wanted to avoid excluding the experiences of any people who feel they’ve experienced genetic discrimination. Here it is:

“Genetic discrimination involves an individual or a group being negatively treated, unfairly profiled or harmed, relative to the rest of the population, on the basis of actual or presumed genetic characteristics.”

It will be up to policymakers to decide how to design laws around genetic discrimination. And it won’t be simple. The laws may need to look different in different countries, depending on what technologies are available and how they are being used. Perhaps some governments will want to ensure that residents have access to technologies, while others may choose to limit access. In some cases, a health-care provider may need to make decisions about a person’s care based on their genetic results.

In the meantime, Joly has advice for anyone worried about genetic discrimination. First, don’t let such concerns keep you from having a genetic test that you might need for your own health. As things stand, the risk of being discriminated against on the basis of these tests is still quite small.

And when it comes to consumer genetic testing, it’s worth looking closely at the company’s terms and conditions to find out how your data might be shared or used. It is also useful to look up the safeguarding laws in your own country or state, which can give you a good idea of when you’re within your rights to refuse to share your data.

Shortly after I received the results from my genetic tests, I asked the companies involved to delete my data. It’s not a foolproof approach—last year, hackers stole personal data on 6.9 million 23andMe customers—but at least it’s something. Just this week I was offered yet another genetic test. I’m still thinking on it.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

As of 2019, more than 26 million people had undertaken a consumer genetic test, as my colleague Antonio Regalado found. The number is likely to have grown significantly since then.
 
Some companies say they can build a picture of what a person looks like on the basis of DNA alone. The science is questionable, as Tate Ryan-Mosley found when she covered one such company.
 
The results of a genetic test can have profound consequences, as Golda Arthur found when a test revealed she had a genetic mutation that put her at risk of ovarian cancer. Arthur, whose mother developed the disease, decided to undergo the prophylactic removal of her ovaries and fallopian tubes. 
 
Tests that measure biological age were selected by readers as our 11th breakthrough technology of 2022. You can read more about them here.
 
The company that gave me an estimate of my biological age later reanalyzed my data (before I had deleted it). That analysis suggested that my brain and liver were older than they should be. Great.

From around the web

Over the past few decades, doctors have implanted electrodes deep into the brains of a growing number of people, usually to treat disorders like epilepsy and Parkinson’s disease. We still don’t really know how they work, or how long they last. (Neuromodulation)

A ban on female genital mutilation will be upheld in the Gambia following a vote by the country’s National Assembly. The decision “reaffirm[s the country’s] commitments to human rights, gender equality, and protecting the health and well-being of girls and women,” directors of UNICEF, UNFPA, WHO, UN Women, and the UN High Commissioner for Human Rights said in a joint statement. (WHO)

Weight-loss drugs that work by targeting the GLP-1 receptor, like Wegovy and Saxenda, are in high demand—and there’s not enough to go around. Other countries could follow Switzerland’s lead to make the drugs more affordable and accessible, but only for the people who really need them. (JAMA Internal Medicine)

J.D. Vance, Donald Trump’s running mate, has ties to the pharmaceutical industry and has an evolving health-care agenda. (STAT)

Psilocybin, the psychedelic compound in magic mushrooms, can disrupt the way regions of our brains communicate with each other. And the effect can last for weeks. (The Guardian)

IVF alone can’t save us from a looming fertility crisis

I’ve just learned that July 11 is World Population Day. There are over 8 billion of us on the planet, and there’ll probably be 8.5 billion of us by 2030. We’re continually warned about the perils of overpopulation and the impact we humans are having on our planet. So it seems a bit counterintuitive to worry that, actually, we’re not reproducing enough.

But plenty of scientists are incredibly worried about just that. Improvements in health care and sanitation are helping us all lead longer lives. But we’re not having enough children to support us as we age. Fertility rates are falling in almost every country.

But wait! We have technologies to solve this problem! IVF is helping to bring more children into the world than ever, and it can help compensate for the fertility problems faced by older parents! Unfortunately, things aren’t quite so simple. Research suggests that these technologies can only take us so far. If we want to make real progress, we also need to work on gender equality.

Researchers tend to look at fertility in terms of how many children the average woman has in her lifetime. To maintain a stable population, this figure, known as the total fertility rate (TFR), needs to be around 2.1.

But this figure has been falling over the last 50 years. In Europe, for example, women born in 1939 had a TFR of 2.3—but the figure has dropped to 1.7 for women born in 1981 (who are 42 or 43 years old by now). “We can summarize [the last 50 years] in three words: ‘declining,’ ‘late,’ and ‘childlessness,’” Gianpiero Dalla Zuanna, a professor of demography at the University of Padua in Italy, told an audience at the annual meeting of the European Society of Human Reproduction and Embryology earlier this week.

There are a lot of reasons behind this decline. Around one in six people is affected by infertility, and globally, many people aren’t having as many children as they would like. On the other hand, more people are choosing to live child-free. Others are delaying starting a family, perhaps because they face soaring living costs and have been unable to afford their own homes. Some hesitate to have children because they are concerned about the future. With the ongoing threat of global wars and climate change, who can blame them? 

There are financial as well as social consequences to this fertility crisis. We’re already seeing fewer young people supporting a greater number of older ones. And it’s not sustainable.

“Europe today has 10% of the population, 20% of gross domestic product, and 50% of the welfare expense of the world,” Dalla Zuanna said at the meeting. Twenty years from now, there will be 20% fewer people of reproductive age than there are today, he warned.

It’s not just Europe that will be affected. The global TFR in 2021 was 2.2—less than half the figure in 1950, when it was 4.8. By one recent estimate, the global fertility rate is declining at a rate of 1.1% per year. Some countries are facing especially steep declines: In 2021, the TFR in South Korea was just 0.8—well below the 2.1 needed to maintain the population. If this decline continues, we can expect the global TFR to hit 1.83 by 2050 and 1.59 by 2100.
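Those projections amount to compound decline. As a rough back-of-envelope sketch only—the constants below are the figures cited above, not the internals of the demographers’ model:

```python
# Back-of-envelope projection of the global total fertility rate (TFR),
# assuming the ~1.1% annual decline cited above simply continues.
# This is an illustration, not the published study's demographic model.

REPLACEMENT_TFR = 2.1   # roughly the rate needed for a stable population
ANNUAL_DECLINE = 0.011  # 1.1% per year
TFR_2021 = 2.2          # estimated global TFR in 2021

def project_tfr(start_tfr, start_year, target_year, decline=ANNUAL_DECLINE):
    """Project TFR forward under a constant proportional annual decline."""
    years = target_year - start_year
    return start_tfr * (1 - decline) ** years

for year in (2030, 2050, 2100):
    tfr = project_tfr(TFR_2021, 2021, year)
    flag = "below" if tfr < REPLACEMENT_TFR else "above"
    print(f"{year}: projected TFR {tfr:.2f} ({flag} replacement)")
```

Naive compounding like this falls faster than the cited projections of 1.83 by 2050 and 1.59 by 2100—a reminder that real demographic forecasts model cohorts and regional trends rather than applying a single global decay rate.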

So what’s the solution? Fertility technologies like IVF and egg freezing have been touted as one potential remedy. More people than ever are using these technologies to conceive. An IVF baby is born somewhere in the world every 35 seconds. And IVF can indeed help us overcome some fertility issues, including those that can arise for people starting a family after the age of 35. IVF is already involved in 5% to 10% of births in high-income countries. “IVF has got to be our solution, you would think,” said Georgina Chambers, who directs the National Perinatal Epidemiology and Statistics Unit at UNSW Sydney in Australia, in another talk at ESHRE.

Unfortunately, technology is unlikely to solve the fertility crisis anytime soon, as Chambers’s own research shows. A handful of studies suggest that the use of assisted reproductive technologies (ART) can only increase the total fertility rate of a country by around 1% to 5%. The US sits at the lower end of this scale—it is estimated that in 2020, the use of ART increased the fertility rate by about 1.3%. In Australia, however, ART boosted the fertility rate by 5%.

Why the difference? It all comes down to accessibility. IVF can be prohibitively expensive in the US—without insurance covering the cost, a single IVF cycle can cost around half a person’s annual disposable income. Compare that to Australia, where would-be parents get plenty of government support, and an IVF cycle costs just 6% of the average annual disposable income.

In another study, Chambers and her colleagues have found that ART can help restore fertility to some extent in women who try to have children later in life. It’s difficult to be precise here, because it’s hard to tell whether some of the births that followed IVF would have happened eventually without the technology.

Either way, IVF and other fertility technologies are not a cure-all. And overselling them as such risks encouraging people to further delay starting a family, says Chambers. There are other ways to address the fertility crisis.

Dalla Zuanna and his colleague Maria Castiglioni believe that countries with low fertility rates, like their home country Italy, need to boost the number of people of reproductive age. “The only possibility [of achieving this] in the next 20 years is to increase immigration,” Castiglioni told an audience at ESHRE.

Several countries have used “pronatalist” policies to encourage people to have children. Some involve financial incentives: Families in Japan are eligible for one-off payments and monthly allowances for each child, as part of a scheme that was recently extended. Australia has implemented a similar “baby bonus.”

“These don’t work,” Chambers said. “They can affect the timing and spacing of births, but they are short-lived. And they are coercive: They negatively affect gender equity and reproductive and sexual rights.”

But family-friendly policies can work. In the past, the fall in fertility rates was linked to women’s increasing participation in the workforce. That’s not the case anymore. Today, higher female employment rates are linked to higher fertility rates, according to Chambers. “Fertility rises when women combine work and family life on an equal footing with men,” she said at the meeting. Gender equality, along with policies that support access to child care and parental leave, can have a much bigger impact.

These policies won’t solve all our problems. But we need to acknowledge that technology alone won’t solve the fertility crisis. And if the solution involves improving gender equality, surely that’s a win-win.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive:

My colleague Antonio Regalado discussed how reproductive technology might affect population decline with Martin Varsavsky, director of the Prelude Fertility network of clinics, in a roundtable on the future of families earlier this year.

There are new fertility technologies on the horizon. I wrote about the race to generate lab-grown sperm and eggs from adult skin cells, for example. Scientists have already created artificial eggs and sperm from mouse cells and used them to create mouse pups. Artificial human sex cells are next.

Advances like these could transform the way we understand parenthood. Some researchers believe we’re not far from being able to create babies with multiple genetic parents or none at all, as I wrote in a previous edition of The Checkup.

Elizabeth Carr, born in 1981, was America’s first IVF baby. Now she works at a company that offers genetic tests for embryos, enabling parents to choose those with the highest health scores.

Some people are already concerned about maintaining human populations beyond planet Earth. The Dutch entrepreneur Egbert Edelbroek wants to try IVF in space. “Humanity needs a backup plan,” he told Scott Solomon in October last year. “If you want to be a sustainable species, you want to be a multiplanetary species.”

We have another roundtable discussion coming up with Antonio later this month. You can join him for a discussion about CRISPR and the future of gene editing. “CRISPR Babies: Six years later” takes place on Thursday, July 25, and is a subscriber-only online event. You can register for free.

From around the web

When a Bitcoin mining facility moved into the Granbury area in Texas, local residents started complaining of strange new health problems. They believe the noisy facility might be linked to their migraines, panic attacks, heart palpitations, chest pain, and hypertension. (Time)

In the spring of 1997, 20 volunteers agreed to share their DNA for the Human Genome Project, an ambitious effort to publish a reference human genome. They were told researchers expected that “no more than 10% of the eventual DNA sequence will have been obtained from [each person’s] DNA.” But when the draft was published in 2001, nearly 75% of it came from just one person. Ashley Smart reports on the ethical questions surrounding the project. (Undark)

How can you make cultured meat taste more like the real thing? Scientists have developed “flavor scaffolds” that can release a meaty taste when cultured meat is cooked. The resulting product looks like a meaty pink jelly. Bon appétit! (Nature)

Doctors can continue their medical education by taking courses throughout their careers. Some of these are funded by big tobacco companies. They really shouldn’t be, argue these doctors from Stanford and the University of California. (JAMA)

“Skin care = brain care”? Maybe, if you believe the people behind the burgeoning industry of neurocosmetics. (The Atlantic)
