
Meet the radio-obsessed civilian shaping Ukraine’s drone defense

Serhii “Flash” Beskrestnov hates going to the front line. The risks terrify him. “I’m really not happy to do it at all,” he says. But to perform his particular self-appointed role in the Russia-Ukraine war, he believes it’s critical to exchange the relative safety of his suburban home north of the capital for places where the prospect of death is much more immediate. “From Kyiv,” he says, “nobody sees the real situation.”

So about once a month, he drives hundreds of kilometers east in a homemade mobile intelligence center: a black VW van in which stacks of radio hardware connect to an array of antennas on the roof that stand like porcupine quills when in use. Two small devices on the dash monitor for nearby drones. Over several days at a time, Flash studies the skies for Russian radio transmissions and tries to learn about the problems facing troops in the fields and in the trenches.

He is, at least in an unofficial capacity, a spy. But unlike other spies, Flash does not keep his work secret. In fact, he shares the results of these missions with more than 127,000 followers—including many soldiers and government officials—on several public social media channels. Earlier this year, for instance, he described how he had recorded five different Russian reconnaissance drones in a single night—one of which was flying directly above his van.

“Brothers from the Armed Forces of Ukraine, I am trying to inspire you,” he posted on his Facebook page in February, encouraging Ukrainian soldiers to learn how to recognize enemy drone signals as he does. “You will spread your wings, you will understand over time how to understand distance and, at some point, you will save the lives of dozens of your colleagues.”

Drones have come to define the brutal conflict that has now dragged on for more than two and a half years. And most rely on radio communications—a technology that Flash has obsessed over since childhood. So while Flash is now a civilian, the former officer has still taken it upon himself to inform his country’s defense in all matters related to radio.

As well as the frontline information he shares on his public channels, he runs a “support service” for almost 2,000 military communications specialists on Signal and writes guides for building anti-drone equipment on a tight budget. “He’s a celebrity,” one special forces officer recently shouted to me over the thump of music in a Kyiv techno club. He’s “like a ray of sun,” an aviation specialist in Ukraine’s army told me. Flash tells me that he gets 500 messages every day asking for help.

Despite this reputation among rank-and-file service members—and maybe because of it—Flash has also become a source of some controversy among the upper echelons of Ukraine’s military, he tells me. The Armed Forces of Ukraine declined multiple requests for comment, but Flash and his colleagues claim that some high-ranking officials perceive him as a security threat, worrying that he shares too much information and doesn’t do enough to secure sensitive intel. As a result, some refuse to support or engage with him. Others, Flash says, pretend he doesn’t exist. Either way, he believes they are simply insecure about the value of their own contributions—“because everybody knows that Serhii Flash is not sitting in Kyiv like a colonel in the Ministry of Defense,” he tells me in the abrasive fashion that I’ve come to learn is typical of his character. 

But above all else, hours of conversations with numerous people involved in Ukraine’s defense, including frontline signalmen and volunteers, have made clear that even if Flash is a complicated figure, he’s undoubtedly an influential one. His work has become greatly important to those fighting on the ground, and he recently received formal recognition from the military for his contributions to the fight, with two medals of commendation—one from the commander of Ukraine’s ground forces, the other from the Ministry of Defense. 

With a handheld directional antenna and a spectrum analyzer, Flash can scan for hostile signals.
EMRE ÇAYLAK

Though a small number of semi-autonomous machines rely less on radio communications, the drones that saturate the skies above the battlefield will largely depend on this technology for the foreseeable future. And in this race for survival—as each side constantly tries to best the other, only to start all over again when the other inevitably catches up—Ukrainian soldiers need to develop creative solutions, and fast. As Ukraine’s wartime radio guru, Flash may just be one of their best hopes for doing that.

“I know nothing about his background,” says “Igrok,” who works with drones in Ukraine’s 110th Mechanized Brigade and whom we are identifying by his call sign, as is standard military practice. “But I do know that most engineers and all pilots know nothing about radios and antennas. His job is definitely one of the most powerful forces keeping Ukraine’s aerial defense in good condition.”

And given the mounting evidence that both militaries and militant groups in other parts of the world are now adopting drone tactics developed in Ukraine, it’s not only his country’s fate that Flash may help to determine—but also the ways that armies wage war for years to come.

A prescient hobby

Before I can even start asking questions during our meeting in May, Flash is rummaging around in the back of the Flash-mobile, pulling out bits of gear for his own version of show-and-tell: a drone monitor with a fin-shaped antenna; a walkie-talkie labeled with a sticker from Russia’s state security service, the FSB; an approximately 1.5-meter-long foldable antenna that he says probably came from a US-made Abrams tank.

Flash has parked on a small wooded road beside the Kyiv Sea, an enormous water reservoir north of the capital. He’s wearing a khaki sweat-wicking polo shirt, combat trousers, and combat boots, with a Glock 19 pistol strapped to his hip. (“I am a threat to the enemy,” he tells me, explaining that he feels he has to watch his back.) As we talk, he moves from one side to the other, as if the electromagnetic waves that he’s studied since childhood have somehow begun to control the motion of his body.

Now 49, Flash grew up in a suburb of Kyiv in the ’80s. His father, who was a colonel in the Soviet army, recalls bringing home broken radio equipment for his preteen son to tinker with. Flash showed talent from the start. He attended an after-school radio club, and his father fixed an antenna to the roof of their apartment for him. Later, Flash began communicating with people in countries beyond the Iron Curtain. “It was like an open door to the big world for me,” he says.

Flash recalls with amusement a time when a letter from the KGB arrived at his family home, giving his father the fright of his life. His father didn’t know that his son had sent a message on a prohibited radio frequency, and someone had noticed. Following the letter, when Flash reported to the service’s office in downtown Kyiv, his teenage appearance confounded them. Boy, what are you doing here? Flash recalls an embarrassed official saying. 

Ukraine had been a hub of innovation as part of the Soviet Union. But by the time Flash graduated from military communications college in 1997, Ukraine had been independent for six years, and corruption and a lack of investment had stripped away the armed forces’ former grandeur. Flash spent just a year working in a military radio factory before he joined a private communications company developing Ukraine’s first mobile network, where he worked with technologies far more advanced than what he had used in the military. The project was called “Flash.”

A decade and a half later, Flash had risen through the ranks of the industry to become head of department at the predecessor of the telecommunications company Vodafone Ukraine. But boredom prompted him to leave and become an entrepreneur. His many projects included a successful e-commerce site for construction services and a popular video game called Isotopium: Chernobyl, which he and a friend based on the “really neat concept,” according to a PC Gamer review, of allowing players to control real robots (fitted with radios, of course) around a physical arena. Released in 2019, it also received positive reviews from Reuters and BBC News.

But within just a few years, an unexpected attack would hurl his country into chaos—and upend Flash’s life. 

“I am here to help you with technical issues,” Flash remembers writing to his Signal group when he first started offering advice. “Ask me anything and I will try to find the answer for you.”
EMRE ÇAYLAK

By early 2022, rumors were growing of a potential attack from Russia. Though he was still working on Isotopium, Flash began to organize a radio network across the northern suburbs of Kyiv in preparation. Near his home, he set up a repeater about 65 meters above ground level that could receive and then rebroadcast transmissions from all the radios in its network across a 200-square-kilometer area. Another radio amateur programmed and distributed handheld radios.

When Russian forces did invade, on February 24, they took both fiber-optic and mobile networks offline, as Flash had anticipated. The radio network became the only means of instant communications for civilians and, critically, volunteers mobilizing to fight in the region, who used it to share information about Russian troop movements. Flash fed this intel to several professional Ukrainian army units, including a unit of special reconnaissance forces. He later received an award from the head of the district’s military administration for his part in Kyiv’s defense. The head of the district council referred to Flash as “one of the most worthy people” in the region.

Yet it was another of Flash’s projects that would earn him renown across Ukraine’s military.

Despite being more than 100 years old, radio technology is still critical in almost all aspects of modern warfare, from secure communications to satellite-guided missiles. But the decline of Ukraine’s military, coupled with the movement of many of the country’s young techies into lucrative careers in the growing software industry, created a vacuum of expertise. Flash leaped in to fill it.

Within roughly a month of Russia’s incursion, Flash had created a private group called “Military Signalmen” on the encrypted messaging platform Signal, and invited civilian radio experts from his personal network to join alongside military communications specialists. “I am here to help you with technical issues,” he remembers writing to the group. “Ask me anything and I will try to find the answer for you.”

The kinds of questions that Flash and his civilian colleagues answered in the first months were often basic. Group members wanted to know how to update the firmware on their devices, reset their radios’ passwords, or set up the internal communications networks for large vehicles. Many of the people drafted as communications specialists in the Ukrainian military had little relevant experience; Flash claims that even professional soldiers lacked appropriate training and has referred to large parts of Ukraine’s military communications courses as “either nonsense or junk.” (The Korolov Zhytomyr Military Institute, where many communications specialists train, declined a request for comment.)

After Russia’s invasion of Ukraine, Flash transformed his VW van into a mobile radio intelligence center.
EMRE ÇAYLAK

He demonstrates handheld spectrum analyzers with custom Ukrainian firmware.

News of the Signal group spread by word of mouth, and it soon became a kind of 24-hour support service that communications specialists in every sector of Ukraine’s frontline force subscribed to. “Any military engineer can ask anything and receive the answer within a couple of minutes,” Flash says. “It’s a nice way to teach people very quickly.” 

As the war progressed into its second year, Military Signalmen became, to an extent, self-sustaining. Its members had learned enough to answer one another’s questions themselves. And this is where several members tell me that Flash has contributed the most value. “The most important thing is that he brought together all these communications specialists in one team,” says Oleksandr “Moto,” a technician at an EU mission in Kyiv and an expert in Motorola equipment, who has advised members of the group. (He asked to not be identified by his surname, due to security concerns.) “It became very efficient.”

Today, Flash and his partners continue to answer occasional questions that require more advanced knowledge. But over the past year, as the group demanded less of his time, Flash has begun to focus on a rapidly proliferating weapon for which his experience had prepared him almost perfectly: the drone.  

A race without end

The Joker-10 drone, one of Russia’s latest additions to its arsenal, is equipped with a hibernation mechanism, Flash warned his Facebook followers in March. This feature allows the operator to fly it to a hidden location, leave it there undetected, and then awaken it when it’s time to attack. “It is impossible to detect the drone using radio-electronic means,” Flash wrote. “If you twist and turn it in your hands—it will explode.” 

This is just one example of the frequent developments in drone engineering that Ukrainian and Russian troops are adapting to every day. 

Larger strike drones similar to the US-made Reaper have been familiar in other recent conflicts, but sophisticated air defenses have rendered them less dominant in this war. Ukraine and Russia are developing and deploying vast numbers of other types of drones—including the now-notorious “FPV,” or first-person view, drone that pilots operate by wearing goggles that stream video of its perspective. These drones, which can carry payloads large enough to destroy tanks, are cheap (costing as little as $400), easy to produce, and difficult to shoot down. They use direct radio communications to transmit video feeds, receive commands, and navigate.

A Ukrainian soldier prepares an FPV drone equipped with dummy ammunition for a simulated flight operation.
MARCO CORDONE/SOPA IMAGES/SIPA USA VIA AP IMAGES

But their reliance on radio technology is a major vulnerability, because enemies can drown out the signals the drones depend on—making them far less effective, if not inoperable. This form of electronic warfare—which most often involves emitting a more powerful signal at the same frequency as the operator’s—is called “jamming.”

Jamming, though, is an imperfect solution. Like drones, jammers themselves emit radio signals that can enable enemies to locate them. There are also effective countermeasures to bypass jammers. For example, a drone operator can use a tactic called “frequency hopping,” rapidly jumping between different frequencies to avoid a jammer’s signal. But even this method can be disrupted by algorithms that calculate the hopping patterns.
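The logic of frequency hopping can be sketched in a few lines of code. This is a hypothetical illustration, not real radio firmware: the channel plan, seed value, and hop count are all invented for the example, and an actual FHSS radio derives its schedule from synchronized clocks and hardware sequence generators rather than Python’s `random` module. The point is only to show why both ends stay in lockstep while a fixed-frequency jammer misses most hops.

```python
import random

# Invented channel plan for illustration: 40 channels, 433.0-452.5 MHz.
CHANNELS_MHZ = [433 + 0.5 * i for i in range(40)]

def hop_sequence(seed, n_hops):
    """Derive a hop schedule from a shared secret seed.

    Transmitter and receiver each run this with the same seed,
    so the receiver always knows which channel comes next.
    """
    rng = random.Random(seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_hops)]

tx = hop_sequence(seed=0xC0FFEE, n_hops=1000)  # drone operator's schedule
rx = hop_sequence(seed=0xC0FFEE, n_hops=1000)  # drone's schedule
assert tx == rx  # both ends stay synchronized without ever transmitting the schedule

# A jammer parked on a single channel only blocks the hops that happen
# to land on it -- roughly 1 in 40 transmissions with this channel plan.
jammed = sum(1 for f in tx if f == CHANNELS_MHZ[0])
print(f"hops blocked by a fixed-frequency jammer: {jammed} of {len(tx)}")
```

This also shows why the counter-countermeasure the article mentions works: the hop pattern is deterministic, so an adversary who observes enough hops and infers the generator can predict where the radio will jump next.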

For this reason, jamming is a frequent focus of Flash’s work. In a January post on his Telegram channel, for instance, which people viewed 48,000 times, Flash explained how jammers used by some Ukrainian tanks were actually disrupting their own communications. “The cause of the problems is not direct interference with the reception range of the radio station, but very powerful signals from several [electronic warfare] antennae,” he wrote, suggesting that other tank crews experiencing the same problem might try spreading their antennas across the body of the tank. 

It is all part of an existential race in which Russia and Ukraine are constantly hunting for new methods of drone operation, drone jamming, and counter-jamming—and there’s no end in sight. In March, for example, Flash says, a frontline contact sent him photos of a Russian drone with what looks like a 10-kilometer-long spool of fiber-optic cable attached to its rear—one particularly novel method to bypass Ukrainian jammers. “It’s really crazy,” Flash says. “It looks really strange, but Russia showed us that this was possible.”

Flash’s trips to the front line make it easier for him to track developments like this. Not only does he monitor Russian drone activity from his souped-up VW, but he can study the problems that soldiers face in situ and nurture relationships with people who may later send him useful intel—or even enemy equipment they’ve seized. “The main problem is that our generals are located in Kyiv,” Flash says. “They send some messages to the military but do not understand how these military people are fighting on the front.”

Besides the advice he provides to Ukrainian troops, Flash also publishes online his own manuals for building and operating equipment that can offer protection from drones. Building their own tools can be soldiers’ best option, since Western military technology is typically expensive and domestic production is insufficient. Flash recommends buying most of the parts on AliExpress, the Chinese e-commerce platform, to reduce costs.

While all his activity suggests a close or at least cooperative relationship between Flash and Ukraine’s military, he sometimes finds himself on the outside looking in. In a post on Telegram in May, as well as during one of our meetings, Flash shared one of his greatest disappointments of the war: the military’s refusal of his proposal to create a database of all the radio frequencies used by Ukrainian forces. But when I mentioned this to an employee of a major electronic warfare company, who requested anonymity to speak about the sensitive subject, he suggested that the only reason Flash still complains about this is that the military hasn’t told him it already exists. (Given its sensitivity, MIT Technology Review was unable to independently confirm the existence of this database.) 

Flash believes that generals in Kyiv “do not understand how these military people are fighting on the front.” So even though he dislikes the risks involved, he travels to the front line about once a month.
EMRE ÇAYLAK

This anecdote is emblematic of Flash’s frustration with a military complex that may not always want his involvement. Ukraine’s armed forces, he has told me on several occasions, make no attempt to collaborate with him in an official manner. He claims not to receive any financial support, either. “I’m trying to help,” he says. “But nobody wants to help me.”

Both Flash and Yurii Pylypenko, another radio enthusiast who helps Flash manage his Telegram channel, say military officials have accused Flash of sharing too much information about Ukraine’s operations. Flash claims to verify every member of his closed Signal groups, which he says only discuss “technical issues” in any case. But he also admits the system is not perfect and that Russians could have gained access in the past. Several of the soldiers I interviewed for this story also claimed to have entered the groups without Flash’s verification process. 

It’s ultimately difficult to determine if some senior staff in the military hold Flash at arm’s length because of his regular, often strident criticism—or whether Flash’s criticism is the result of being held at arm’s length. But it seems unlikely either side’s grievances will subside soon; Pylypenko claims that senior officers have even tried to blackmail him over his involvement in Flash’s work. “They blame my help,” he wrote to me over Telegram, “because they think Serhii is a Russian agent reposting Russian propaganda.” 

Is the world prepared?

Flash’s greatest concern now is the prospect of Russia overwhelming Ukrainian forces with cheap FPV drones. When FPVs were first deployed, both sides almost exclusively targeted expensive equipment. But as production has increased, they’re now using the drones to target individual soldiers, too. Given Russia’s superior production capacity, this poses a serious danger—both physical and psychological—to Ukrainian soldiers. “Our army will be sitting under the ground because everybody who goes above ground will be killed,” Flash says. Some reports suggest that the prevalence of FPVs is already making it difficult for soldiers to expose themselves at all on the battlefield.

To combat this threat, Flash has a grand yet straightforward idea. He wants Ukraine to build a border “wall” of jamming systems that cover a broad range of the radio spectrum all along the front line. Russia has already done this itself with expensive vehicle-based systems, but these present easy targets for Ukrainian drones, which have destroyed several of them. Flash’s idea is to use a similar strategy, albeit with smaller, cheaper systems that are easier to replace. He claims, however, that military officials have shown no interest.

Although Flash is unwilling to divulge more details about this strategy (and who exactly he pitched it to), he believes that such a wall could provide a more sustainable means of protecting Ukrainian troops. Nevertheless, it’s difficult to say how long such a defense might last. Both sides are now developing artificial-intelligence programs that let drones lock on to targets while still outside enemy jamming range, making them effectively jammer-proof once they cross into it. Flash admits he is concerned—and he doesn’t appear to have a solution.

Flash admits he is worried about Russia overwhelming Ukrainian forces with cheap FPV drones: “Our army will be sitting under the ground because everybody who goes above ground will be killed.”
EMRE ÇAYLAK

He’s not alone. The world is entirely unprepared for this new type of warfare, says Yaroslav Kalinin, a former Ukrainian intelligence officer and the CEO of Infozahyst, a manufacturer of electronic-warfare equipment. Kalinin recalls speaking at an electronic-warfare conference in Washington, DC, last December, where representatives from some Western defense companies couldn’t recognize the basic radio signals emitted by different types of drones. “Governments don’t count [drones] as a threat,” he says. “I need to run through the streets like a prophet—the end is near!”

Nevertheless, Ukraine has become, in essence, a laboratory for a new era of drone warfare—and, many argue, a new era of warfare entirely. Ukraine’s and Russia’s soldiers are its technicians. And Flash, who sometimes sleeps curled up in the back of his van while on the road, is one of its most passionate researchers. “Military developers from all over the world come to us for experience and advice,” he says. Only time will tell whether their contributions will be enough to see Ukraine through to the other side of this war. 

Charlie Metcalfe is a British journalist. He writes for magazines and newspapers, including Wired, the Guardian, and MIT Technology Review.

Happy birthday, baby! What the future holds for those born today

Happy birthday, baby.

You have been born into an era of intelligent machines. They have watched over you almost since your conception. They let your parents listen in on your tiny heartbeat, track your gestation on an app, and post your sonogram on social media. Well before you were born, you were known to the algorithm. 

Your arrival coincided with the 125th anniversary of this magazine. With a bit of luck and the right genes, you might see the next 125 years. How will you and the next generation of machines grow up together? We asked more than a dozen experts to imagine your joint future. We explained that this would be a thought experiment. What I mean is: We asked them to get weird. 

Just about all of them agreed on how to frame the past: Computing shrank from giant shared industrial mainframes to personal desktop devices to electronic shrapnel so small it’s ambient in the environment. Previously controlled at arm’s length through punch card, keyboard, or mouse, computing became wearable, moving onto—and very recently into—the body. In our time, eye or brain implants are only for medical aid; in your time, who knows? 

In the future, everyone thinks, computers will get smaller and more plentiful still. But the biggest change in your lifetime will be the rise of intelligent agents. Computing will be more responsive, more intimate, less confined to any one platform. It will be less like a tool, and more like a companion. It will learn from you and also be your guide.

What they mean, baby, is that it’s going to be your friend.

Present day to 2034 
Age 0 to 10

When you were born, your family surrounded you with “smart” things: rockers, monitors, lamps that play lullabies.  

DAVID BISKUP

But not a single expert name-checked those as your first exposure to technology. Instead, they mentioned your parents’ phone or smart watch. And why not? As your loved ones cradle you, that deliciously blinky thing is right there. Babies learn by trial and error, by touching objects to see what happens. You tap it; it lights up or makes noise. Fascinating!

Cognitively, you won’t get much out of that interaction between birth and age two, says Jason Yip, an associate professor of digital youth at the University of Washington. But it helps introduce you to a world of animate objects, says Sean Follmer, director of the SHAPE Lab in Stanford’s mechanical engineering department, which explores haptics in robotics and computing. If you touch something, how does it respond?

You are the child of millennials and Gen Z—digital natives, the first influencers. So as you grow, cameras are ubiquitous. You see yourself onscreen and learn to smile or wave to the people on the other side. Your grandparents read to you on FaceTime; you photobomb Zoom meetings. As you get older, you’ll realize that images of yourself are a kind of social currency. 

Your primary school will certainly have computers, though we’re not sure how educators will balance real-world and onscreen instruction, a pedagogical debate today. But baby, school is where our experts think you will meet your first intelligent agent, in the form of a tutor or coach. Your AI tutor might guide you through activities that combine physical tasks with augmented-reality instruction—a sort of middle ground.

Some school libraries are becoming more like makerspaces, teaching critical thinking along with building skills, says Nesra Yannier, a faculty member in the Human-Computer Interaction Institute at Carnegie Mellon University. She is developing NoRILLA, an educational system that uses mixed reality—a combination of physical and virtual reality—to teach science and engineering concepts. For example, kids build wood-block structures and predict, with feedback from a cartoon AI gorilla, how they will fall. 

Learning will be increasingly self-directed, says Liz Gerber, co-director of the Center for Human-Computer Interaction and Design at Northwestern University. The future classroom is “going to be hyper-personalized.” AI tutors could help with one-on-one instruction or repetitive sports drills.

All of this is pretty novel, so our experts had to guess at future form factors. Maybe while you’re learning, an unobtrusive bracelet or smart watch tracks your performance and then syncs data with a tablet, so your tutor can help you practice. 

What will that agent be like? Follmer, who has worked with blind and low-vision students, thinks it might just be a voice. Yannier is partial to an animated character. Gerber thinks a digital avatar could be paired with a physical version, like a stuffed animal—in whatever guise you like. “It’s an imaginary friend,” says Gerber. “You get to decide who it is.” 

Not everybody is sold on the AI tutor. In Yip’s research, kids often tell him AI-enabled technologies are … creepy. They feel unpredictable or scary, or like they seem to be watching.

Kids learn through social interactions, so he’s also worried about technologies that isolate. And while he thinks AI can handle the cognitive aspects of tutoring, he’s not sure about its social side. Good teachers know how to motivate, how to deal with human moods and biology. Can a machine tell when a child is being sarcastic, or redirect a kid who is goofing off in the bathroom? When confronted with a meltdown, he asks, “is the AI going to know this kid is hungry and needs a snack?”

2040
Age 16

By the time you turn 16, you’ll likely still live in a world shaped by cars: highways, suburbs, climate change. But some parts of car culture may be changing. Electric chargers might be supplanting gas stations. And just as an intelligent agent assisted in your schooling, now one will drive with you—and probably for you.  

Paola Meraz, a creative director of interaction design at BMW’s Designworks, describes that agent as “your friend on the road.” William Chergosky, chief designer at Calty Design Research, Toyota’s North American design studio, calls it “exactly like a friend in the car.”

While you are young, Chergosky says, it’s your chaperone, restricting your speed or routing you home at curfew. It tells you when you’re near In-N-Out, knowing your penchant for their animal fries. And because you want to keep up with your friends online and in the real world, the agent can comb your social media feeds to see where they are and suggest a meetup. 


Cars have long been spots for teen hangouts, but as driving becomes more autonomous, their interiors can become more like living rooms. (You’ll no longer need to face the road and an instrument panel full of knobs.) Meraz anticipates seats that reposition so passengers can talk face to face, or game. “Imagine playing a game that interacts with the world that you are driving through,” she says, or “a movie that was designed where speed, time of day, and geographical elements could influence the storyline.” 

people riding on top of a smart car
DAVID BISKUP

Without an instrument panel, how do you control the car? Today’s minimalist interiors feature a dash-mounted tablet, but digging through endless onscreen menus is not terribly intuitive. The next step is probably gestural or voice control—ideally, through natural language. The tipping point, says Chergosky, will come when instead of giving detailed commands, you can just say: “Man, it is hot in here. Can you make it cooler?”

An agent that listens in and tracks your every move raises some strange questions. Will it change personalities for each driver? (Sure.) Can it keep a secret? (“Dad said he went to Taco Bell, but did he?” jokes Chergosky.) Does it even have to stay in the car? 

Our experts say nope. Meraz imagines it being integrated with other kinds of agents—the future versions of Alexa or Google Home. “It’s all connected,” she says. And when your car dies, Chergosky says, the agent does not. “You can actually take the soul of it from vehicle to vehicle. So as you upgrade, it’s not like you cut off that relationship,” he says. “It moves with you. Because it’s grown with you.”

2049
Age 25

By your mid-20s, the agents in your life know an awful lot about you. Maybe they are, indeed, a single entity that follows you across devices and offers help where you need it. At this point, the place where you need the most help is your social life. 

Kathryn Coduto, an assistant professor of media science at Boston University who studies online dating, says everyone’s big worry is the opening line. To her, AI could be a disembodied Cyrano that whips up 10 options or workshops your own attempts. Or maybe it’s a dating coach. You agree to meet up with a (real) person online, and “you have the AI in a corner saying ‘Hey, maybe you should say this,’ or ‘Don’t forget this.’ Almost like a little nudge.”


Virtual first dates might solve one of our present-day conundrums: Apps make searching for matches easier, but you get sparse—and perhaps inaccurate—info about those people. How do you know who’s worth meeting in real life? Building virtual dating into the app, Coduto says, could be “an appealing feature for a lot of daters who want to meet people but aren’t sure about a large initial time investment.”

T. Makana Chock, who directs the Extended Reality Lab at Syracuse University, thinks things could go a step further: first dates where both parties send an AI version of themselves in their place. “That would tell both of you that this is working—or this is definitely not going to work,” Chock says. If the date is a dud—well, at least you weren’t on it.

Or maybe you will just date an entirely virtual being, says Sun Joo (Grace) Ahn, who directs the Center for Advanced Computer-Human Ecosystems at the University of Georgia. Or you’ll go to a virtual party, have an amazing time, “and then later on you realize that you were the only real human in that entire room. Everybody else was AI.”

This might sound odd, says Ahn, but “humans are really good at building relationships with nonhuman entities.” It’s why you pour your heart out to your dog—or treat ChatGPT like a therapist. 

There is a problem, though, when virtual relationships become too accommodating, says Chock: If you get used to agents that are tailored to please you, you get less skilled at dealing with real people and risking awkwardness or rejection. “You still need to have human interaction,” she says. “And there is some concern that we are going to see some people who are just like, ‘Nope, this is all I want. Why go out and do that when I can stay home with my partner, my virtual buddy?’”

By now, social media, online dating, and livestreaming have likely intertwined and become more immersive. Engineers have shrunk the obstacles to true telepresence: internet lag time, the uncanny valley, and clunky headsets, which may now be replaced by something more like glasses or smart contact lenses. 

Online experiences may be less like observing someone else’s life and more like living it. Imagine, says Follmer: A basketball star wears clothing and skin sensors that track body position, motion, and forces, plus super-thin gloves that sense the texture of the ball. You, watching from your couch, wear a jersey and gloves made of smart textiles, woven with actuators that transmit whatever the player feels. When the athlete gets shoved, Follmer says, your fan gear can really shove you right back.

Gaming is another obvious application. But it’s not the likely first mover in this space. Nobody else wants to say this on the record, so I will: It’s porn. (Baby, ask your parents and/or AI tutor when you’re older.)

DAVID BISKUP

By your 20s, you are probably wrestling with the dilemmas of a life spent online and on camera. Coduto thinks you might rebel, opting out of social media because your parents documented your first 18 years without permission. As an adult, you’ll want tighter rules for privacy and consent, better ways to verify authenticity, and more control over sensitive materials, like a button that could nuke your old sexts.

But maybe it’s the opposite: Now you are an influencer yourself. If so, your body can be your display space. Today, wearables are basically boxes of electronics strapped onto limbs. Tomorrow, hopes Cindy Hsin-Liu Kao, who runs the Hybrid Body Lab at Cornell University, they will be more like your own skin. Kao develops wearables like color-changing eyeshadow stickers and mini nail trackpads that can control a phone or open a car door. In the not-too-distant future, she imagines, “you might be able to rent out each of your fingernails as an ad for social media.” Or maybe your hair: Weaving in super-thin programmable LED strands could make it a kind of screen. 

What if those smart lenses could be display spaces too? “That would be really creepy,” she muses. “Just looking into someone’s eyes and it’s, like, CNN.”

2059
Age 35

By now, you’ve probably settled into domestic life—but it might not look much like the home you grew up in. Keith Evan Green, a professor of human-centered design at Cornell, doesn’t think we should imagine a home of the future. “I would call it a room of the future,” he says, because it will be the place for everything—work, school, play. This trend was hastened by the covid pandemic.

Your place will probably be small if you live in a big city. The uncertainties of climate change and transportation costs mean we can’t build cities infinitely outward. So he imagines a reconfigurable architectural robotic space: Walls move, objects inflate or unfold, furniture appears or dissolves into surfaces or recombines. Any necessary computing power is embedded. The home will finally be what Le Corbusier imagined: a machine for living in.

Green pictures this space as spartan but beautiful, like a temple—a place, he says, to think and be. “I would characterize it as this capacious monastic cell that is empty of most things but us,” he says.

Our experts think your home, like your car, will respond to voice or gestural control. But it will make some decisions autonomously, learning by observing you: your motion, location, temperature. 

Ivan Poupyrev, CEO and cofounder of Archetype AI, says we’ll no longer control each smart appliance through its own app. Instead, he says, think of the home as a stage and you as the director. “You don’t interact with the air conditioner. You don’t interact with a TV,” he says. “You interact with the home as a total.” Instead of telling the TV to play a specific program, you make high-level demands of the entire space: “Turn on something interesting for me; I’m tired.” Or: “What is the plan for tomorrow?”

Stanford’s Follmer says that just as computing went from industrial to personal to ubiquitous, so will robotics. Your great-grandparents envisioned futuristic homes cared for by a single humanoid robot—like Rosie from The Jetsons. He envisions swarms of maybe 100 bots the size of quarters that materialize to clean, take out the trash, or bring you a cold drink. (“They know ahead of time, even before you do, that you’re thirsty,” he says.)

Baby, perhaps now you have your own baby. The technologies of reproduction have changed since you were born. For one thing, says Gerber, fertility tracking will be way more accurate: “It is going to be like weather prediction.” Maybe, Kao says, flexible fabric-like sensors could be embedded in panty liners to track menstrual health. Or, once the baby arrives, in nipple stickers that nursing parents could apply to track biofluid exchange. If the baby has trouble latching, maybe the sticker’s capacitive touch sensors could help the parent find a better position.

Also, goodbye to sleep deprivation. Gerber envisions a device that, for lack of an existing term, she’s calling a “baby handler”—picture an exoskeleton crossed with a car seat. It’s a late-night soothing machine that rocks, supplies pre-pumped breast milk, and maybe offers a bidet-like “cleaning and drying situation.” For your children, perhaps, this is their first experience of being close to a machine.

2074
Age 50

Now you are at the peak of your career. For professions heading toward AI automation, you may be the “human in the loop” who oversees a machine doing its tasks. The 9-to-5 workday, which is crumbling in our time, might be totally atomized into work-from-home fluidity or earn-as-you-go gig work.

Ahn thinks you might start the workday by lying in bed and checking your messages—on an implanted contact lens. Everyone loves a big screen, and putting it in your eye effectively gives you “the largest monitor in the world,” she says. 

You’ve already dabbled with AI selves for dating. But now virtual agents are more photorealistic, and they can mimic your voice and mannerisms. Why not make one go to meetings for you?

Kori Inkpen, who studies human-computer interaction at Microsoft Research, calls this your “ditto”—more formally, an embodied mimetic agent, meaning it represents a specific person. “My ditto looks like me, acts like me, sounds like me, knows sort of what I know,” she says. You can instruct it to raise certain points and recap the conversation for you later. Your colleagues feel as if you were there, and you get the benefit of an exchange that’s not quite real time, but not as asynchronous as email. “A ditto starts to blend this reality,” Inkpen says.

In our time, augmented reality is slowly catching on as a tool for workers whose jobs require physical presence and tangible objects. But experts worry that once the last baby boomers retire, their technical expertise will go with them. Perhaps they can leave behind a legacy of training simulations.

Inkpen sees DIY opportunities. Say your fridge breaks. Instead of calling a repair person, you boot up an AR tutorial on glasses, a tablet, or a projection that overlays digital instructions atop the appliance. Follmer wonders if haptic sensors woven into gloves or clothing would let people training for highly specialized jobs—like surgery—literally feel the hand motions of experienced professionals.

For Poupyrev, the implications are much bigger. One way to think about AI is “as a storage medium,” he says. “It’s a preservation of human knowledge.” A large language model like ChatGPT is basically a compendium of all the text information people have put online. Next, if we feed models not only text but real-world sensor data that describes motion and behavior, “it becomes a very compressed presentation not of just knowledge, but also of how people do things.” AI can capture how to dance, or fix a car, or play ice hockey—all the skills you cannot learn from words alone—and preserve this knowledge for the future.

2099
Age 75

By the time you retire, families may be smaller, with more older people living solo. 

Well, sort of. Chaiwoo Lee, a research scientist at the MIT AgeLab, thinks that in 75 years, your home will be a kind of roommate—“someone who cohabitates that space with you,” she says. “It reacts to your feelings, maybe understands you.” 

By now, a home’s AI could be so good at deciphering body language that if you’re spending a lot of time on the couch, or seem rushed or irritated, it could try to lighten your mood. “If it’s a conversational agent, it can talk to you,” says Lee. Or it might suggest calling a loved one. “Maybe it changes the ambiance of the home to be more pleasant.”

The home is also collecting your health data, because it’s where you eat, shower, and use the bathroom. Passive data collection has advantages over wearable sensors: You don’t have to remember to put anything on. It doesn’t carry the stigma of sickness or frailty. And in general, Lee says, people don’t start wearing health trackers until they are ill, so they don’t have a comparative baseline. Perhaps it’s better to let the toilet or the mirror do the tracking continuously. 

Green says interactive homes could help people with mobility and cognitive challenges live independently for longer. Robotic furnishings could help with lifting, fetching, or cleaning. By this time, they might be sophisticated enough to offer support when you need it and back off when you don’t.  

Kao, of course, imagines the robotics embedded in fabric: garments that stiffen around the waist to help you stand, a glove that reinforces your grip.

If getting from point A to point B is becoming difficult, maybe you can travel without going anywhere. Green, who favors a blank-slate room, wonders if you’ll have a brain-machine interface that lets you change your surroundings at will. You think about, say, a jungle, and the wallpaper display morphs. The robotic furniture adjusts its topography. “We want to be able to sit on the boulder or lie down on the hammock,” he says.

Anne Marie Piper, an associate professor of informatics at UC Irvine who studies older adults, imagines something similar—minus the brain chip—in the context of a care home, where spaces could change to evoke special memories, like your honeymoon in Paris. “What if the space transforms into a café for you that has the smells and the music and the ambience, and that is just a really calming place for you to go?” she asks. 

Gerber is all for virtual travel: It’s cheaper, faster, and better for the environment than the real thing. But she thinks that for a truly immersive Parisian experience, we’ll need engineers to invent … well, remote bread. Something that lets you chew on a boring-yet-nutritious source of calories while stimulating your senses so you get the crunch, scent, and taste of the perfect baguette.

2149
Age 125

We hope that your final years will not be lonely or painful. 

Faraway loved ones can visit by digital double, or send love through smart textiles: Piper imagines a scarf that glows or warms when someone is thinking of you, Kao an on-skin device that simulates the touch of their hand. If you are very ill, you can escape into a soothing virtual world. Judith Amores, a senior researcher at Microsoft Research, is working on VR that responds to physiological signals. Today, she immerses hospital patients in an underwater world of jellyfish that pulse at half of an average person’s heart rate for a calming effect. In the future, she imagines, VR will detect anxiety without requiring a user to wear sensors—maybe by smell.

“It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms.”

Tim Recuber, sociologist, Smith College

You might be pondering virtual immortality. Tim Recuber, a sociologist at Smith College and author of The Digital Departed, notes that today people create memorial websites and chatbots, or sign up for post-mortem messaging services. These offer some end-of-life comfort, but they can’t preserve your memory indefinitely. Companies go bust. Websites break. People move on; that’s how mourning works.

What about uploading your consciousness to the cloud? The idea has a fervent fan base, says Recuber. People hope to resurrect themselves into human or robotic bodies, or spend eternity as part of a hive mind or “a beam of laser light that can travel the cosmos.” But he’s skeptical that it’ll work, especially within 125 years. Plus, what if being a ghost in the machine is dreadful? “Embodiment is, as far as we know, a pretty key component to existence. And it might be pretty upsetting to actually be a full version of yourself in a computer,” he says. 

There is perhaps one last thing to try. It’s another AI. You curate this one yourself, using a lifetime of digital ephemera: your videos, texts, social media posts. It’s a hologram, and it hangs out with your loved ones to comfort them when you’re gone. Perhaps it even serves as your burial marker. “It is a little cool to think of cemeteries in the future that are literally haunted by motion-activated holograms,” Recuber says.

It won’t exist forever. Nothing does. But by now, maybe the agent is no longer your friend.

Maybe, at last, it is you.

Baby, we have caveats.

We imagine a world that has overcome the worst threats of our time: a creeping climate disaster; a deepening digital divide; our persistent flirtation with nuclear war; the possibility that a pandemic will kill us quickly, that overly convenient lifestyles will kill us slowly, or that intelligent machines will turn out to be too smart.

We hope that democracy survives and these technologies will be the opt-in gadgetry of a thriving society, not the surveillance tools of dystopia. If you have a digital twin, we hope it’s not a deepfake. 

You might see these sketches from 2024 as a blithe promise, a warning, or a fever dream. The important thing is: Our present is just the starting point for infinite futures. 

What happens next, kid, depends on you. 


Kara Platoni is a science reporter and editor in Oakland, California.

The US wants to use facial recognition to identify migrant children as they age 

The US Department of Homeland Security (DHS) is looking into ways it might use facial recognition technology to track the identities of migrant children, “down to the infant,” as they age, according to John Boyd, assistant director of the department’s Office of Biometric Identity Management (OBIM), where a key part of his role is to research and develop future biometric identity services for the government.

As Boyd explained at a conference in June, the key question for OBIM is, “If we pick up someone from Panama at the southern border at age four, say, and then pick them up at age six, are we going to recognize them?”

Facial recognition technology (FRT) has traditionally not been applied to children, largely because training data sets of real children’s faces are few and far between, and consist of either low-quality images drawn from the internet or small sample sizes with little diversity. Such limitations reflect the significant sensitivities regarding privacy and consent when it comes to minors. 

According to Syracuse University’s Transactional Records Access Clearinghouse (TRAC), 339,234 children arrived at the US-Mexico border in 2022, the last year for which numbers are currently available. Of those children, 150,000 were unaccompanied—the highest annual number on record. If the face prints of even 1% of those children were in OBIM’s craniofacial structural progression initiative, the resulting data set would dwarf nearly all existing data sets of real children’s faces used for aging research.

Prior to publication of this story, Boyd told MIT Technology Review that to the best of his knowledge, the agency had not yet started collecting data under the program, but he added that as “the senior executive,” he would “have to get with [his] staff to see.” He could only confirm that his office is “funding” it. Despite repeated requests, Boyd did not provide any additional information. After publication, DHS denied that it had plans to collect facial images from minors under 14. 

Boyd described recent “rulemaking” at “some DHS components,” or sub-offices, that have removed age restrictions on the collection of biometric data. US Customs and Border Protection (CBP), the US Transportation Security Administration, and US Immigration and Customs Enforcement declined to comment before publication. US Citizenship and Immigration Services (USCIS) did not respond to multiple requests for comment. OBIM referred MIT Technology Review back to DHS’s main press office. 

DHS did not comment on the program prior to publication, but sent an emailed statement afterwards: “The Department of Homeland Security uses various forms of technology to execute its mission, including some biometric capabilities. DHS ensures all technologies, regardless of type, are operated under the established authorities and within the scope of the law. We are committed to protecting the privacy, civil rights, and civil liberties of all individuals who may be subject to the technology we use to keep the nation safe and secure.”

The agency later noted “DHS does not collect facial images from minors under 14, and has no current plans to do so for either operational or research purposes,” walking back Boyd’s statements. 

Boyd spoke publicly about the plan in June at the Federal Identity Forum and Exposition, an annual identity management conference for federal employees and contractors. But close observers of DHS that we spoke with—including a former official, representatives of two influential lawmakers who have spoken out about the federal government’s use of surveillance technologies, and immigrants’ rights organizations that closely track policies affecting migrants—were unaware of any new policies allowing biometric data collection of children under 14. 

That is not to say that all of them are surprised. “That tracks,” says one former CBP official who has visited several migrant processing centers on the US-Mexico border and requested anonymity to speak freely. He says “every center” he visited “had biometric identity collection, and everybody was going through it,” though he was unaware of a specific policy mandating the practice. “I don’t recall them separating out children,” he adds.

“The reports of CBP, as well as DHS more broadly, expanding the use of facial recognition technology to track migrant children is another stride toward a surveillance state and should be a concern to everyone who values privacy,” Justin Krakoff, deputy communications director for Senator Jeff Merkley of Oregon, said in a statement to MIT Technology Review. Merkley has been an outspoken critic of both DHS’s immigration policies and of government use of facial recognition technologies.

Beyond concerns about privacy, transparency, and accountability, some experts also worry about biometric technologies targeting a population that has little recourse to provide—or withhold—consent. 

“If you arrive at a border … and you are faced with the impossible choice of either: get into a country if you give us your biometrics, or you don’t,” says Petra Molnar, author of The Walls Have Eyes: Surviving Migration in the Age of AI, “that completely vitiates informed consent.”

This question becomes even more challenging when it comes to children, says Ashley Gorski, a senior staff attorney with the American Civil Liberties Union. “There’s a significant intimidation factor, and children aren’t as equipped to consider long-term risks.”

Murky new rules

The Office of Biometric Identity Management, previously known as the US Visitor and Immigrant Status Indicator Technology Program (US-VISIT), was created after 9/11 with the specific mandate of collecting biometric data—initially only fingerprints and photographs—from all non-US citizens who sought to enter the country. 

Since then, DHS has begun collecting face prints, iris scans, and even DNA, among other modalities. It is also testing new ways of gathering this data—including through contactless fingerprint collection, which is currently deployed at five sites on the border, as Boyd shared in his conference presentation. 

Since 2023, CBP has been using a mobile app, CBP One, for asylum seekers to submit biometric data even before they enter the United States; users are required to take selfies periodically to verify their identity. The app has been riddled with problems, including technical glitches and facial recognition algorithms that are unable to recognize darker-skinned people. This is compounded by the fact that not every asylum seeker has a smartphone. 

Then, just after crossing into the United States, migrants submit to collection of more biometric data, including DNA. For a sense of scale, a recent report from Georgetown Law School’s Center on Privacy and Technology found that CBP has added 1.5 million DNA profiles, primarily from migrants crossing the border, to law enforcement databases since it began collecting DNA “from any person in CBP custody subject to fingerprinting” in January 2020, per rules enacted by the Department of Justice under the Trump administration. The researchers noted that an over-representation of immigrants—the majority of whom are people of color—in a DNA database used by law enforcement could subject them to over-policing and lead to other forms of bias. 

Generally, these programs only require information from individuals aged 14 to 79. DHS attempted to change this back in 2020, with proposed rules for USCIS and CBP that would have expanded biometric data collection dramatically, including by age. (USCIS’s proposed rule would have doubled the number of people from whom biometric data would be required, including any US citizen who sponsors an immigrant.) But the USCIS rule was withdrawn in the wake of the Biden administration’s new “priorities to reduce barriers and undue burdens in the immigration system.” Meanwhile, for reasons that remain unclear, the proposed CBP rule was never enacted. 

This would make it appear “contradictory” if DHS were to begin collecting the biometric data of children under 14, says Dinesh McCoy, a staff attorney with Just Futures Law, an immigrant rights group that tracks surveillance technologies. 

Neither Boyd nor DHS’s media office would confirm which specific policy changes he was referring to in his presentation, though MIT Technology Review has identified a 2017 memo, issued by then-Secretary of Homeland Security John F. Kelly, that encouraged DHS components to remove “age as a basis for determining when to collect biometrics.” In a September 2023 report, DHS’s Office of the Inspector General (OIG) quoted a senior DHS official who described this memo as the “overarching policy for biometrics at DHS,” though none of the press offices MIT Technology Review contacted—including the main DHS press office, OIG, and OBIM, among others—would confirm on the record whether this is still the relevant policy; we have not been able to confirm any related policy changes since then.

The OIG audit also found a number of fundamental issues related to DHS’s oversight of biometric data collection and use—including that its 10-year strategic framework for biometrics, covering 2015 to 2025, “did not accurately reflect the current state of biometrics across the Department, such as the use of facial recognition verification and identification.” Nor did it provide clear guidance for the consistent collection and use of biometrics across DHS, including age requirements. 

Do you have any additional information on DHS’s craniofacial structural progression initiative? Please reach out with a non-work email to tips@technologyreview.com or securely on Signal at 626.765.5489. 

Some lawyers allege that changing the age limit for data collection via department policy, not by a federal rule, which requires a public comment period, would be problematic. McCoy, for instance, says any lack of transparency here amplifies the already “extremely challenging” task of “finding [out] in a systematic way how these technologies are deployed”—even though that is key for accountability.

Advancing the field

At the identity forum and in a subsequent conversation, Boyd explained that the initiative is meant to advance the development of effective FRT algorithms. Boyd leads OBIM’s Future Identity team, whose mission is to “research, review, assess, and develop technology, policy, and human factors that enable rapid, accurate, and secure identity services” and to make OBIM “the preferred provider for identity services within DHS.” 

Driven by high-profile cases of missing children, there has long been interest in understanding how children’s faces age. At the same time, there have been technical challenges to doing so, both preceding FRT and with it. 

At its core, facial recognition identifies individuals by comparing the geometry of various facial features in an original face print with subsequent images. Based on this comparison, a facial recognition algorithm assigns a percentage likelihood that there is a match. 
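
In rough terms, that comparison reduces two face prints to numeric feature vectors and scores how closely they align. The sketch below is purely illustrative (the tiny hand-made vectors and the cosine-similarity scoring stand in for the much larger learned embeddings real systems use), but it shows how a comparison becomes a percentage likelihood.

```python
import numpy as np

def match_likelihood(probe: np.ndarray, gallery: np.ndarray) -> float:
    """Score two face embeddings by cosine similarity, mapped to 0-100%."""
    cos = float(np.dot(probe, gallery) /
                (np.linalg.norm(probe) * np.linalg.norm(gallery)))
    return round(50 * (cos + 1), 1)  # map [-1, 1] onto [0, 100]

# Two toy "face prints" (real systems use learned embeddings of hundreds of dimensions)
a = np.array([0.9, 0.1, 0.4])
b = np.array([0.8, 0.2, 0.5])
print(match_likelihood(a, a))  # identical faces -> 100.0
print(match_likelihood(a, b))  # similar faces -> a high but imperfect score
```

In deployment, a probe image is scored against millions of stored templates, and the operator tunes the threshold at which a score counts as a match to trade off false accepts against false rejects.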

But as children grow and develop, their bone structure changes significantly, making it difficult for facial recognition algorithms to identify them over time. (These changes tend to be even more pronounced in children under 14. In contrast, as adults age, the changes tend to be in the skin and muscle, and have less variation overall.) More data would help solve this problem, but there is a dearth of high-quality data sets of children’s faces with verifiable ages. 

“What we’re trying to do is to get large data sets of known individuals,” Boyd tells MIT Technology Review. That means taking high-quality face prints “under controlled conditions where we know we’ve got the person with the right name [and] the correct birth date”—or, in other words, where they can be certain about the “provenance of the data.” 

For example, one data set used for aging research consists of 305 celebrities’ faces as they aged from five to 32. But these photos, scraped from the internet, contain too many other variables—such as differing image qualities, lighting conditions, and distances at which they were taken—to be truly useful. Plus, speaking to the provenance issue that Boyd highlights, their actual ages in each photo can only be estimated. 

Another tactic is to use data sets of adult faces that have been synthetically de-aged. Synthetic data is considered more privacy-preserving, but it too has limitations, says Stephanie Schuckers, director of the Center for Identification Technology Research (CITeR). “You can test things with only the generated data,” Schuckers explains, but the question remains: “Would you get similar results to the real data?”

(Hosted at Clarkson University in New York, CITeR brings together a network of academic and government affiliates working on identity technologies. OBIM is a member of the research consortium.) 

Schuckers’s team at CITeR has taken another approach: an ongoing longitudinal study of a cohort of 231 elementary and middle school students from the area around Clarkson University. Since 2016, the team has captured biometric data every six months (save for two years of the covid-19 pandemic), including facial images. They have found that the open-source face recognition models they tested can in fact successfully recognize children three to four years after they were initially enrolled. 

But the conditions of this study aren’t easily replicable at scale. The study images are taken in a controlled environment, all the participants are volunteers, the researchers sought consent from parents and the subjects themselves, and the research was approved by the university’s Institutional Review Board. Schuckers’s research also promises to protect privacy by requiring other researchers to request access, and by providing facial datasets separately from other data that have been collected. 

What’s more, this research still has technical limitations, including that the sample is small, and it is overwhelmingly Caucasian, meaning it might be less accurate when applied to other races. 

Schuckers says she was unaware of DHS’s craniofacial structural progression initiative. 

Far-reaching implications

Boyd says OBIM takes privacy considerations seriously, and that “we don’t share … data with commercial industries.” Still, OBIM has “approximately 140” government partners with which it shares and receives information, according to a report by the Government Accountability Office, which has criticized it for poorly documenting its agreements. 

Even if the data does stay within the federal government, OBIM’s findings regarding the accuracy of FRT for children over time could nevertheless influence how—and when—the rest of the government collects biometric data, as well as whether the broader facial recognition industry may also market its services for children. (Indeed, Boyd says sharing “results,” or the findings of how accurate FRT algorithms are, is different than sharing the data itself.) 

That this technology is being targeted at people who are offered fewer privacy protections than would be afforded to US citizens is just part of the wider trend of using people from the developing world, whether they are migrants coming to the border or civilians in war zones, to help improve new technologies. 

In fact, Boyd previously helped advance the Department of Defense’s biometric systems in Iraq and Afghanistan, where he acknowledged that individuals were subject to different rules than would have been applied in many other contexts, despite the incredibly high stakes. Biometric data collected in those war zones—in some areas, from every fighting-age male—was used to identify and target insurgents, and being misidentified could mean death.

These projects subsequently played a substantial role in influencing the expansion of biometric data collection by the Department of Defense, which now happens globally. And architects of the program, like Boyd, have taken important roles in expanding the use of biometrics at other agencies. 

“It’s not an accident” that this development happens in the context of border zones, says Molnar. Borders are “the perfect laboratory for tech experimentation, because oversight is weak, discretion is baked into the decisions that get made … it allows the state to experiment in ways that it wouldn’t be allowed to in other spaces.” 

But, she notes, “just because it happens at the border doesn’t mean that that’s where it’s going to stay.”

Correction: An earlier version of this story said the DHS had plans to collect facial data from children under 14, based on remarks by John Boyd. Following publication, the department said it had no current plans to do so. The story has also been updated to reflect DHS’s additional comments and clarifications throughout.
