Science and technology stories in the age of Trump

Rather than analyzing the news this week, I thought I’d lift the hood a bit on how we make it. 

I’ve spent most of this year being pretty convinced that Donald Trump would be the 47th president of the United States. Even so, like most people, I was completely surprised by the scope of his victory. By taking the lion’s share not just in the Electoral College but also the popular vote, coupled with the wins in the Senate (and, as I write this, seemingly the House) and ongoing control of the courts, Trump has done far more than simply eke out a win. This level of victory will certainly provide the political capital to usher in a broad sweep of policy changes.

Some of these changes will be well outside our lane as a publication. But many of President-elect Trump’s stated policy goals will have direct impacts on science and technology. Some of the proposed changes would have profound effects on the industries and innovations we’ve covered regularly, and for years. When he talks about his intention to end EV subsidies, hit the brakes on FTC enforcement actions against Big Tech, ease the rules on crypto, or impose a 60 percent tariff on goods from China, those issues are squarely in our strike zone, and we would be remiss not to explore the policies and their impact in detail.

And so I thought I would share some of my remarks from our edit meeting on Wednesday morning, when we woke up to find out that the world had indeed changed. I think it’s helpful for our audience if we are transparent and upfront about how we intend to operate, especially over the next several months that will likely be, well, chaotic. 

This is a moment when our jobs are more important than ever. There will be so much noise and heat out there in the coming weeks and months, and maybe even years. The next six months in particular will be a confusing time for a lot of people. We should strive to be the signal in that noise. 

We have extremely important stories to write about the role of science and technology in the new administration. There are obvious stories for us to take on regarding climate, energy, vaccines, women’s health, IVF, food safety, chips, China, and I’m sure a lot more, that people are going to have all sorts of questions about. Let’s start by making a list of the questions we have ourselves. Some of the people and technologies we cover will be ascendant in all sorts of ways. We should interrogate that power. It’s important that we take care in those stories not to be speculative or presumptive. To always have the facts buttoned up. To speak the truth and be unassailable in doing so.

Do we drop everything and only cover this? No. But it will certainly be a massive story that affects nearly all others.

This election will be a transformative moment for society and the world. Trump didn’t just win, he won a mandate. And he’s going to change the country and the global order as a result.  The next few weeks will see so much speculation as to what it all means. So much fear, uncertainty, and doubt. There is an enormous amount of bullshit headed down the line. People will be hungry for sources they can trust. We should be there for that. Let’s leverage our credibility, not squander it. 

We are not the resistance. We just want to tell the truth. So let’s take a breath, and then go out there and do our jobs.

I like to tell our reporters and editors that our coverage should be free from either hype or cynicism. I think that’s especially true now. 

I’m also very interested to hear from our readers: What questions do you have? What are the policy changes or staffing decisions you are curious about? Please drop me a line at mat.honan@technologyreview.com. I’m eager to hear from you. 

If someone forwarded you this edition of The Debrief, you can subscribe here.


Now read the rest of The Debrief

The News

Palmer Luckey, who was ousted from Facebook over his support for the last Trump administration and went into defense contracting, is poised to gain influence under a second Trump administration. He recently talked to MIT Technology Review about how the Pentagon is using mixed reality.

• What does Donald Trump’s relationship with Elon Musk mean for the global EV industry?

• The Biden administration was perceived as hostile to crypto. The industry can likely expect friendlier waters under Trump.

• Some counter-programming: Life-seeking robots could punch through Europa’s icy surface

• And for one more big take that’s not related to the election: AI vs quantum. AI could solve some of the most interesting scientific problems before big quantum computers become a reality


The Chat

Every week I’ll talk to one of MIT Technology Review’s reporters or editors to find out more about what they’ve been working on. This week, I chatted with Melissa Heikkilä about her story on how ChatGPT search paves the way for AI agents.

Mat: Melissa, OpenAI rolled out web search for ChatGPT last week. It seems pretty cool. But you got at a really interesting bigger picture point about it paving the way for agents. What does that mean?

Melissa: Microsoft tried to chip away at Google’s search monopoly with Bing, and that didn’t really work. It’s unlikely OpenAI will be able to make much difference either. Their best bet is to try to get users used to a new way of finding information and browsing the web through virtual assistants that can do complex tasks. Tech companies call these agents. ChatGPT’s usefulness is limited by the fact that it can’t access the internet and doesn’t have the most up-to-date information. By integrating a really powerful search engine into the chatbot, suddenly you have a tool that can help you plan things and find information in a far more comprehensive and immersive way than traditional search, and this is a key feature of the next generation of AI assistants.

Mat: What will agents be able to do?

Melissa: AI agents can complete complex tasks autonomously, and the vision is that they will work as a human assistant would — book your flights, reschedule your meetings, help with research, you name it. But I wouldn’t get too excited yet. The cutting edge of AI tech can retrieve information and generate stuff, but it still lacks the reasoning and long-term planning skills to be really useful. AI tools like ChatGPT and Claude also can’t interact with computer interfaces, like clicking on things, very well. They also need to become a lot more reliable and stop making stuff up, which is still a massive problem with AI. So we’re still a long way away from the vision becoming reality! I wrote an explainer on agents a little while ago with more details.

Mat: Is search as we know it going away? Are we just moving to a world of agents that not only answer questions but also accomplish tasks?

Melissa: It’s really hard to say. We are so used to using online search, and it’s surprisingly hard to change people’s behaviors. Unless agents become super reliable and powerful, I don’t think search is going to go away.

Mat: By the way, I know you are in the UK. Did you hear we had an election over here in the US?

Melissa: LOL
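
Before moving on: for readers who want a more concrete picture of what Melissa means by an agent, here is a minimal, hypothetical sketch of the basic loop. A model proposes a tool call, the tool runs, and the result is fed back in until the model decides it is done. The scripted stand-in model and toy tool below are for illustration only; none of this is any vendor’s actual API.

```python
# A minimal, hypothetical sketch of an "agent" loop: the model proposes a tool
# call, the tool runs, and its output is fed back in until the model is done.
# call_model is a scripted stand-in for a real language model, and the single
# tool is a toy function; none of this is any vendor's actual API.

def call_model(messages):
    # Stand-in "model": search once, then answer using whatever the tool returned.
    tool_outputs = [m["content"] for m in messages if m["role"] == "tool"]
    if not tool_outputs:
        return {"action": "search_web", "input": "flights to Helsinki in March"}
    return {"action": "done", "answer": f"Based on {tool_outputs[-1]}, here's a plan..."}

TOOLS = {
    "search_web": lambda query: f"(pretend search results for: {query})",
}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_model(messages)
        if decision["action"] == "done":
            return decision["answer"]
        result = TOOLS[decision["action"]](decision["input"])
        # Feed the tool's output back so the "model" can plan its next step.
        messages.append({"role": "tool", "content": result})
    return "Stopped: step limit reached."

print(run_agent("Find me a cheap flight to Helsinki in March"))
```

The hard parts Melissa points to (reliability, long-term planning, and interacting with real interfaces) all live inside that call_model step, which is why agents remain more promise than product for now.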


The Recommendation

I’m just back from a family vacation in New York City, where I was in town to run the marathon. (I get to point this out for like one or two more weeks before the bragging gets tedious, I think.) While there, we went to see The Outsiders. Chat, it was incredible. (Which maybe should go without saying given that it won the Tony for best musical.) But wow. I loved the book and the movie as a kid. But this hit me on an entirely other level. I’m not really a cries-at-movies (or especially at musicals) kind of person but I was wiping my eyes for much of the second act. So were very many people sitting around me. Anyway. If you’re in New York, or if it comes to your city, go see it. And until then, the soundtrack is pretty amazing on its own. (Here’s a great example.)

OpenAI brings a new web search tool to ChatGPT

ChatGPT can now search the web for up-to-date answers to a user’s queries, OpenAI announced today. 

Until now, ChatGPT was mostly restricted to generating answers from its training data, which is current up to October 2023 for GPT-4o, and had limited web search capabilities. Queries about general topics will still draw on this information from the model itself, but ChatGPT will now automatically search the web in response to queries about recent information such as sports, stocks, or news of the day, and it can deliver rich multimedia results. Users can also manually trigger a web search, but for the most part the chatbot will make its own decision about when an answer would benefit from information taken from the web, says Adam Fry, OpenAI’s product lead for search.

“Our goal is to make ChatGPT the smartest assistant, and now we’re really enhancing its capabilities in terms of what it has access to from the web,” Fry tells MIT Technology Review. The feature is available today for the chatbot’s paying users. 
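
OpenAI hasn’t said exactly how that decision gets made, but a deliberately simplified sketch can make the idea concrete. The keyword heuristic, helper functions, and fake results below are illustrative assumptions, not the product’s actual implementation.

```python
# A toy, hypothetical illustration of routing a query either to the model's
# built-in knowledge or to a live web search with cited sources. The freshness
# heuristic and helper functions are made up for illustration only.

FRESHNESS_HINTS = ("today", "latest", "score", "stock", "price", "news", "weather")

def needs_web_search(query: str) -> bool:
    q = query.lower()
    return any(hint in q for hint in FRESHNESS_HINTS)

def fetch_web_results(query: str):
    # Pretend search call; a real system would hit a search index here.
    return [{"title": "Example result", "url": "https://example.com", "snippet": "..."}]

def compose_answer(query: str, sources=None) -> str:
    if sources:
        links = ", ".join(s["url"] for s in sources)
        return f"(answer to {query!r}, citing: {links})"
    return f"(answer to {query!r} from training data)"

def answer(query: str) -> str:
    if needs_web_search(query):
        return compose_answer(query, sources=fetch_web_results(query))
    return compose_answer(query)

print(answer("What's the latest news on chip export rules?"))
print(answer("Explain how photosynthesis works"))
```

In the real product, of course, the model itself makes that call rather than a keyword list, which is the kind of judgment Fry describes ChatGPT making on its own.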

ChatGPT triggers a web search when the user asks about local restaurants in this example

While ChatGPT search, as it is known, is initially available to paying customers, OpenAI intends to make it available for free later, even when people are logged out. The company also plans to combine search with its voice features and Canvas, its interactive platform for coding and writing, although these capabilities will not be available in today’s initial launch.

The company unveiled a standalone prototype of web search in July. Those capabilities are now built directly into the chatbot. OpenAI says it has “brought the best of the SearchGPT experience into ChatGPT.” 

OpenAI is the latest tech company to debut an AI-powered search assistant, challenging similar tools from competitors such as Google, Microsoft, and the startup Perplexity. Meta, too, is reportedly developing its own AI search engine. As with Perplexity’s interface, users of ChatGPT search can interact with the chatbot in natural language, and it will offer an AI-generated answer with sources and links to further reading. In contrast, Google’s AI Overviews offer a short AI-generated summary at the top of the results page, as well as a traditional list of indexed links. 

These new tools could eventually challenge Google’s 90% market share in online search. AI search is a very important way to draw more users, says Chirag Shah, a professor at the University of Washington, who specializes in online search. But he says it is unlikely to chip away at Google’s search dominance. Microsoft’s high-profile attempt with Bing barely made a dent in the market, Shah says. 

Instead, OpenAI is trying to create a new market for more powerful and interactive AI agents, which can take complex actions in the real world, Shah says. 

The new search function in ChatGPT is a step toward these agents. 

It can also deliver highly contextualized responses that take advantage of chat histories, allowing users to go deeper in a search. Currently, ChatGPT search is able to recall conversation histories and continue the conversation with questions on the same topic. 

ChatGPT itself can also remember things about users that it can use later—sometimes it does this automatically, and sometimes you can ask it to remember something. Those “long-term” memories affect how it responds to chats. Search doesn’t have this yet—a new web search starts from scratch—but it should get the capability in the “next couple of quarters,” says Fry. When it does, OpenAI says the feature will allow ChatGPT to deliver far more personalized results based on what it knows.

“Those might be persistent memories, like ‘I’m a vegetarian,’ or it might be contextual, like ‘I’m going to New York in the next few days,’” says Fry. “If you say ‘I’m going to New York in four days,’ it can remember that fact and the nuance of that point,” he adds. 
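
To illustrate the idea (and only the idea; OpenAI hasn’t described its implementation), here is a hypothetical sketch of how stored facts about a user could be folded into a search prompt before it is answered. The names and format are assumptions.

```python
# A hypothetical sketch of "persistent memories" shaping a search query.
# OpenAI hasn't detailed how its memory feature works; this simply shows the
# general idea of prepending stored facts about the user before answering.

user_memories = [
    "I'm a vegetarian",                    # a persistent preference
    "I'm going to New York in four days",  # a contextual, time-bound fact
]

def build_search_prompt(query: str) -> str:
    memory_block = "\n".join(f"- {fact}" for fact in user_memories)
    return (
        "Known facts about the user:\n"
        f"{memory_block}\n\n"
        f"User question: {query}\n"
        "Answer using current web results, tailored to these facts."
    )

print(build_search_prompt("Where should I eat this weekend?"))
```

In this toy version, both kinds of memory Fry describes, persistent and contextual, are handled the same way: as stored context that travels with the query.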

To help develop ChatGPT’s web search, OpenAI says it leveraged its partnerships with news organizations such as Reuters, the Atlantic, Le Monde, the Financial Times, Axel Springer, Condé Nast, and Time. However, its results include information not only from these publishers but also from any other online source that does not actively block its search crawler.

It’s a positive development that ChatGPT will now be able to retrieve information from these reputable online sources and generate answers based on them, says Suzan Verberne, a professor of natural-language processing at Leiden University, who has studied information retrieval. It also allows users to ask follow-up questions.

But despite the enhanced ability to search the web and cross-check sources, the tool is not immune to the persistent tendency of AI language models to make things up or get things wrong. When MIT Technology Review tested the new search function and asked it for vacation destination ideas, ChatGPT suggested “luxury European destinations” such as Japan, Dubai, the Caribbean islands, Bali, the Seychelles, and Thailand. It offered as a source an article from the Times, a British newspaper, which listed these locations, alongside European ones, as luxury holiday options.

“Especially when you ask about untrue facts or events that never happened, the engine might still try to formulate a plausible response that is not necessarily correct,” says Verberne. There is also a risk that misinformation might seep into ChatGPT’s answers from the internet if the company has not filtered its sources well enough, she adds. 

Another risk is that the current push to access the web through AI search will disrupt the internet’s digital economy, argues Benjamin Brooks, a fellow at Harvard University’s Berkman Klein Center, who previously led public policy for Stability AI, in an op-ed published by MIT Technology Review today.

“By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and ‘eyeballs’ they need to survive,” Brooks writes.

A Note from the Editor

What are we going to eat? It is the eternal question. We humans have been asking ourselves this for as long as we have been human. The question itself can be tedious, exciting, urgent, or desperate, depending on who is asking and where. There are many parts of the world where there is no answer. 

Famine is a critical issue in Gaza, Sudan, Syria, Myanmar, and Mali, among other places. As I write this, there are people going hungry tonight in western North Carolina because of the unprecedented flooding brought on by the aftermath of Hurricane Helene. And even when hunger isn’t an acute issue, it can remain a chronic one. Some 2.3 billion people around the world suffer from food insecurity, according to the World Health Organization. In the United States alone, the USDA has found that more than 47 million people live in food-insecure households. 

This issue is all about food, and more to the point, how we can use technology—high tech and low tech—to feed more people. 

Jonathan W. Rosen explores how some in Africa are tackling hunger by reviving nearly forgotten indigenous crops. These crops are often more resilient to climate change and better suited for the region than some of the more traditional ones embraced by agribusiness. Developing and promoting them could help combat food insecurity across the continent. But as is the case with many such initiatives, a lot hinges on sufficient investment and attention. 

At the high-tech end of the spectrum, Claire L. Evans looks into the startups seeking to create food literally out of thin air. In work based in part on decades-old NASA research, a new generation of researchers is developing carbon-hungry bacteria that will munch on greenhouse gases and grow into edible foodstuffs. Yum? 

David W. Brown takes us to Mars—or a small simulacrum of it. If we are ever to spend any time on Mars, we’re going to need to grow our own food there. But there’s a problem. Well, there are a lot of problems! The soil is poisonous, for starters. And we don’t actually have any of it here to experiment with. But if the effort to make that soil arable pays off, it could not only help us bring life to Mars—it could also help support life here on Earth, converting deserts and poisoned wastelands into farmland.

As a reminder that technology is not always the answer, Douglas Main’s cover story takes on the issue of herbicide-resistant weeds. In the past few decades, more and more plants have evolved to develop this type of resistance. Even glyphosate—the chemical in Monsanto’s Roundup, which was initially marketed as being impervious to resistance—has been outpaced by some superweeds in the last 20 years. And the problem is just, well, growing. Nicola Twilley’s research on artificial refrigeration also reveals how technological advances can sometimes harm our food supply even as they help advance it. In our Q&A with her, she explains how the refrigerator has made food safer and more convenient—but at a huge cost in environmental damage (and flavor).

You won’t find only stories on food in this issue. Anna Merlan describes how the new face of AIDS denialism grew out of the choose-your-own-science school of covid vaccine trutherism—and how that movement basically threatens all of public health. Betsy Mason covers fascinating experiments in animal behavior—did you know that sleepy bees are less productive? And from Paolo Bacigalupi we have a new short story I have not stopped thinking about since I first read it. I hope you love it too. 

The coolest thing about smart glasses is not the AR. It’s the AI.

This article is from The Debrief with Mat Honan, MIT Technology Review’s weekly newsletter from its editor in chief. To receive it every Friday, sign up here.

In case you missed the memo, we are barreling toward the next big consumer device category: smart glasses. At its developer conference last week, Meta (née Facebook) introduced a positively mind-blowing new set of augmented-reality (AR) glasses dubbed Orion. Snap unveiled its new Snap Spectacles last week. Back in June at Google I/O, that company teased a pair. Apple is rumored to be working on its own model as well. Whew.

Both Meta and Snap have now put their glasses in the hands of (or maybe on the faces of) reporters. And both have proved that after years of promise, AR specs are at last A Thing. But what’s really interesting about all this to me isn’t AR at all. It’s AI.

Take Meta’s new glasses. They are still just a prototype, as the cost to build them—reportedly $10,000—is so high. But the company showed them off anyway this week, awing basically everyone who got to try them out. The holographic functions look very cool. The gesture controls also appear to function really well. And possibly best of all, they look more or less like normal, if chunky, glasses. (Caveat that I may have a different definition of normal-looking glasses from most people.) If you want to learn more about their features, Alex Heath has a great hands-on write-up in The Verge.

But what’s so intriguing to me about all this is the way smart glasses enable you to seamlessly interact with AI as you go about your day. I think that’s going to be a lot more useful than viewing digital objects in physical spaces. Put more simply: It’s not about the visual effects. It’s about the brains.

Today if you want to ask a question of ChatGPT or Google’s Gemini or what have you, you pretty much have to use your phone or laptop to do it. Sure, you can use your voice, but it still needs that device as an anchor. That’s especially true if you have a question about something you see—you’re going to need the smartphone camera for that. Meta has already pulled ahead here by letting people interact with its AI via its Ray-Ban Meta smart glasses. It’s liberating to be freed from the tether of the screen. Frankly, staring at a screen kinda sucks.

That’s why when I tried Snap’s new Spectacles a couple of weeks ago, I was less taken by the ability to simulate a golf green in the living room than I was with the way I could look out on the horizon, ask Snap’s AI agent about the tall ship I saw in the distance, and have it not only identify the ship but give me a brief description of it. Similarly, in The Verge, Heath notes that the most impressive part of Meta’s Orion demo was when he looked at a set of ingredients and the glasses told him what they were and how to make a smoothie out of them.

The killer feature of Orion or other glasses won’t be AR Ping-Pong games—batting an invisible ball around with the palm of your hand is just goofy. But the ability to use multimodal AI to better understand, interact with, and just get more out of the world around you without getting sucked into a screen? That’s amazing.

And really, that’s always been the appeal. At least to me. Back in 2013, when I was writing about Google Glass, what was most revolutionary about that extremely nascent face computer was its ability to offer up relevant, contextual information using Google Now (at the time the company’s answer to Apple’s Siri) in a way that bypassed my phone.

While I had mixed feelings about Glass overall, I argued, “You are so going to love Google Now for your face.” I still think that’s true.

Assistants that help you accomplish things in the world, without requiring complicated instructions or making you interface with a screen at all, are going to usher in a new wave of computing. Google’s demo of Project Astra, a still-unreleased AI agent it showed off this summer, was wild on a phone, but it was not until Astra ran on a pair of smart glasses that things really fired up.

Years ago, I had a spox from Magic Leap, an early company working on AR headsets, try to convince me that leaving virtual objects, like a digital bouquet of flowers, around in physical spaces for others to find would be cool. Okay … sure. And yeah, Pokémon Go was hugely popular. But it has taken generative AI, not AR gimmicks, to really make smart glasses make sense.

Multimodal AI that can understand speech, video, images, and text, combined with glasses that let it see what you see and hear what you hear, will redefine the way we interact with the world every bit as much as the smartphone did.

Finally, a weird aside: Orion was the great huntsman of Greek mythology. (And, of course, is the constellation you see up in the sky.) There are lots of versions of his story, but a common one is that the king of Chios blinded him after Orion drunkenly raped the king’s daughter.  He eventually regained his vision by looking into the rising sun.

It’s a dramatic story, but maybe not the best product name for a pair of glasses.

Here’s what I made of Snap’s new augmented-reality Spectacles

Before I get to Snap’s new Spectacles, a confession: I have a long history of putting goofy new things on my face and liking it. Back in 2011, I tried on Sony’s head-mounted 3D glasses and, apparently, enjoyed them. Sort of. At the beginning of 2013, I was enamored with a Kickstarter project I saw at CES called Oculus Rift. I then spent the better part of the year with Google’s ridiculous Glass on my face and thought it was the future. Microsoft HoloLens? Loved it. Google Cardboard? Totally normal. Apple Vision Pro? A breakthrough, baby. 

Anyway. Snap announced a new version of its Spectacles today. These are AR glasses that could finally deliver on the promises that devices like Magic Leap, HoloLens, and even Google Glass made many years ago. I got to try them out a couple of weeks ago. They are pretty great! (But also: see above.)

These fifth-generation Spectacles can display visual information and applications directly on their see-through lenses, making objects appear as if they are in the real world. The interface is powered by the company’s new operating system, Snap OS. Unlike typical VR headsets or spatial computing devices, these augmented-reality (AR) lenses don’t obscure your vision and re-create it with cameras. There is no screen covering your field of view. Instead, images appear to float and exist in three dimensions in the world around you, hovering in the air or resting on tables and floors.

Snap CTO Bobby Murphy described the intended result to MIT Technology Review as “computing overlaid on the world that enhances our experience of the people in the places that are around us, rather than isolating us or taking us out of that experience.” 

In my demo, I was able to stack Lego pieces on a table, smack an AR golf ball into a hole across the room (at least a triple bogey), paint flowers and vines across the ceilings and walls using my hands, and ask questions about the objects I was looking at and receive answers from Snap’s virtual AI chatbot. There was even a little purple virtual doglike creature from Niantic, a Peridot, that followed me around the room and outside onto a balcony. 

But look up from the table and you see a normal room. The golf ball is on the floor, not a virtual golf course. The Peridot perches on a real balcony railing. Crucially, this means you can maintain contact—including eye contact—with the people around you in the room. 

To accomplish all this, Snap packed a lot of tech into the frames. There are two processors embedded inside, so all the compute happens in the glasses themselves. Cooling chambers in the sides did an effective job of dissipating heat in my demo. Four cameras capture the world around you, as well as the movement of your hands for gesture tracking. The images are displayed via micro-projectors, similar to those found in pico projectors, that do a nice job of presenting those three-dimensional images right in front of your eyes without requiring a lot of initial setup. It creates a tall, deep field of view—Snap claims it is similar to a 100-inch display at 10 feet—in a relatively small, lightweight device (226 grams). What’s more, they automatically darken when you step outside, so they work well not just in your home but out in the world.
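
As a rough sanity check on that comparison (my back-of-the-envelope math, not Snap’s spec sheet), a 100-inch diagonal viewed from 10 feet works out to a diagonal field of view of roughly 45 degrees:

```python
# Back-of-the-envelope check of the "100-inch display at 10 feet" comparison.
# The implied diagonal field of view is 2 * atan((diagonal / 2) / distance).
import math

diagonal_in = 100.0        # diagonal of the virtual "display," in inches
distance_in = 10.0 * 12.0  # viewing distance: 10 feet = 120 inches

fov_deg = 2 * math.degrees(math.atan((diagonal_in / 2) / distance_in))
print(f"Implied diagonal field of view: {fov_deg:.1f} degrees")  # roughly 45 degrees
```

In other words, the virtual imagery occupies a window of about 45 degrees of your vision rather than filling your whole field of view.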

You control all this with a combination of voice and hand gestures, most of which came pretty naturally to me. You can pinch to select objects and drag them around, for example. The AI chatbot could respond to questions posed in natural language (“What’s that ship I see in the distance?”). Some of the interactions require a phone, but for the most part Spectacles are a standalone device. 

They don’t come cheap, either. Snap isn’t selling the glasses directly to consumers; instead, you have to agree to at least one year of paying $99 per month for a Spectacles Developer Program account that gives you access to them. I was assured that the company has a very open definition of who can develop for the platform. Snap also announced a new partnership with OpenAI that takes advantage of its multimodal capabilities, which it says will help developers create experiences with real-world context about the things people see or hear (or say).

The author of the post standing outside wearing oversize Snap Spectacles. It me.

Having said that, it all worked together impressively well. The three-dimensional objects maintained a sense of permanence in the spaces where you placed them—meaning you can move around and they stay put. The AI assistant correctly identified everything I asked it to. There were some glitches here and there—Lego bricks collapsing into each other, for example—but for the most part this was a solid little device. 

It is not, however, a low-profile one. No one will mistake these for a normal pair of glasses or sunglasses. A colleague described them as beefed-up 3D glasses, which seems about right. They are not the silliest computer I have put on my face, but they didn’t exactly make me feel like a cool guy, either. Here’s a photo of me trying them out. Draw your own conclusions.
