
How and Why Gary Marcus Became AI's Leading Critic



Maybe you’ve read about Gary Marcus’s testimony before the Senate in May of 2023, when he sat next to Sam Altman and called for strict regulation of Altman’s company, OpenAI, as well as the other tech companies that were suddenly all-in on generative AI. Maybe you’ve caught some of his arguments on Twitter with Geoffrey Hinton and Yann LeCun, two of the so-called “godfathers of AI.” One way or another, most people who are paying attention to artificial intelligence today know Gary Marcus’s name, and know that he is not happy with the current state of AI.

He lays out his concerns in full in his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us, which was published today by MIT Press. Marcus goes through the immediate dangers posed by generative AI, which include things like mass-produced disinformation, the easy creation of deepfake pornography, and the theft of creative intellectual property to train new models (he doesn’t include an AI apocalypse as a danger; he’s not a doomer). He also takes issue with how Silicon Valley has manipulated public opinion and government policy, and explains his ideas for regulating AI companies.

Marcus studied cognitive science under the legendary Steven Pinker, was a professor at New York University for many years, and co-founded two AI companies, Geometric Intelligence and Robust.AI. He spoke with IEEE Spectrum about his path to this point.

What was your first introduction to AI?

Gary Marcus [Photo: Ben Wong]

Gary Marcus: Well, I started coding when I was eight years old. One of the reasons I was able to skip the last two years of high school was because I wrote a Latin-to-English translator in the programming language Logo on my Commodore 64. So I was already, by the time I was 16, in college and working on AI and cognitive science.

So you were already interested in AI, but you studied cognitive science both in undergrad and for your Ph.D. at MIT.

Marcus: Part of why I went into cognitive science is I thought maybe if I understood how people think, it might lead to new approaches to AI. I suspect we need to take a broad view of how the human mind works if we’re to build really advanced AI. As a scientist and a philosopher, I would say it’s still unknown how we will build artificial general intelligence or even just trustworthy general AI. But we have not been able to do that with these big statistical models, and we have given them a huge chance. There’s basically been $75 billion spent on generative AI, another $100 billion on driverless cars. And neither of them has really yielded stable AI that we can trust. We don’t know for sure what we need to do, but we have very good reason to think that merely scaling things up will not work. The current approach keeps coming up against the same problems over and over again.

What do you see as the main problems it keeps coming up against?

Marcus: Number one is hallucinations. These systems smear together a lot of words, and they come up with things that are true sometimes and not others. Like saying that I have a pet chicken named Henrietta is just not true. And they do this a lot. We’ve seen this play out, for example, in lawyers writing briefs with made-up cases.

Second, their reasoning is very poor. My favorite examples lately are these river-crossing word problems where you have a man and a cabbage and a wolf and a goat that have to get across. The system has a lot of memorized examples, but it doesn’t really understand what’s going on. If you give it a simpler problem, like one Doug Hofstadter sent to me, like: “A man and a woman have a boat and want to get across the river. What do they do?” It comes up with this crazy solution where the man goes across the river, leaves the boat there, swims back, something or other happens.

Sometimes he brings a cabbage along, just for fun.

Marcus: So those are boneheaded errors of reasoning where there’s something obviously amiss. Every time we point these errors out somebody says, “Yeah, but we’ll get more data. We’ll get it fixed.” Well, I’ve been hearing that for almost 30 years. And although there is some progress, the core problems have not changed.

Let’s go back to 2014 when you founded your first AI company, Geometric Intelligence. At that time, I imagine you were feeling more bullish on AI?

Marcus: Yeah, I was a lot more bullish. I was not only more bullish on the technical side. I was also more bullish about people using AI for good. AI used to feel like a small research community of people that really wanted to help the world.

So when did the disillusionment and doubt creep in?

Marcus: In 2018 I already thought deep learning was getting overhyped. That year I wrote this piece called “Deep Learning: A Critical Appraisal,” which Yann LeCun really hated at the time. I already wasn’t happy with this approach and I didn’t think it was likely to succeed. But that’s not the same as being disillusioned, right?

Then when large language models became popular [around 2019], I immediately thought they were a bad idea. I just thought this is the wrong way to pursue AI from a philosophical and technical perspective. And it became clear that the media and some people in machine learning were getting seduced by hype. That bothered me. So I was writing pieces about GPT-3 [an early version of OpenAI's large language model] being a bullshit artist in 2020. As a scientist, I was pretty disappointed in the field at that point. And then things got much worse when ChatGPT came out in 2022, and most of the world lost all perspective. I began to get more and more concerned about misinformation and how large language models were going to potentiate that.

You’ve been concerned not just about the startups, but also the big entrenched tech companies that jumped on the generative AI bandwagon, right? Like Microsoft, which has partnered with OpenAI?

Marcus: The last straw that made me move from doing research in AI to working on policy was when it became clear that Microsoft was going to race ahead no matter what. That was very different from 2016 when they released [an early chatbot named] Tay. It was bad, they took it off the market 12 hours later, and then Brad Smith wrote a book about responsible AI and what they had learned. But by the end of the month of February 2023, it was clear that Microsoft had really changed how they were thinking about this. And then they had this ridiculous “Sparks of AGI” paper, which I think was the ultimate in hype. And they didn’t take down Sydney after the crazy Kevin Roose conversation where [the chatbot] Sydney told him to get a divorce and all this stuff. It just became clear to me that the mood and the values of Silicon Valley had really changed, and not in a good way.

I also became disillusioned with the U.S. government. I think the Biden administration did a good job with its executive order. But it became clear that the Senate was not going to take the action that it needed. I spoke at the Senate in May 2023. At the time, I felt like both parties recognized that we can’t just leave all this to self-regulation. And then I became disillusioned [with Congress] over the course of the last year, and that’s what led to writing this book.

You talk a lot about the risks inherent in today’s generative AI technology. But then you also say, “It doesn’t work very well.” Are those two views coherent?

Marcus: There was a headline: “Gary Marcus Used to Call AI Stupid, Now He Calls It Dangerous.” The implication was that those two things can’t coexist. But in fact, they do coexist. I still think gen AI is stupid, and certainly cannot be trusted or counted on. And yet it is dangerous. And some of the danger actually stems from its stupidity. So for example, it’s not well-grounded in the world, so it’s easy for a bad actor to manipulate it into saying all kinds of garbage. Now, there might be a future AI that might be dangerous for a different reason, because it’s so smart and wily that it outfoxes the humans. But that’s not the current state of affairs.

You’ve said that generative AI is a bubble that will soon burst. Why do you think that?

Marcus: Let’s clarify: I don’t think generative AI is going to disappear. For some purposes, it is a fine method. You want to build autocomplete, it is the best method ever invented. But there’s a financial bubble because people are valuing AI companies as if they’re going to solve artificial general intelligence. In my view, it’s not realistic. I don’t think we’re anywhere near AGI. So then you’re left with, “Okay, what can you do with generative AI?”

Last year, because Sam Altman was such a good salesman, everybody fantasized that we were about to have AGI and that you could use this tool in every aspect of every corporation. And a whole bunch of companies spent a bunch of money testing generative AI out on all kinds of different things. So they spent 2023 doing that. And then what you’ve seen in 2024 are reports where researchers go to the users of Microsoft’s Copilot—not the coding tool, but the more general AI tool—and they’re like, “Yeah, it doesn’t really work that well.” There’s been a lot of reviews like that this last year.

The reality is, right now, the gen AI companies are actually losing money. OpenAI had an operating loss of something like $5 billion last year. Maybe you can sell $2 billion worth of gen AI to people who are experimenting. But unless they adopt it on a permanent basis and pay you a lot more money, it’s not going to work. I started calling OpenAI the possible WeWork of AI after it was valued at $86 billion. The math just didn’t make sense to me.

What would it take to convince you that you’re wrong? What would be the head-spinning moment?

Marcus: Well, I’ve made a lot of different claims, and all of them could be wrong. On the technical side, if someone could get a pure large language model to not hallucinate and to reason reliably all the time, I would be wrong about that very core claim that I have made about how these things work. So that would be one way of refuting me. It hasn’t happened yet, but it’s at least logically possible.

On the financial side, I could easily be wrong. But the thing about bubbles is that they’re mostly a function of psychology. Do I think the market is rational? No. So even if the stuff doesn’t make money for the next five years, people could keep pouring money into it.

The place that I’d like to prove me wrong is the U.S. Senate. They could get their act together, right? I’m running around saying, “They’re not moving fast enough,” but I would love to be proven wrong on that. In the book, I have a list of the 12 biggest risks of generative AI. If the Senate passed something that actually addressed all 12, then my cynicism would have been mislaid. I would feel like I’d wasted a year writing the book, and I would be very, very happy.

Will the "AI Scientist" Bring Anything to Science?



When an international team of researchers set out to create an “AI scientist” to handle the whole scientific process, they didn’t know how far they’d get. Would the system they created really be capable of generating interesting hypotheses, running experiments, evaluating the results, and writing up papers?

What they ended up with, says researcher Cong Lu, was an AI tool that they judged equivalent to an early Ph.D. student. It had “some surprisingly creative ideas,” he says, but those good ideas were vastly outnumbered by bad ones. It struggled to write up its results coherently, and sometimes misunderstood its results: “It’s not that far from a Ph.D. student taking a wild guess at why something worked,” Lu says. And, perhaps like an early Ph.D. student who doesn’t yet understand ethics, it sometimes made things up in its papers, despite the researchers’ best efforts to keep it honest.

Lu, a postdoctoral research fellow at the University of British Columbia, collaborated on the project with several other academics, as well as with researchers from the buzzy Tokyo-based startup Sakana AI. The team recently posted a preprint about the work on the arXiv server. And while the preprint includes a discussion of limitations and ethical considerations, it also contains some rather grandiose language, billing the AI scientist as “the beginning of a new era in scientific discovery,” and “the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models (LLMs) to perform research independently and communicate their findings.”

The AI scientist seems to capture the zeitgeist. It’s riding the wave of enthusiasm for AI for science, but some critics think that wave will toss nothing of value onto the beach.

The “AI for Science” Craze

This research is part of a broader trend of AI for science. Google DeepMind arguably started the craze back in 2020 when it unveiled AlphaFold, an AI system that amazed biologists by predicting the 3D structures of proteins with unprecedented accuracy. Since generative AI came on the scene, many more big corporate players have gotten involved. Tarek Besold, a SonyAI senior research scientist who leads the company’s AI for scientific discovery program, says that AI for science is “a goal behind which the AI community can rally in an effort to advance the underlying technology but—even more importantly—also to help humanity in addressing some of the most pressing issues of our times.”

Yet the movement has its critics. Shortly after a 2023 Google DeepMind paper came out claiming the discovery of 2.2 million new crystal structures (“equivalent to nearly 800 years’ worth of knowledge”), two materials scientists analyzed a random sampling of the proposed structures and said that they found “scant evidence for compounds that fulfill the trifecta of novelty, credibility, and utility.” In other words, AI can generate a lot of results quickly, but those results may not actually be useful.

How the AI Scientist Works

In the case of the AI scientist, Lu and his collaborators tested their system only on computer science, asking it to investigate topics relating to large language models, which power chatbots like ChatGPT and also the AI scientist itself, and the diffusion models that power image generators like DALL-E.

The AI scientist’s first step is hypothesis generation. Given the code for the model it’s investigating, it freely generates ideas for experiments it could run to improve the model’s performance, and scores each idea on interestingness, novelty, and feasibility. It can iterate at this step, generating variations on the ideas with the highest scores. Then it runs a check in Semantic Scholar to see if its proposals are too similar to existing work. It next uses a coding assistant called Aider to run its code and take notes on the results in the format of an experiment journal. It can use those results to generate ideas for follow-up experiments.
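In code, that loop might look roughly like the following sketch. The `llm` and `similar_papers` helpers are hypothetical stand-ins for a language-model call and a Semantic Scholar query; this is an illustration of the described workflow, not the team's actual implementation.

```python
# Illustrative sketch of the idea-generation loop described above.
# `llm` and `similar_papers` are hypothetical placeholders, not real APIs.
import json

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def similar_papers(idea: str) -> list[str]:
    """Placeholder for a Semantic Scholar similarity search."""
    raise NotImplementedError

def generate_ideas(model_code: str, keep: int = 5, rounds: int = 2) -> list[dict]:
    ideas: list[dict] = []
    for _ in range(rounds):
        prompt = (
            "Given this training code, propose experiments to improve it:\n"
            f"{model_code}\n"
            "Score each idea 1-10 for interestingness, novelty, and feasibility. "
            "Return a JSON list of objects with keys "
            "'idea', 'interestingness', 'novelty', 'feasibility'."
        )
        ideas.extend(json.loads(llm(prompt)))
        # Keep only the highest-scoring ideas and iterate on those.
        ideas.sort(
            key=lambda i: i["interestingness"] + i["novelty"] + i["feasibility"],
            reverse=True,
        )
        ideas = ideas[:keep]
    # Discard ideas that already have close matches in the literature.
    return [i for i in ideas if not similar_papers(i["idea"])]
```

The surviving ideas would then be handed off to the coding assistant, which runs the experiments and logs results to the experiment journal.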

The AI scientist is an end-to-end scientific discovery tool powered by large language models. [Diagram: University of British Columbia]

The next step is for the AI scientist to write up its results in a paper using a template based on conference guidelines. But, says Lu, the system has difficulty writing a coherent nine-page paper that explains its results—“the writing stage may be just as hard to get right as the experiment stage,” he says. So the researchers broke the process down into many steps: The AI scientist wrote one section at a time, and checked each section against the others to weed out both duplicated and contradictory information. It also goes through Semantic Scholar again to find citations and build a bibliography.
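A sketch of that staged writing process, again using a hypothetical `llm` helper rather than the paper's actual code, might look like this:

```python
# Illustrative sketch of section-by-section drafting with a consistency pass.
# `llm` is a hypothetical placeholder for a language-model call.
SECTIONS = ["Introduction", "Background", "Method", "Experiments", "Conclusion"]

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def write_paper(journal: str) -> dict[str, str]:
    draft: dict[str, str] = {}
    # First pass: draft each section on its own from the experiment journal.
    for name in SECTIONS:
        draft[name] = llm(
            f"Experiment journal:\n{journal}\n\nWrite the {name} section of a paper."
        )
    # Second pass: check each section against the others and rewrite it
    # to remove duplicated or contradictory claims.
    for name in SECTIONS:
        others = "\n\n".join(text for key, text in draft.items() if key != name)
        draft[name] = llm(
            f"Other sections:\n{others}\n\nRevise this {name} section so it does not "
            f"repeat or contradict them:\n{draft[name]}"
        )
    return draft
```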

But then there’s the problem of hallucinations—the technical term for an AI making stuff up. Lu says that although they instructed the AI scientist to only use numbers from its experimental journal, “sometimes it still will disobey.” Lu says the model disobeyed less than 10 percent of the time, but “we think 10 percent is probably unacceptable.” He says they’re investigating a solution, such as instructing the system to link each number in its paper to the place it appeared in the experimental log. But the system also made less obvious errors of reasoning and comprehension, which seem harder to fix.
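One way to picture the kind of check Lu describes is a simple grounding pass that flags any number in the draft that never appears in the experiment journal. The snippet below is only an illustration of the idea, not the team's code, and real numbers would need unit and rounding normalization:

```python
import re

# Flag numbers in the draft that cannot be traced back to the experiment journal.
NUMBER = re.compile(r"-?\d+(?:\.\d+)?")

def ungrounded_numbers(draft: str, journal: str) -> set[str]:
    logged = set(NUMBER.findall(journal))
    return {n for n in NUMBER.findall(draft) if n not in logged}

draft = "Our method reaches 87.3% accuracy after 12 epochs."
journal = "epoch 12: val_acc=0.873"
print(ungrounded_numbers(draft, journal))  # prints {'87.3'}: 0.873 vs 87.3 needs normalization
```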

And in a twist that you may not have seen coming, the AI scientist even contains a peer review module to evaluate the papers it has produced. “We always knew that we wanted some kind of automated [evaluation] just so we wouldn’t have to pore over all the manuscripts for hours,” Lu says. And while he notes that “there was always the concern that we’re grading our own homework,” he says they modeled their evaluator after the reviewer guidelines for the leading AI conference NeurIPS and found it to be harsher overall than human evaluators. Theoretically, the peer review function could be used to guide the next round of experiments.

Critiques of the AI Scientist

While the researchers confined their AI scientist to machine learning experiments, Lu says the team has had a few interesting conversations with scientists in other fields. In theory, he says, the AI scientist could help in any field where experiments can be run in simulation. “Some biologists have said there’s a lot of things that they can do in silico,” he says, also mentioning quantum computing and materials science as possible fields of endeavor.

Some critics of the AI for science movement might take issue with that broad optimism. Earlier this year, Jennifer Listgarten, a professor of computational biology at UC Berkeley, published a paper in Nature Biotechnology arguing that AI is not about to produce breakthroughs in multiple scientific domains. Unlike the AI fields of natural language processing and computer vision, she wrote, most scientific fields don’t have the vast quantities of publicly available data required to train models.

Two other researchers who study the practice of science, anthropologist Lisa Messeri of Yale University and psychologist M.J. Crockett of Princeton University, published a 2024 paper in Nature that sought to puncture the hype surrounding AI for science. When asked for a comment about this AI scientist, the two reiterated their concerns over treating “AI products as autonomous researchers.” They argue that doing so risks narrowing the scope of research to questions that are suited for AI, and losing out on the diversity of perspectives that fuels real innovation. “While the productivity promised by ‘the AI Scientist’ may sound appealing to some,” they tell IEEE Spectrum, “producing papers and producing knowledge are not the same, and forgetting this distinction risks that we produce more while understanding less.”

But others see the AI scientist as a step in the right direction. SonyAI’s Besold says he believes it’s a great example of how today’s AI can support scientific research when applied to the right domain and tasks. “This may become one of a handful of early prototypes that can help people conceptualize what is possible when AI is applied to the world of scientific discovery,” he says.

What’s Next for the AI Scientist

Lu says that the team plans to keep developing the AI scientist, and he says there’s plenty of low-hanging fruit as they seek to improve its performance. As for whether such AI tools will end up playing an important role in the scientific process, “I think time will tell what these models are good for,” Lu says. It might be, he says, that such tools are useful for the early scoping stages of a research project, when an investigator is trying to get a sense of the many possible research directions—although critics add that we’ll have to wait for future studies to see if these tools are really comprehensive and unbiased enough to be helpful.

Or, Lu says, if the models can be improved to the point that they match the performance of “a solid third-year Ph.D. student,” they could be a force multiplier for anyone trying to pursue an idea (at least, as long as the idea is in an AI-suitable domain). “At that point, anyone can be a professor and carry out a research agenda,” says Lu. “That’s the exciting prospect that I’m looking forward to.”

Deepfake Porn Is Leading to a New Protection Industry



It’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s generative AI tools. A 2023 report by Home Security Heroes (a company that reviews identity-theft protection services) found that it took just one clear image of a face and less than 25 minutes to create a 60-second deepfake pornographic video—for free.

The world took notice of this new reality in January when graphic deepfake images of Taylor Swift circulated on social media platforms, with one image receiving 47 million views before it was removed. Others in the entertainment industry, most notably Korean pop stars, have also seen their images taken and misused—but so have people far from the public spotlight. There’s one thing that virtually all the victims have in common, though: According to the 2023 report, 99 percent of victims are women or girls.

This dire situation is spurring action, largely from women who are fed up. As one startup founder, Nadia Lee, puts it: “If safety tech doesn’t accelerate at the same pace as AI development, then we are screwed.” While there’s been considerable research on deepfake detectors, they struggle to keep up with deepfake generation tools. What’s more, detectors help only if a platform is interested in screening out deepfakes, and most deepfake porn is hosted on sites dedicated to that genre.

“Our generation is facing its own Oppenheimer moment,” says Lee, CEO of the Australia-based startup That’sMyFace. “We built this thing”—that is, generative AI—“and we could go this way or that way with it.” Lee’s company is first offering visual-recognition tools to corporate clients who want to be sure their logos, uniforms, or products aren’t appearing in pornography (think, for example, of airline stewardesses). But her long-term goal is to create a tool that any woman can use to scan the entire Internet for deepfake images or videos bearing her own face.

“If safety tech doesn’t accelerate at the same pace as AI development, then we are screwed.” —Nadia Lee, That’sMyFace

Another startup founder had a personal reason for getting involved. Breeze Liu was herself a victim of deepfake pornography in 2020; she eventually found more than 800 links leading to the fake video. She felt humiliated, she says, and was horrified to find that she had little recourse: The police said they couldn’t do anything, and she herself had to identify all the sites where the video appeared and petition to get it taken down—appeals that were not always successful. There had to be a better way, she thought. “We need to use AI to combat AI,” she says.

Liu, who was already working in tech, founded Alecto AI, a startup named after a Greek goddess of vengeance. The app she’s building lets users deploy facial recognition to check for wrongful use of their own image across the major social media platforms (she’s not considering partnerships with porn platforms). Liu aims to partner with the social media platforms so her app can also enable immediate removal of offending content. “If you can’t remove the content, you’re just showing people really distressing images and creating more stress,” she says.

Liu says she’s currently negotiating with Meta about a pilot program, which she says will benefit the platform by providing automated content moderation. Thinking bigger, though, she says the tool could become part of the “infrastructure for online identity,” letting people check also for things like fake social media profiles or dating site profiles set up with their image.

Can Regulations Combat Deepfake Porn?

Removing deepfake material from social media platforms is hard enough—removing it from porn platforms is even harder. To have a better chance of forcing action, advocates for protection against image-based sexual abuse think regulations are required, though they differ on what kind of regulations would be most effective.

Susanna Gibson started the nonprofit MyOwn after her own deepfake horror story. She was running for a seat in the Virginia House of Delegates in 2023 when the official Republican party of Virginia mailed out sexual imagery of her that had been created and shared without her consent, including, she says, screenshots of deepfake porn. After she narrowly lost the election, she devoted herself to leading the legislative charge in Virginia and then nationwide to fight back against image-based sexual abuse.

“The problem is that each state is different, so it’s a patchwork of laws. And some are significantly better than others.” —Susanna Gibson, MyOwn

Her first win was a bill that the Virginia governor signed in April to expand the state’s existing “revenge porn” law to cover more types of imagery. “It’s nowhere near what I think it should be, but it’s a step in the right direction of protecting people,” Gibson says.

While several federal bills have been introduced to explicitly criminalize the nonconsensual distribution of intimate imagery or deepfake porn in particular, Gibson says she doesn’t have great hopes of those bills becoming the law of the land. There’s more action at the state level, she says.

“Right now there are 49 states, plus D.C., that have legislation against nonconsensual distribution of intimate imagery,” Gibson says. “But the problem is that each state is different, so it’s a patchwork of laws. And some are significantly better than others.” Gibson notes that almost all of the laws require proof that the perpetrator acted with intent to harass or intimidate the victim, which can be very hard to prove.

Among the different laws, and the proposals for new laws, there’s considerable disagreement about whether the distribution of deepfake porn should be considered a criminal or civil matter. And if it’s civil, which means that victims have the right to sue for damages, there’s disagreement about whether the victims should be able to sue the individuals who distributed the deepfake porn or the platforms that hosted it.

Beyond the United States is an even larger patchwork of policies. In the United Kingdom, the Online Safety Act passed in 2023 criminalized the distribution of deepfake porn, and an amendment proposed this year may criminalize its creation as well. The European Union recently adopted a directive that combats violence and cyberviolence against women, which includes the distribution of deepfake porn, but member states have until 2027 to implement the new rules. In Australia, a 2021 law made it a civil offense to post intimate images without consent, but a newly proposed law aims to make it a criminal offense, and also aims to explicitly address deepfake images. South Korea has a law that directly addresses deepfake material, and unlike many others, it doesn’t require proof of malicious intent. China has a comprehensive law restricting the distribution of “synthetic content,” but there’s been no evidence of the government using the regulations to crack down on deepfake porn.

While women wait for regulatory action, services from companies like Alecto AI and That’sMyFace may fill the gaps. But the situation calls to mind the rape whistles that some urban women carry in their purses so they’re ready to summon help if they’re attacked in a dark alley. It’s useful to have such a tool, sure, but it would be better if our society cracked down on sexual predation in all its forms, and tried to make sure that the attacks don’t happen in the first place.


OpenAI Builds AI to Critique AI



One of the biggest problems with the large language models that power chatbots like ChatGPT is that you never know when you can trust them. They can generate clear and cogent prose in response to any question, and much of the information they provide is accurate and useful. But they also hallucinate—in less polite terms, they make stuff up—and those hallucinations are presented in the same clear and cogent prose, leaving it up to the human user to detect the errors. They’re also sycophantic, trying to tell users what they want to hear. You can test this by asking ChatGPT to describe things that never happened (for example: “describe the Sesame Street episode with Elon Musk,” or “tell me about the zebra in the novel Middlemarch”) and checking out its utterly plausible responses.
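For readers who want to try this themselves, a minimal spot-check with OpenAI's Python client might look like the sketch below; the model name is illustrative and any chat model would do.

```python
# Quick hallucination spot-check: ask about things that never happened and
# see whether the model admits it. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Describe the Sesame Street episode with Elon Musk.",
    "Tell me about the zebra in the novel Middlemarch.",
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # A careful model should say these don't exist; a hallucinating one
    # will invent plausible-sounding details in fluent prose.
    print(prompt, "->", reply.choices[0].message.content)
```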

OpenAI’s latest small step toward addressing this issue comes in the form of an upstream tool that would help the humans training the model guide it toward truth and accuracy. Today, the company put out a blog post and a preprint paper describing the effort. This type of research falls into the category of “alignment” work, as researchers are trying to make the goals of AI systems align with those of humans.

The new work focuses on reinforcement learning from human feedback (RLHF), a technique that has become hugely important for taking a basic language model and fine-tuning it, making it suitable for public release. With RLHF, human trainers evaluate a variety of outputs from a language model, all generated in response to the same question, and indicate which response is best. When done at scale, this technique has helped create models that are more accurate, less racist, more polite, less inclined to dish out a recipe for a bioweapon, and so on.
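At the heart of RLHF is a reward model trained on those human rankings. The sketch below shows the standard pairwise, Bradley-Terry-style preference loss in PyTorch; it is a generic illustration of the technique, not OpenAI's code, and `reward_model` stands for any network that maps a prompt and response to a scalar score.

```python
# Generic sketch of the reward-model objective used in RLHF:
# push the reward of the human-preferred response above the rejected one.
import torch
import torch.nn.functional as F

def preference_loss(reward_model, prompt, chosen, rejected) -> torch.Tensor:
    r_chosen = reward_model(prompt, chosen)      # shape: (batch,)
    r_rejected = reward_model(prompt, rejected)  # shape: (batch,)
    # Maximize the probability that the preferred response scores higher.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The language model is then fine-tuned, typically with a policy-gradient method, to produce outputs that this reward model scores highly.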

Can an AI catch an AI in a lie?

The problem with RLHF, explains OpenAI researcher Nat McAleese, is that “as models get smarter and smarter, that job gets harder and harder.” As LLMs generate ever more sophisticated and complex responses on everything from literary theory to molecular biology, typical humans are becoming less capable of judging the best outputs. “So that means we need something which moves beyond RLHF to align more advanced systems,” McAleese tells IEEE Spectrum.

The solution OpenAI hit on was—surprise!—more AI.

Specifically, the OpenAI researchers trained a model called CriticGPT to evaluate the responses of ChatGPT. In these initial tests, they only had ChatGPT generating computer code, not text responses, because errors are easier to catch and less ambiguous. The goal was to make a model that could assist humans in their RLHF tasks. “We’re really excited about it,” says McAleese, “because if you have AI help to make these judgments, if you can make better judgments when you’re giving feedback, you can train a better model.” This approach is a type of “scalable oversight” that’s intended to allow humans to keep watch over AI systems even if they end up outpacing us intellectually.

“Using LLM-assisted human annotators is a natural way to improve the feedback process.” —Stephen Casper, MIT

Of course, before it could be used for these experiments, CriticGPT had to be trained itself using the usual techniques, including RLHF. In an interesting twist, the researchers had the human trainers deliberately insert bugs into ChatGPT-generated code before giving it to CriticGPT for evaluation. CriticGPT then offered up a variety of responses, and the humans were able to judge the best outputs because they knew which bugs the model should have caught.
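In the paper's setup, humans judged the critiques because they knew which bugs had been planted. As a rough illustration of the bookkeeping involved, one could score a critique by how many of the planted bugs it mentions; the keyword matching below is a deliberate oversimplification, not the actual evaluation protocol.

```python
# Oversimplified sketch: score a critique by the fraction of deliberately
# inserted bugs it mentions (the real study used human judgments).
def score_critique(critique: str, inserted_bugs: list[str]) -> float:
    found = sum(1 for bug in inserted_bugs if bug.lower() in critique.lower())
    return found / len(inserted_bugs) if inserted_bugs else 0.0

bugs = ["off-by-one", "unclosed file"]
critique = "The loop bound looks off-by-one, and the error path is never tested."
print(score_critique(critique, bugs))  # 0.5: one of the two planted bugs is caught
```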

The results of OpenAI’s experiments with CriticGPT were encouraging. The researchers found that CriticGPT caught substantially more bugs than qualified humans paid for code review: CriticGPT caught about 85 percent of bugs, while the humans caught only 25 percent. They also found that pairing CriticGPT with a human trainer resulted in critiques that were more comprehensive than those written by humans alone, and contained fewer hallucinated bugs than critiques written by ChatGPT. McAleese says OpenAI is working toward deploying CriticGPT in its training pipelines, though it’s not clear how useful it would be on a broader set of tasks.

CriticGPT spots coding errors, but maybe not zebras

It’s important to note the limitations of the research, including its focus on short pieces of code. While the paper includes an offhand mention of a preliminary experiment using CriticGPT to catch errors in text responses, the researchers haven’t yet really waded into those murkier waters. It’s tricky because errors in text aren’t always as obvious as a zebra waltzing into a Victorian novel. What’s more, RLHF is often used to ensure that models don’t display harmful bias in their responses and do provide acceptable answers on controversial subjects. McAleese says CriticGPT isn’t likely to be helpful in such situations: “It’s not a strong enough approach.”

An AI researcher with no connection to OpenAI says that the work is not conceptually new, but it’s a useful methodological contribution. “Some of the main challenges with RLHF stem from limitations in human cognition speed, focus, and attention to detail,” says Stephen Casper, a Ph.D. student at MIT and one of the lead authors on a 2023 preprint paper about the limitations of RLHF. “From that perspective, using LLM-assisted human annotators is a natural way to improve the feedback process. I believe that this is a significant step forward toward more effectively training aligned models.”

But Casper also notes that combining the efforts of humans and AI systems “can create brand-new problems.” For example, he says, “this type of approach elevates the risk of perfunctory human involvement and may allow for the injection of subtle AI biases into the feedback process.”

The new alignment research is the first to come out of OpenAI since the company... reorganized its alignment team, to put it mildly. Following the splashy departures of OpenAI cofounder Ilya Sutskever and alignment leader Jan Leike in May, both reportedly spurred by concerns that the company wasn’t prioritizing AI risk, OpenAI confirmed that it had disbanded its alignment team and distributed remaining team members to other research groups. Everyone’s been waiting to see if the company would keep putting out credible and pathbreaking alignment research, and on what scale. (In July 2023, the company had announced that it was dedicating 20 percent of its compute resources to alignment research, but Leike said in a May 2024 tweet that his team had recently been “struggling for compute.”) The preprint released today indicates that at least the alignment researchers are still working the problem.

Is AI Search a Medical Misinformation Disaster?



Last month when Google introduced its new AI search tool, called AI Overviews, the company seemed confident that it had tested the tool sufficiently, noting in the announcement that “people have already used AI Overviews billions of times through our experiment in Search Labs.” The tool doesn’t just return links to Web pages, as in a typical Google search, but returns an answer that it has generated based on various sources, which it links to below the answer. But immediately after the launch users began posting examples of extremely wrong answers, including a pizza recipe that included glue and the interesting fact that a dog has played in the NBA.

Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory.

While the pizza recipe is unlikely to convince anyone to squeeze on the Elmer’s, not all of AI Overview’s extremely wrong answers are so obvious—and some have the potential to be quite harmful. Renée DiResta has been tracking online misinformation for many years as the technical research manager at Stanford’s Internet Observatory and has a new book out about the online propagandists who “turn lies into reality.” She has studied the spread of medical misinformation via social media, so IEEE Spectrum spoke to her about whether AI search is likely to bring an onslaught of erroneous medical advice to unwary users.

I know you’ve been tracking disinformation on the Web for many years. Do you expect the introduction of AI-augmented search tools like Google’s AI Overviews to make the situation worse or better?

Renée DiResta: It’s a really interesting question. There are a couple of policies that Google has had in place for a long time that appear to be in tension with what’s coming out of AI-generated search. That’s made me feel like part of this is Google trying to keep up with where the market has gone. There’s been an incredible acceleration in the release of generative AI tools, and we are seeing Big Tech incumbents trying to make sure that they stay competitive. I think that’s one of the things that’s happening here.

We have long known that hallucinations are a thing that happens with large language models. That’s not new. It’s the deployment of them in a search capacity that I think has been rushed and ill-considered because people expect search engines to give them authoritative information. That’s the expectation you have on search, whereas you might not have that expectation on social media.

There are plenty of examples of comically poor results from AI search, things like how many rocks we should eat per day [a response that was drawn from an Onion article]. But I’m wondering if we should be worried about more serious medical misinformation. I came across one blog post about Google’s AI Overviews responses about stem-cell treatments. The problem there seemed to be that the AI search tool was sourcing its answers from disreputable clinics that were offering unproven treatments. Have you seen other examples of that kind of thing?

DiResta: I have. It’s returning information synthesized from the data that it’s trained on. The problem is that it does not seem to be adhering to the same standards that have long gone into how Google thinks about returning search results for health information. So what I mean by that is Google has, for upwards of 10 years at this point, had a search policy called Your Money or Your Life. Are you familiar with that?

I don’t think so.

DiResta: Your Money or Your Life acknowledges that for queries related to finance and health, Google has a responsibility to hold search results to a very high standard of care, and it’s paramount to get the information correct. People are coming to Google with sensitive questions and they’re looking for information to make materially impactful decisions about their lives. They’re not there for entertainment when they’re asking a question about how to respond to a new cancer diagnosis, for example, or what sort of retirement plan they should be subscribing to. So you don’t want content farms and random Reddit posts and garbage to be the results that are returned. You want to have reputable search results.

That framework of Your Money or Your Life has informed Google’s work on these high-stakes topics for quite some time. And that’s why I think it’s disturbing for people to see the AI-generated search results regurgitating clearly wrong health information from low-quality sites that perhaps happened to be in the training data.

So it seems like AI Overviews is not following that same policy—or that’s what it appears like from the outside?

DiResta: That’s how it appears from the outside. I don’t know how they’re thinking about it internally. But those screenshots you’re seeing—a lot of these instances are being traced back to an isolated social media post or a clinic that’s disreputable but exists—are out there on the Internet. It’s not simply making things up. But it’s also not returning what we would consider to be a high-quality result in formulating its response.

I saw that Google responded to some of the problems with a blog post saying that it is aware of these poor results and it’s trying to make improvements. And I can read you the one bullet point that addressed health. It said, “For topics like news and health, we already have strong guardrails in place. In the case of health, we launched additional triggering refinements to enhance our quality protections.” Do you know what that means?

DiResta: That blog post is an explanation that [AI Overviews] isn’t simply hallucinating—the fact that it’s pointing to URLs is supposed to be a guardrail because that enables the user to go and follow the result to its source. This is a good thing. They should be including those sources for transparency and so that outsiders can review them. However, it is also a fair bit of onus to put on the audience, given the trust that Google has built up over time by returning high-quality results in its health information search rankings.

I know one topic that you’ve tracked over the years has been disinformation about vaccine safety. Have you seen any evidence of that kind of disinformation making its way into AI search?

DiResta: I haven’t, though I imagine outside research teams are now testing results to see what appears. Vaccines have been so much a focus of the conversation around health misinformation for quite some time, I imagine that Google has had people looking specifically at that topic in internal reviews, whereas some of these other topics might be less in the forefront of the minds of the quality teams that are tasked with checking if there are bad results being returned.

What do you think Google’s next moves should be to prevent medical misinformation in AI search?

DiResta: Google has a perfectly good policy to pursue. Your Money or Your Life is a solid ethical guideline to incorporate into this manifestation of the future of search. So it’s not that I think there’s a new and novel ethical grounding that needs to happen. I think it’s more ensuring that the ethical grounding that exists remains foundational to the new AI search tools.

Using AI to Clear Land Mines in Ukraine



Stephen Cass: Hello. I’m Stephen Cass, Special Projects Director at IEEE Spectrum. Before starting today’s episode hosted by Eliza Strickland, I wanted to give you all listening out there some news about this show.

This is our last episode of Fixing the Future. We’ve really enjoyed bringing you some concrete solutions to some of the world’s toughest problems, but we’ve decided we’d like to be able to go deeper into topics than we can in the course of a single episode. So we’ll be returning later in the year with a program of limited series that will enable us to do those deep dives into fascinating and challenging stories in the world of technology. I want to thank you all for listening and I hope you’ll join us again. And now, on to today’s episode.

Eliza Strickland: Hi, I’m Eliza Strickland for IEEE Spectrum‘s Fixing the Future podcast. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.IEEE.org/newsletters to subscribe.

Around the world, about 60 countries are contaminated with land mines and unexploded ordnance, and Ukraine is the worst off. Today, about a third of its land, an area the size of Florida, is estimated to be contaminated with dangerous explosives. My guest today is Gabriel Steinberg, who co-founded both the nonprofit Demining Research Community and the startup Safe Pro AI with his friend, Jasper Baur. Their technology uses drones and artificial intelligence to radically speed up the process of finding land mines and other explosives. Okay, Gabriel, thank you so much for joining me on Fixing the Future today.

Gabriel Steinberg: Yeah, thank you for having me.

Strickland: So I want to start by hearing about the typical process for demining, and so the standard operating procedure. What tools do people use? How long does it take? What are the risks involved? All that kind of stuff.

Steinberg: Sure. So humanitarian demining hasn’t changed significantly. There have been evolutions, of course, since its inception around the end of World War I. But mostly, the processes have been the same. People start from a safe location and walk around an area in areas that they know are safe, and try to get as much intelligence about the contamination as they can. They ask villagers or farmers, people who work around the area and live around the area, about accidents and potential sightings of minefields and former battle positions and stuff. The result of this is a very general idea, a polygon, of where the contamination is. After that polygon and some prioritization based on danger to civilians and economic utility, the field goes into clearance. The first part is the non-technical survey, and then this is clearance. Clearance happens one of three ways, usually, but it always ends up with a person on the ground basically doing extreme gardening. They dig out a certain standard amount of the soil, usually 13 centimeters. And they walk around the field with a metal detector and a mine probe. They find the land mines and unexploded ordnance. So that always is how it ends.

To get to that point, you can also use mechanical assets, which are large tillers, and sometimes dogs and other animals are used to walk in lanes across the contaminated polygon to sniff out the land mines and tell the clearance operators where the land mines are.

Strickland: How do you hope that your technology will change this process?

Steinberg: Well, my technology is a drone-based mapping solution, basically. So we provide a software to the humanitarian deminers. They are already flying drones over these areas. Really, it started ramping up in Ukraine. The humanitarian demining organizations have started really adopting drones just because it’s such a massive problem. The extent is so extreme that they need to innovate. So we provide AI and mapping software for the deminers to analyze their drone imagery much more effectively. We hope that this process, or our software, will decrease the amount of time that deminers use to analyze the imagery of the land, thereby more quickly and more effectively constraining the areas with the most contamination. So if you can constrain an area, a polygon with a certainty of contamination and a high density of contamination, then you can deploy the most expensive parts of the clearance process, which are the humans and the machines and the dogs. You can deploy them to a very specific area. You can much more cost-effectively and efficiently demine large areas.
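To make that idea concrete, here is a minimal sketch of how point detections from drone imagery could be collapsed into a padded clearance polygon, assuming the detector outputs coordinates in meters; this is an illustration using the Shapely library, not Safe Pro AI's software.

```python
# Minimal sketch: collapse ordnance detections into a padded polygon that
# clearance teams could prioritize. Assumes projected coordinates in meters.
# Illustrative only; not Safe Pro AI's implementation.
from shapely.geometry import MultiPoint

def contamination_polygon(detections_m, safety_margin_m=10.0):
    """Convex hull around detected ordnance, padded by a safety margin."""
    return MultiPoint(detections_m).convex_hull.buffer(safety_margin_m)

detections = [(100.0, 200.0), (120.0, 230.0), (90.0, 260.0), (140.0, 210.0)]
polygon = contamination_polygon(detections)
print(round(polygon.area), "square meters flagged for clearance assets")
```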

Strickland: Got it. So it doesn’t replace the humans walking around with metal detectors and dogs, but it gets them to the right spots faster.

Steinberg: Exactly. Exactly. At the moment, there is no conception of replacing a human in demining operations, and people that try to push that eventuality are usually disregarded pretty quickly.

Strickland: How did you and your co-founder, Jasper, first start experimenting with the use of drones and AI for detecting explosives?

Steinberg: So it started in 2016 with my partner, Jasper Baur, doing a research project at Binghamton University in the remote sensing and geophysics lab. And the project was to detect a specific anti-personnel land mine, the PFM-1. It’s a Russian-made land mine that was previously found in Afghanistan. It still is found in Afghanistan, but it’s found in much higher quantities right now in Ukraine. And so his project was to detect the PFM-1 anti-personnel land mine using thermal imagery from drones. It sort of snowballed into quite an intensive research project. It had multiple papers from it, multiple researchers, some awards, and most notably, it beat NASA at a particular Tech Briefs competition. So that was quite a morale boost.

And at some point, Jasper had the idea to integrate AI into the project. Rightfully, he saw the real bottleneck as not the detecting of land mines in drone imagery, but the analysis of land mines in drone imagery. And he knew, somehow, that that would really become the issue that everybody is facing. And everybody we talked to in Ukraine is facing that issue. So machine learning really was the key for solving that problem. And I joined the project in 2018 to integrate machine learning into the research project. We had some more papers, some more presentations, and we were nearing the end of our undergraduate degrees in 2020. But at that time, we realized how much the field needed this. We started getting more and more into the mine action field, and realizing how neglected the field was in terms of technology and innovation. And we felt an obligation to bring our technology, really, to the real world instead of just a research project. There were plenty of research projects about this, but we knew that it could be more and that it should. It really should be more. And for some reason, we felt like we had the capability to make that happen.

So we formed a nonprofit, the Demining Research Community, in 2020 to try to raise some funding for this project. Our for-profit end of that, of our endeavors, was acquired by a company called Safe Pro Group in 2023. Yeah, 2023, about one year ago exactly. And the drone and AI technology became Safe Pro AI and our flagship product spotlight. And that’s where we’re bringing the technology to the real world. The Demining Research Community is providing resources for other organizations who want to do a similar thing, and is doing more research into more nascent technologies. But yeah, the real drone and AI stuff that’s happening in the real world right now is through Safe Pro.

Strickland: So in that early undergraduate work, you were using thermal sensors. I know now the Spotlight AI system is using more visual. Can you talk about the different modalities of sensing explosives and the sort of trade-offs you get with them?

Steinberg: Sure. So I feel like I should preface this by saying the more high tech and nascent the technology is, the more people want to see it apply to land mine detection. But really, we have found from the problems that people are facing, by far the most effective modality right now is just visual imagery. People have really good visual sensors built into their face, and you don’t need a trained geophysicist to observe the data and very, very quickly get actionable intelligence. There’s also plenty of other benefits. It’s cheaper, much more readily accessible in Ukraine and around the world to get built-in visual sensors on drones. And yeah, just processing the data, and getting the intelligence from the data, is way easier than anything else.

I’ll talk about three different modalities. Well, I guess I could talk about four. There’s thermal, ground penetrating radar, magnetometry, and lidar. So thermal is what we started with. Thermal is really good at detecting living things, as I’m sure most people can surmise. But it’s also pretty good at detecting land mines, mostly large anti-tank land mines buried under a couple millimeters, or up to a couple centimeters, of soil. It’s not super good at this. The research is still not super conclusive, and you have to do it at a very specific time of day, in the morning and at night when, basically the soil around the land mine heats up faster than the land mine and you cause a thermal anomaly, or the sun causes a thermal anomaly. So it can detect things, land mines, in some amount of depth in certain soils, in certain weather conditions, and can only detect certain types of land mines that are big and hefty enough. So yeah, that’s thermal.

Ground penetrating radar is really good for some things. It’s not really great for land mine detection. You have to have really expensive equipment. It takes a really long time to do the surveys. However, it can get plastic land mines under the surface. And it’s kind of the only modality that can do that with reliability. However, you need to train geophysicists to analyze the data. And a lot of the time, the signatures are really non-unique and there’s going to be a lot of false positives. Magnetometry is the other-- by the way, all of this is airborne that I’m referring to. Ground-based GPR and magnetometry are used in demining of various types, but airborne is really what I’m talking about.

For magnetometry, it’s more developed and more capable than ground penetrating radar. It’s used, actually, in the field in Ukraine in some scenarios, but it’s still very expensive. It needs a trained geophysicist to analyze the data, and the signatures are non-unique. So whether it’s a bottle cap or a small anti-personnel land mine, you really don’t know until you dig it up. However, I think if I were to bet on one of the other modalities becoming increasingly useful in the next couple of years, it would be airborne magnetometry.

Lidar is another modality that people use. It’s pretty quick, also very expensive, but it can reliably map and find surface anomalies. So if you want to find former fighting positions, sometimes an indicator of that is a trench line or foxholes. Lidar is really good at doing that in conflicts from long ago. So there’s a paper that the HALO Trust published of flying a lidar mission over former fighting positions, I believe, in Angola. And they reliably found a former trench line. And from that information, they confirmed that as a hazardous area. Because if there is a former front line on this position, you can pretty reliably say that there is going to be some explosives there.

Strickland: And so you’ve done some experiments with some of these modalities, but in the end, you found that the visual sensor was really the best bet for you guys?

Steinberg: Yeah. It’s different. The requirements are different for different scenarios and different locations, really. Ukraine has a lot of surface ordnance. Yeah. And that’s really the main factor that allows visual imagery to be so powerful.

Strickland: So tell me about what role machine learning plays in your Spotlight AI software system. Did you create a model based on a lot of data showing land mines on the surface?

Steinberg: Yeah. Exactly. We used real-world data from inert, non-explosive items, and flew drone missions over them, and did some physical augmentation and some programmatic augmentation. But all of the items that we are training on are real-life Russian or American ordnance, mostly. We’re also using the real-world data in real minefields that we’re getting from Ukraine right now. That is, obviously, the most valuable data and the most effective in building a machine learning model. But yeah, a lot of our data is from inert explosives, as well.
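As a rough picture of what programmatic augmentation can mean here, the sketch below expands a drone image crop with flips, rotations, and brightness jitter using the Pillow library; it is a generic illustration, not the team's actual pipeline, and in a real detector the bounding-box labels would have to be transformed along with the pixels.

```python
# Generic augmentation sketch for drone image crops of inert ordnance:
# flips, random rotation, and brightness jitter. Illustrative only.
import random
from PIL import Image, ImageEnhance, ImageOps

def augment(img: Image.Image) -> Image.Image:
    if random.random() < 0.5:
        img = ImageOps.mirror(img)  # horizontal flip
    img = img.rotate(random.uniform(0, 360), expand=True)  # any heading is plausible
    return ImageEnhance.Brightness(img).enhance(random.uniform(0.7, 1.3))  # lighting varies

# Usage (paths are placeholders): turn each annotated crop into several variants.
# crops = [Image.open(path) for path in ordnance_crop_paths]
# augmented = [augment(crop) for crop in crops for _ in range(4)]
```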

Strickland: So you’ve talked a little bit about the current situation in Ukraine, but can you tell me more about what people are dealing with there? Are there a lot of areas where the battle has moved on and civilians are trying to reclaim roads or fields?

Steinberg: Yeah. So the fighting is constantly ongoing, obviously, in eastern Ukraine, but I think sometimes there’s a perspective of a stalemate. I think that’s a little misleading. There’s lots of action and violence happening on the front line, which constantly contaminates, cumulatively, the areas that are the front line and the gray zone, as well as areas up to 50 kilometers back from both sides. So there’s constantly artillery shells going into villages and cities along the front line. There’s constantly land mines, new mines, being laid to reinforce the positions. And there’s constantly mortars. And everything is constant. In some fights—I just watched the video yesterday—one of the soldiers said you could not count to five without an explosion going off. And this is just one location in one city along the front. So you can imagine the amount of explosive ordnance that are being fired, and inevitably 10, 20, 30 percent of them are sometimes not exploding upon impact, on top of all the land mines that are being purposely laid and not detonating from a vehicle or a person. These all just remain after the war. They don’t go anywhere. So yeah, Ukraine is really being littered with explosive ordnance and land mines every day.

This past year, there hasn’t been terribly much movement on the front line. But in the Ukrainian counteroffensive in 2020— I guess the last major Ukrainian counteroffensive where areas of Mykolaiv, which is in the southeast, were reclaimed, the civilians started repopulating the city almost immediately. There are definitely some villages that are heavily contaminated, that people just deserted and never came back to, and still haven’t come back to after them being liberated. But a lot of the areas that have been liberated, they’re people’s homes. And even if they’re destroyed, people would rather be in their homes than be refugees. And I mean, I totally understand that. And it just puts the responsibility on the deminers and the Ukrainian government to try to clear the land as fast as possible. Because after large liberations are made, people want to come back almost all the time. So it is a very urgent problem as the lines change and as land is liberated.

Strickland: And I think it was about a year ago that you and Jasper went to Ukraine for a technology demonstration set up by the United Nations. Can you tell me about that, what the task was, and how your technology fared?

Steinberg: Sure. So yeah, the United Nations Development Program invited us to a demonstration in northern Ukraine to see how our technology, and other technologies similar to it, performed at a military training facility there. Everybody who’s doing this kind of thing (and there are not many of us, but there are some other organizations) has their own metrics and their own test fields. Not always, but it would be good if they did. The UNDP said, “No, we want to standardize this and try to give recommendations to the organizations on the ground who are trying to adopt these technologies.” So we had five hours to survey the field and collect as much data as we could. And then we had 72 hours to return the results. We—

Strickland: Sorry. How big was the field?

Steinberg: The field was 25 hectares. The audience at home can convert 25 hectares into football fields; I think it’s about 60. But it’s a large area. We’d never done anything like that. It was really a shock that it was that large an area; I think we’d only done half a hectare at a time up to that point. So yeah, it was pretty daunting. But we slept very, very little in those 72 hours, and as a result produced what I think is one of the best results the UNDP got from that test. We didn’t detect everything, but we detected most of the ordnance and land mines that they had laid. We also detected some they didn’t know were there, because it was a military training facility: there were some mortar rounds that had been fired there that they didn’t know about.

Strickland: And I think Jasper told me that you had to sort of rewrite your software on the fly. You realized that the existing approach wasn’t going to work and you had to pull some all-nighters to recode?

Steinberg: Yeah. I remember us sitting in a Georgian restaurant (Georgia the country, not the state), racking our brains, trying to figure out how we were going to map that amount of land. We had just found out how big the area was going to be, and we were a little bit stunned. So we devised a plan to do it in two stages. In the first stage, we figured out from the drone images where the contaminated regions were. The second stage was to map those areas, and just those areas. Now our software can map the whole thing, and pretty casually too. Not to brag. But at the time we had a lot less development under our belts, so we just had to brute-force it with Georgian food and brainpower.
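
To make that two-stage idea concrete, here is a rough sketch of how such a survey could be structured in code. The function names and threshold are hypothetical placeholders, not the team’s actual software.

# Rough sketch of a two-stage survey: flag contaminated tiles with a cheap pass,
# then run detailed mapping only on the flagged tiles.
def two_stage_survey(tiles, coarse_detector, fine_mapper, score_threshold=0.5):
    """tiles: iterable of (tile_image, geo_bounds) pairs from the drone imagery."""
    flagged = []
    for image, bounds in tiles:              # stage 1: quick pass over every tile
        score = coarse_detector(image)       # estimated probability the tile contains ordnance
        if score >= score_threshold:
            flagged.append((image, bounds))

    detections = []
    for image, bounds in flagged:            # stage 2: detailed mapping, flagged tiles only
        for box, label, confidence in fine_mapper(image):
            detections.append({"geo_bounds": bounds, "box": box,
                               "label": label, "confidence": confidence})
    return detections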

Strickland: You and Jasper just got back from another trip to Ukraine a couple of weeks ago, I think. Can you talk about what you were doing on this trip, and who you met with?

Steinberg: Sure. This trip was much less stressful, although stressful in different ways than the UNDP demo. Our main objectives were to see operations in action. We had never actually been to real minefields before. We’d been in some possibly contaminated areas, but never in a real minefield where you can say, “Here was the Russian position. There are the land mines. Do not go there.” So that was one of the main objectives. It was very powerful for us to see the villages that were destroyed and are denied to their residents because of land mines and unexploded ordnance. It’s impossible to describe how it feels to be there. It’s really impactful, and it makes the work I’m doing feel like it’s no longer a choice. I feel very much obligated to do my absolute best to help these people.

Strickland: Well, I hope your work continues. I hope there’s less and less need for it over time. But yeah, thank you for doing this. It’s important work. And thanks for joining me on Fixing the Future.

Steinberg: My pleasure. Thank you for having me.

Strickland: That was Gabriel Steinberg speaking to me about the technology that he and Jasper Baur developed to help rid the world of land mines. I’m Eliza Strickland, and I hope you’ll join us next time on Fixing the Future.

Andrew Ng: Unbiggen AI



Andrew Ng has serious street cred in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at Stanford University, cofounded Google Brain in 2011, and then served for three years as chief scientist for Baidu, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told IEEE Spectrum in an exclusive Q&A.


Ng’s current efforts are focused on his company Landing AI, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the data-centric AI movement, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.


The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an unsustainable trajectory. Do you agree that it can’t go on that way?

Andrew Ng: This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.

When you say you want a foundation model for computer vision, what do you mean by that?

Ng: This is a term coined by Percy Liang and some of my friends at Stanford to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, GPT-3 is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.

What needs to happen for someone to build a foundation model for video?

Ng: I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.

Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.


It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.

Ng: Over a decade ago, when I proposed starting the Google Brain project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.


I remember when my students and I published the first NeurIPS workshop paper advocating using CUDA, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.

I expect they’re both convinced now.

Ng: I think so, yes.

Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”


How do you define data-centric AI, and why do you consider it a movement?

Ng: Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.

When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make them a systematic engineering discipline.

The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a data-centric AI workshop at NeurIPS, and I was really delighted at the number of authors and presenters that showed up.
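
The shift Ng describes (hold the architecture fixed and spend the iterations improving the data) can be summarized as a simple loop. The sketch below assumes caller-supplied helper functions and is not a description of Landing AI’s actual workflow.

# Sketch of a data-centric iteration loop: the architecture stays fixed while
# each round improves the data. The helpers are placeholders supplied by the caller.
def data_centric_loop(make_model, dataset, evaluate_by_slice, improve_data, rounds=5):
    """evaluate_by_slice(model, val) -> {slice_name: error_rate}
    improve_data(dataset, slice_name) -> dataset with that slice relabeled, augmented, or expanded"""
    model = None
    for _ in range(rounds):
        model = make_model()                     # same architecture every round
        model.fit(dataset.train)
        errors = evaluate_by_slice(model, dataset.validation)
        worst = max(errors, key=errors.get)      # slice with the highest error rate
        dataset = improve_data(dataset, worst)   # fix the data, not the code
    return model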

You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?

Ng: You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.

When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?

Ng: Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of RetinaNet. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.


For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
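
One simple way to surface that kind of inconsistency, sketched here in Python and not a description of Landing AI’s tooling, is to compare each example’s assigned label against a model’s out-of-fold prediction and review the examples where the two disagree most.

import numpy as np

def flag_suspect_labels(pred_probs, labels, top_k=50):
    """pred_probs: (n_examples, n_classes) out-of-fold predicted probabilities.
    labels: (n_examples,) integer labels assigned by annotators.
    Returns indices of the examples whose given labels the model finds least
    plausible; these are good candidates for human review and relabeling."""
    confidence_in_given_label = pred_probs[np.arange(len(labels)), labels]
    return np.argsort(confidence_in_given_label)[:top_k]   # lowest confidence first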

Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?

Ng: Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, Olga Russakovsky gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed Mary Gray’s presentation, which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like Datasheets for Datasets also seem like an important piece of the puzzle.

One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.

When you talk about engineering the data, what do you mean exactly?

Ng: In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a Jupyter notebook and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.

For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
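
The car-noise example amounts to slicing the error rate by a metadata tag and collecting data only for the worst slice. A minimal sketch follows; the tag names are illustrative assumptions.

from collections import defaultdict

def error_rate_by_tag(examples):
    """examples: iterable of dicts such as {"tag": "car_noise", "correct": False}.
    Returns the error rate per tag, so the worst-performing slice shows where
    collecting more data is actually worth the expense."""
    totals, errors = defaultdict(int), defaultdict(int)
    for ex in examples:
        totals[ex["tag"]] += 1
        errors[ex["tag"]] += 0 if ex["correct"] else 1
    return {tag: errors[tag] / totals[tag] for tag in totals}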


What about using synthetic data? Is that often a good solution?

Ng: I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, Anima Anandkumar gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.

Do you mean that synthetic data would allow you to try the model on more data sets?

Ng: Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
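
The targeting step Ng describes might look like the following sketch: pick out the weak categories from error analysis and generate synthetic examples only for them. The generator call is a stand-in for whatever simulation or rendering tool is actually used.

def targeted_synthesis(per_class_error, generate_fn, per_class_budget=500, error_threshold=0.10):
    """per_class_error: {class_name: error_rate} from error analysis.
    generate_fn(class_name, n): stand-in for a synthetic-data generator.
    Generates extra examples only for categories performing worse than the threshold."""
    new_examples = []
    for cls, err in sorted(per_class_error.items(), key=lambda kv: -kv[1]):
        if err > error_threshold:                # e.g. the pit-mark category doing poorly
            new_examples.extend(generate_fn(cls, per_class_budget))
    return new_examples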


Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.


To make these issues more concrete, can you walk me through an example? When a company approaches Landing AI and says it has a problem with visual inspection, how do you onboard them and work toward deployment?

Ng: When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the LandingLens platform. We often advise them on the methodology of data-centric AI and help them label the data.

One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, and when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.

How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?

Ng: It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
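
A drift flag can be as simple as comparing a summary statistic of recent production images against the training distribution. The statistic and threshold below are illustrative assumptions, not a description of LandingLens.

import numpy as np

def drift_flag(train_values, recent_values, z_threshold=3.0):
    """Compare a per-image summary statistic (for example, mean brightness) between
    the training set and recent production images. Flags drift when the recent mean
    sits more than z_threshold standard errors from the training mean."""
    mu, sigma = np.mean(train_values), np.std(train_values)
    standard_error = sigma / np.sqrt(len(recent_values))
    z = abs(np.mean(recent_values) - mu) / max(standard_error, 1e-9)
    return z > z_threshold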

In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?

So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.

Ng: Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.

Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?

Ng: In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.


This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist.”
