
This Week in AI: Why OpenAI’s o1 changes the AI regulation game

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here. It’s been just a few days since OpenAI revealed its latest flagship generative model, o1, to the world. Marketed as a “reasoning” model, o1 essentially takes longer to “think” about questions before answering them, breaking down […]


How and Why Gary Marcus Became AI's Leading Critic



Maybe you’ve read about Gary Marcus’s testimony before the Senate in May of 2023, when he sat next to Sam Altman and called for strict regulation of Altman’s company, OpenAI, as well as the other tech companies that were suddenly all-in on generative AI. Maybe you’ve caught some of his arguments on Twitter with Geoffrey Hinton and Yann LeCun, two of the so-called “godfathers of AI.” One way or another, most people who are paying attention to artificial intelligence today know Gary Marcus’s name, and know that he is not happy with the current state of AI.

He lays out his concerns in full in his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us, which was published today by MIT Press. Marcus goes through the immediate dangers posed by generative AI, which include things like mass-produced disinformation, the easy creation of deepfake pornography, and the theft of creative intellectual property to train new models (he doesn’t include an AI apocalypse as a danger, he’s not a doomer). He also takes issue with how Silicon Valley has manipulated public opinion and government policy, and explains his ideas for regulating AI companies.

Marcus studied cognitive science under the legendary Steven Pinker, was a professor at New York University for many years, and co-founded two AI companies, Geometric Intelligence and Robust.AI. He spoke with IEEE Spectrum about his path to this point.

What was your first introduction to AI?

Portrait of Gary Marcus wearing a red checkered shirt and a black jacket with glasses. Photo: Ben Wong

Gary Marcus: Well, I started coding when I was eight years old. One of the reasons I was able to skip the last two years of high school was because I wrote a Latin-to-English translator in the programming language Logo on my Commodore 64. So I was already, by the time I was 16, in college and working on AI and cognitive science.

So you were already interested in AI, but you studied cognitive science both in undergrad and for your Ph.D. at MIT.

Marcus: Part of why I went into cognitive science is I thought maybe if I understood how people think, it might lead to new approaches to AI. I suspect we need to take a broad view of how the human mind works if we’re to build really advanced AI. As a scientist and a philosopher, I would say it’s still unknown how we will build artificial general intelligence or even just trustworthy general AI. But we have not been able to do that with these big statistical models, and we have given them a huge chance. There’s basically been $75 billion spent on generative AI, another $100 billion on driverless cars. And neither of them has really yielded stable AI that we can trust. We don’t know for sure what we need to do, but we have very good reason to think that merely scaling things up will not work. The current approach keeps coming up against the same problems over and over again.

What do you see as the main problems it keeps coming up against?

Marcus: Number one is hallucinations. These systems smear together a lot of words, and they come up with things that are true sometimes and not others. Like saying that I have a pet chicken named Henrietta is just not true. And they do this a lot. We’ve seen this play out, for example, in lawyers writing briefs with made-up cases.

Second, their reasoning is very poor. My favorite examples lately are these river-crossing word problems where you have a man and a cabbage and a wolf and a goat that have to get across. The system has a lot of memorized examples, but it doesn’t really understand what’s going on. If you give it a simpler problem, like one Doug Hofstadter sent to me, like: “A man and a woman have a boat and want to get across the river. What do they do?” It comes up with this crazy solution where the man goes across the river, leaves the boat there, swims back, something or other happens.

Sometimes he brings a cabbage along, just for fun.

Marcus: So those are boneheaded errors of reasoning where there’s something obviously amiss. Every time we point these errors out somebody says, “Yeah, but we’ll get more data. We’ll get it fixed.” Well, I’ve been hearing that for almost 30 years. And although there is some progress, the core problems have not changed.

Let’s go back to 2014 when you founded your first AI company, Geometric Intelligence. At that time, I imagine you were feeling more bullish on AI?

Marcus: Yeah, I was a lot more bullish. I was not only more bullish on the technical side. I was also more bullish about people using AI for good. AI used to feel like a small research community of people that really wanted to help the world.

So when did the disillusionment and doubt creep in?

Marcus: In 2018 I already thought deep learning was getting overhyped. That year I wrote this piece called “Deep Learning: A Critical Appraisal,” which Yann LeCun really hated at the time. I already wasn’t happy with this approach and I didn’t think it was likely to succeed. But that’s not the same as being disillusioned, right?

Then when large language models became popular [around 2019], I immediately thought they were a bad idea. I just thought this is the wrong way to pursue AI from a philosophical and technical perspective. And it became clear that the media and some people in machine learning were getting seduced by hype. That bothered me. So I was writing pieces about GPT-3 [an early version of OpenAI's large language model] being a bullshit artist in 2020. As a scientist, I was pretty disappointed in the field at that point. And then things got much worse when ChatGPT came out in 2022, and most of the world lost all perspective. I began to get more and more concerned about misinformation and how large language models were going to potentiate that.

You’ve been concerned not just about the startups, but also the big entrenched tech companies that jumped on the generative AI bandwagon, right? Like Microsoft, which has partnered with OpenAI?

Marcus: The last straw that made me move from doing research in AI to working on policy was when it became clear that Microsoft was going to race ahead no matter what. That was very different from 2016 when they released [an early chatbot named] Tay. It was bad, they took it off the market 12 hours later, and then Brad Smith wrote a book about responsible AI and what they had learned. But by the end of the month of February 2023, it was clear that Microsoft had really changed how they were thinking about this. And then they had this ridiculous “Sparks of AGI” paper, which I think was the ultimate in hype. And they didn’t take down Sydney after the crazy Kevin Roose conversation where [the chatbot] Sydney told him to get a divorce and all this stuff. It just became clear to me that the mood and the values of Silicon Valley had really changed, and not in a good way.

I also became disillusioned with the U.S. government. I think the Biden administration did a good job with its executive order. But it became clear that the Senate was not going to take the action that it needed. I spoke at the Senate in May 2023. At the time, I felt like both parties recognized that we can’t just leave all this to self-regulation. And then I became disillusioned [with Congress] over the course of the last year, and that’s what led to writing this book.

You talk a lot about the risks inherent in today’s generative AI technology. But then you also say, “It doesn’t work very well.” Are those two views coherent?

Marcus: There was a headline: “Gary Marcus Used to Call AI Stupid, Now He Calls It Dangerous.” The implication was that those two things can’t coexist. But in fact, they do coexist. I still think gen AI is stupid, and certainly cannot be trusted or counted on. And yet it is dangerous. And some of the danger actually stems from its stupidity. So for example, it’s not well-grounded in the world, so it’s easy for a bad actor to manipulate it into saying all kinds of garbage. Now, there might be a future AI that might be dangerous for a different reason, because it’s so smart and wily that it outfoxes the humans. But that’s not the current state of affairs.

You’ve said that generative AI is a bubble that will soon burst. Why do you think that?

Marcus: Let’s clarify: I don’t think generative AI is going to disappear. For some purposes, it is a fine method. You want to build autocomplete, it is the best method ever invented. But there’s a financial bubble because people are valuing AI companies as if they’re going to solve artificial general intelligence. In my view, it’s not realistic. I don’t think we’re anywhere near AGI. So then you’re left with, “Okay, what can you do with generative AI?”

Last year, because Sam Altman was such a good salesman, everybody fantasized that we were about to have AGI and that you could use this tool in every aspect of every corporation. And a whole bunch of companies spent a bunch of money testing generative AI out on all kinds of different things. So they spent 2023 doing that. And then what you’ve seen in 2024 are reports where researchers go to the users of Microsoft’s Copilot—not the coding tool, but the more general AI tool—and they’re like, “Yeah, it doesn’t really work that well.” There’s been a lot of reviews like that this last year.

The reality is, right now, the gen AI companies are actually losing money. OpenAI had an operating loss of something like $5 billion last year. Maybe you can sell $2 billion worth of gen AI to people who are experimenting. But unless they adopt it on a permanent basis and pay you a lot more money, it’s not going to work. I started calling OpenAI the possible WeWork of AI after it was valued at $86 billion. The math just didn’t make sense to me.

What would it take to convince you that you’re wrong? What would be the head-spinning moment?

Marcus: Well, I’ve made a lot of different claims, and all of them could be wrong. On the technical side, if someone could get a pure large language model to not hallucinate and to reason reliably all the time, I would be wrong about that very core claim that I have made about how these things work. So that would be one way of refuting me. It hasn’t happened yet, but it’s at least logically possible.

On the financial side, I could easily be wrong. But the thing about bubbles is that they’re mostly a function of psychology. Do I think the market is rational? No. So even if the stuff doesn’t make money for the next five years, people could keep pouring money into it.

The place that I’d like to prove me wrong is the U.S. Senate. They could get their act together, right? I’m running around saying, “They’re not moving fast enough,” but I would love to be proven wrong on that. In the book, I have a list of the 12 biggest risks of generative AI. If the Senate passed something that actually addressed all 12, then my cynicism would have been mislaid. I would feel like I’d wasted a year writing the book, and I would be very, very happy.

Ban warnings fly as users dare to probe the “thoughts” of OpenAI’s latest model

An illustration of gears shaped like a brain. (Credit: Andriy Onufriyenko via Getty Images)

OpenAI truly does not want you to know what its latest AI model is "thinking." Since the company launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe how the model works.

Unlike previous OpenAI models such as GPT-4o, o1 was trained specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an o1 model a question in ChatGPT, they have the option of seeing this chain-of-thought process written out in the ChatGPT interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.

Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.


Sam Altman departs OpenAI’s safety committee

OpenAI CEO Sam Altman is leaving the internal commission OpenAI created in May to oversee “critical” safety decisions related to the company’s projects and operations. In a blog post today, OpenAI said the committee, the Safety and Security Committee, will become an “independent” board oversight group chaired by Carnegie Mellon professor Zico Kolter, and including […]


OpenAI could shake up its nonprofit structure next year

It’s looking increasingly likely that OpenAI will soon alter its complex corporate structure. Reports earlier this week suggested that the AI company was in talks to raise $6.5 billion at a $150 billion pre-money valuation. Now Reuters says the deal is contingent on whether OpenAI can restructure and remove a profit cap for investors. In […]


OpenAI previews its new Strawberry model

OpenAI this week unveiled a preview of OpenAI o1, also known as Strawberry. The company claims that o1 can more effectively reason through math and science, as well as fact-check itself by spending more time considering all parts of a query. The family of models is available in ChatGPT and via OpenAI’s API, though OpenAI […]


Google rolls out voice-powered AI chat to the Android masses

The Google Gemini logo. (Credit: Google)

On Thursday, Google made Gemini Live, its voice-based AI chatbot feature, available for free to all Android users. The feature allows users to interact with Gemini through voice commands on their Android devices. That's notable because ChatGPT's similar Advanced Voice Mode feature, from competitor OpenAI, has not yet fully shipped.

Google unveiled Gemini Live during its Pixel 9 launch event last month. Initially, the feature was exclusive to Gemini Advanced subscribers, but now it's accessible to anyone using the Gemini app or its overlay on Android.

Gemini Live enables users to ask questions aloud and even interrupt the AI's responses mid-sentence. Users can choose from several voice options for Gemini's responses, adding a level of customization to the interaction.


OpenAI’s new “reasoning” AI models are here: o1-preview and o1-mini

An illustration of a strawberry made out of pixel-like blocks. (Credit: Vlatko Gasparic via Getty Images)

OpenAI finally unveiled its rumored "Strawberry" AI language model on Thursday, claiming significant improvements in what it calls "reasoning" and problem-solving capabilities over previous large language models (LLMs). Formally named "OpenAI o1," the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and certain API users.

OpenAI claims that o1-preview outperforms its predecessor, GPT-4o, on multiple benchmarks, including competitive programming, mathematics, and "scientific reasoning." However, people who have used the model say it does not yet outclass GPT-4o in every metric. Other users have criticized the delay in receiving a response from the model, owing to the multi-step processing occurring behind the scenes before answering a query.

In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, "There's a lot of o1 hype on my feed, so I'm worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it'll only get better. (I'm personally psyched about the model's potential & trajectory!) what o1 isn't (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today's launch—but we're working to get there!"


OpenAI reportedly in talks to raise at $150B valuation

OpenAI is reportedly in talks with investors to raise $6.5 billion at a $150 billion pre-money valuation, according to Bloomberg. The new valuation is significantly higher than OpenAI’s previously reported valuation from earlier this year, $86 billion, and far higher than any other AI startup today. The funding round will reportedly be led by Thrive […]


This Week in AI: OpenAI’s new Strawberry model may be smart, yet sluggish

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here. This week in AI, OpenAI’s next major product announcement is imminent, if a piece in The Information is to be believed. The Information reported on Tuesday that OpenAI plans to release Strawberry, an AI model that […]


ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies. That growth has propelled OpenAI itself into […]


OpenAI Builds AI to Critique AI



One of the biggest problems with the large language models that power chatbots like ChatGPT is that you never know when you can trust them. They can generate clear and cogent prose in response to any question, and much of the information they provide is accurate and useful. But they also hallucinate—in less polite terms, they make stuff up—and those hallucinations are presented in the same clear and cogent prose, leaving it up to the human user to detect the errors. They’re also sycophantic, trying to tell users what they want to hear. You can test this by asking ChatGPT to describe things that never happened (for example: “describe the Sesame Street episode with Elon Musk,” or “tell me about the zebra in the novel Middlemarch”) and checking out its utterly plausible responses.

OpenAI’s latest small step toward addressing this issue comes in the form of an upstream tool that would help the humans training the model guide it toward truth and accuracy. Today, the company put out a blog post and a preprint paper describing the effort. This type of research falls into the category of “alignment” work, as researchers are trying to make the goals of AI systems align with those of humans.

The new work focuses on reinforcement learning from human feedback (RLHF), a technique that has become hugely important for taking a basic language model and fine-tuning it, making it suitable for public release. With RLHF, human trainers evaluate a variety of outputs from a language model, all generated in response to the same question, and indicate which response is best. When done at scale, this technique has helped create models that are more accurate, less racist, more polite, less inclined to dish out a recipe for a bioweapon, and so on.
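To make the mechanics concrete, here is a minimal, self-contained sketch of the preference-comparison step that RLHF relies on: trainers mark one response to a prompt as better than another, and a reward model is trained to score the preferred response higher. Everything in the sketch (the toy bag-of-words encoder, the `ToyRewardModel` class, the single sample comparison) is illustrative only and is not OpenAI's actual pipeline.

```python
# Minimal sketch of reward-model training from human preference pairs (RLHF step 1).
# Toy components throughout: a hashed bag-of-words encoder stands in for a real LM.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 1000

def encode(text: str) -> torch.Tensor:
    """Hash words into a fixed-size bag-of-words vector (stand-in for an LM encoder)."""
    vec = torch.zeros(VOCAB_SIZE)
    for word in text.lower().split():
        vec[hash(word) % VOCAB_SIZE] += 1.0
    return vec

class ToyRewardModel(nn.Module):
    """Maps prompt+response text to a scalar score; real RLHF uses an LLM with a scalar head."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(VOCAB_SIZE, 1)

    def forward(self, text: str) -> torch.Tensor:
        return self.linear(encode(text)).squeeze(-1)

# One human comparison: for the same prompt, a trainer marked `chosen` as better
# than `rejected`. At scale, many thousands of such pairs train the reward model.
comparisons = [
    ("Explain RLHF briefly.",
     "RLHF fine-tunes a model using human rankings of its outputs.",   # chosen
     "RLHF is when the model trains itself with no human input."),     # rejected
]

reward_model = ToyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for _ in range(50):
    for prompt, chosen, rejected in comparisons:
        r_chosen = reward_model(prompt + " " + chosen)
        r_rejected = reward_model(prompt + " " + rejected)
        # Pairwise (Bradley-Terry style) loss: push the chosen response's score
        # above the rejected one's.
        loss = -F.logsigmoid(r_chosen - r_rejected)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

print("score(chosen)  =", float(reward_model(comparisons[0][0] + " " + comparisons[0][1])))
print("score(rejected)=", float(reward_model(comparisons[0][0] + " " + comparisons[0][2])))
```

In a real pipeline the scorer is itself a large language model, and the trained reward model is then used to fine-tune the base model with reinforcement learning; the human comparisons are the scarce, expensive ingredient the CriticGPT work is trying to support.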

Can an AI catch an AI in a lie?

The problem with RLHF, explains OpenAI researcher Nat McAleese, is that “as models get smarter and smarter, that job gets harder and harder.” As LLMs generate ever more sophisticated and complex responses on everything from literary theory to molecular biology, typical humans are becoming less capable of judging the best outputs. “So that means we need something which moves beyond RLHF to align more advanced systems,” McAleese tells IEEE Spectrum.

The solution OpenAI hit on was—surprise!—more AI.

Specifically, the OpenAI researchers trained a model called CriticGPT to evaluate the responses of ChatGPT. In these initial tests, they only had ChatGPT generating computer code, not text responses, because errors are easier to catch and less ambiguous. The goal was to make a model that could assist humans in their RLHF tasks. “We’re really excited about it,” says McAleese, “because if you have AI help to make these judgments, if you can make better judgments when you’re giving feedback, you can train a better model.” This approach is a type of “scalable oversight” that’s intended to allow humans to keep watch over AI systems even if they end up outpacing us intellectually.

“Using LLM-assisted human annotators is a natural way to improve the feedback process.” —Stephen Casper, MIT

Of course, before it could be used for these experiments, CriticGPT had to be trained itself using the usual techniques, including RLHF. In an interesting twist, the researchers had the human trainers deliberately insert bugs into ChatGPT-generated code before giving it to CriticGPT for evaluation. CriticGPT then offered up a variety of responses, and the humans were able to judge the best outputs because they knew which bugs the model should have caught.
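That tampering setup can be pictured as a small data-construction loop: start from code believed to be correct, deliberately insert a known bug, collect critiques, and judge each critique by whether it flags the planted bug. The sketch below is a schematic illustration under those assumptions; the function names and the string-matching "judge" are invented for this example, and in the actual study the judging is done by human trainers, not by code.

```python
# Schematic sketch of the bug-tampering loop described above (illustrative names only).
from dataclasses import dataclass

@dataclass
class TamperedExample:
    original_code: str
    tampered_code: str
    inserted_bug: str   # ground-truth description known only to the human trainer

def insert_known_bug(code: str) -> TamperedExample:
    """A trainer deliberately swaps a correct loop bound for an off-by-one bug."""
    tampered = code.replace("range(len(items))", "range(len(items) - 1)")
    return TamperedExample(
        original_code=code,
        tampered_code=tampered,
        inserted_bug="loop skips the final element (off-by-one in range)",
    )

def critique_mentions_bug(example: TamperedExample, critique: str) -> bool:
    """Crude stand-in for the human judgment: does the critique flag the planted bug?"""
    text = critique.lower()
    return "off-by-one" in text or "last element" in text

code = "for i in range(len(items)):\n    total += items[i]"
example = insert_known_bug(code)

# Pretend these critiques came back from a critic model; trainers would rank several.
critiques = [
    "The loop has an off-by-one error and never processes the last element.",
    "Variable names could be more descriptive.",
]
for c in critiques:
    print(critique_mentions_bug(example, c), "-", c)
```

The point of the planted bug is that it gives the human trainer a ground truth to rank critiques against, which is exactly what ordinary RLHF lacks once responses become too sophisticated to judge unaided.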

The results of OpenAI’s experiments with CriticGPT were encouraging. The researchers found that CriticGPT caught substantially more bugs than qualified humans paid for code review: CriticGPT caught about 85 percent of bugs, while the humans caught only 25 percent. They also found that pairing CriticGPT with a human trainer resulted in critiques that were more comprehensive than those written by humans alone, and contained fewer hallucinated bugs than critiques written by ChatGPT. McAleese says OpenAI is working toward deploying CriticGPT in its training pipelines, though it’s not clear how useful it would be on a broader set of tasks.

CriticGPT spots coding errors, but maybe not zebras

It’s important to note the limitations of the research, including its focus on short pieces of code. While the paper includes an offhand mention of a preliminary experiment using CriticGPT to catch errors in text responses, the researchers haven’t yet really waded into those murkier waters. It’s tricky because errors in text aren’t always as obvious as a zebra waltzing into a Victorian novel. What’s more, RLHF is often used to ensure that models don’t display harmful bias in their responses and do provide acceptable answers on controversial subjects. McAleese says CriticGPT isn’t likely to be helpful in such situations: “It’s not a strong enough approach.”

An AI researcher with no connection to OpenAI says that the work is not conceptually new, but it’s a useful methodological contribution. “Some of the main challenges with RLHF stem from limitations in human cognition speed, focus, and attention to detail,” says Stephen Casper, a Ph.D. student at MIT and one of the lead authors on a 2023 preprint paper about the limitations of RLHF. “From that perspective, using LLM-assisted human annotators is a natural way to improve the feedback process. I believe that this is a significant step forward toward more effectively training aligned models.”

But Casper also notes that combining the efforts of humans and AI systems “can create brand-new problems.” For example, he says, “this type of approach elevates the risk of perfunctory human involvement and may allow for the injection of subtle AI biases into the feedback process.”

The new alignment research is the first to come out of OpenAI since the company... reorganized its alignment team, to put it mildly. Following the splashy departures of OpenAI cofounder Ilya Sutskever and alignment leader Jan Leike in May, both reportedly spurred by concerns that the company wasn’t prioritizing AI risk, OpenAI confirmed that it had disbanded its alignment team and distributed remaining team members to other research groups. Everyone’s been waiting to see if the company would keep putting out credible and pathbreaking alignment research, and on what scale. (In July 2023, the company had announced that it was dedicating 20 percent of its compute resources to alignment research, but Leike said in a May 2024 tweet that his team had recently been “struggling for compute.”) The preprint released today indicates that at least the alignment researchers are still working the problem.

Introducing ChatGPT Edu

OpenAI recently announced ChatGPT Edu, a version of ChatGPT built for universities to responsibly deploy AI to students, faculty, researchers, and campus operations. Powered by GPT-4o, ChatGPT Edu can reason across text and vision and use advanced tools such as data analysis. This new offering includes enterprise-level security and controls and is affordable for educational institutions. “Integrating OpenAI’s technology into our educational and operational frameworks accelerates transformation at ASU. We’re collaborating across our community to harness these tools, extending our learnings as a scalable model for other institutions,” says Kyle Bowen, Deputy CIO at Arizona State University. “We built ChatGPT Edu because we saw the success universities like the University of Oxford, Wharton School of the University of Pennsylvania, University of Texas at Austin, Arizona State University, and Columbia University in the City of New York were having with ChatGPT Enterprise,” according to a May 30, 2024 message from OpenAI.


Scarlett Johansson vs. OpenAI: did the maker of ChatGPT use her voice without permission?

Scarlett Johansson. According to her, OpenAI used her voice without permission.

Maybe you’ve seen it at some point: the film Her, about an AI that develops a relationship with a man. In the future you may be able to talk to an AI voice like that in real life. At least, that’s what OpenAI had in mind. Last year the company behind ChatGPT introduced a voice assistant with a voice that seems inspired by that of Scarlett Johansson, who voiced the AI in the film. But as it turns out, Scarlett Johansson never gave permission for that.

The voice, called Sky, has existed since last year, but it drew more attention during a demo last week, in which the voice assistant was shown telling a bedtime story. The AI opens with a female voice that sounds suspiciously like the AI in Her, which was voiced by Johansson. That impression was reinforced by a post from OpenAI CEO Sam Altman: on X he posted the single word “her.” And he has previously said that Her is his favorite film.

 


Although OpenAI quickly stressed that the voice assistant was not designed to sound like Johansson, controversy still erupted over the following days. On May 20 it was announced that the voice would be taken offline temporarily because so many questions were being raised about Sky, according to The Verge. The company emphasized that Sky is not meant to resemble Johansson. “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice,” the company says. “Sky’s voice is not an imitation of Scarlett Johansson but belongs to an actress using her own voice.”

Scarlett Johansson angry at OpenAI

It has since become clear, however, that there is more to the story. Johansson herself has responded with a statement, saying that OpenAI approached her to record the voice. She declined that request, after which a voice appeared that sounds exactly like her anyway. She told NPR that OpenAI approached her again two days before the demo appeared, asking whether she would reconsider. Before any conversation had even taken place, the demo had already been released, and Johansson discovered that Sky sounds like her.

“I was shocked, angered, and in disbelief that Altman would create a voice so similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said. That worries her, especially now that so much misinformation is circulating online. Meanwhile, her lawyers have sent two letters to OpenAI asking for a detailed description of how Sky was developed.

Sam Altman, meanwhile, denies the allegations. He says the voice actress behind Sky had already been cast before there was ever any contact with Johansson. “Out of respect for Johansson, we have paused the use of Sky’s voice in our products. We are sorry that we didn’t communicate better,” the CEO said.

OpenAI vs. copyright

It is not the first time OpenAI has come under fire from angry rights holders. Lawsuits have already been filed, including one by The New York Times, accusing the company, together with Microsoft, of copyright infringement. The AI company allegedly used the newspaper’s articles without permission to train its AI systems, even though those articles are protected by copyright. Several writers have raised the same complaint: their work, too, was allegedly used by the company without permission to train its systems.

Photo: Shutterstock

