Today — 20 September 2024

How to stop LinkedIn from training AI on your data

20 September 2024 at 00:00

(Image credit: NurPhoto / Contributor)

LinkedIn admitted Wednesday that it has been training its own AI on many users' data without seeking consent. There is now no way for users to opt out of training that has already occurred, as LinkedIn limits opt-outs to future AI training only.

In a blog detailing updates coming on November 20, LinkedIn general counsel Blake Lawit confirmed that LinkedIn's user agreement and privacy policy will be changed to better explain how users' personal data powers AI on the platform.

Under the new privacy policy, LinkedIn now informs users that "we may use your personal data... [to] develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others."


Yesterday — 19 September 2024

Amazon releases a video generator — but only for ads

19 September 2024 at 16:01

Like its rival, Google, Amazon has launched an AI-powered video generator — but it’s only for advertisers at the moment, and somewhat limited in what it can do. Today at its Accelerate conference, Amazon unveiled Video generator, which turns a single product image into video showcases of that product after some amount of processing. The […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Fal.ai, which hosts media-generating AI models, raises $23M from a16z and others

18 September 2024 at 22:14

Fal.ai, a dev-focused platform for AI-generated audio, video, and images, today revealed that it’s raised $23 million in funding from investors including Andreessen Horowitz (a16z), Black Forest Labs co-founder Robin Rombach, and Perplexity CEO Aravind Srinivas. It’s a two-round deal: $14 million of Fal’s total came from a Series A tranche led by Kindred Ventures; […]


Before yesterday

LinkedIn scraped user data for training before updating its terms of service

18 September 2024 at 19:15

LinkedIn may have trained AI models on user data without updating its terms. LinkedIn users in the U.S. — but not the EU, EEA, or Switzerland, likely due to those regions’ data privacy rules — have an opt-out toggle in their settings screen disclosing that LinkedIn scrapes personal data to train “content creation AI models.” […]


Generative AI startup Runway inks deal with a major Hollywood studio

18 September 2024 at 15:36

Runway, a startup developing AI video tools, including video-generating models, has partnered with Lionsgate — the studio behind the “John Wick” and “Twilight” franchises — to train a custom video model on Lionsgate’s movie catalog. Lionsgate vice chair Michael Burns said in a statement that the studio’s “filmmakers, directors and other creative talent” will get […]


AWS shuts down DeepComposer, its MIDI keyboard for AI music

17 September 2024 at 19:51

AWS’ weird AI-powered keyboard experiment, DeepComposer, is no more. In a blog post today, the company announced it’s shutting down the 5-year-old DeepComposer, a physical MIDI piano and AWS service that let users compose songs with the help of generative AI. “After careful consideration, we have made the decision to end support for AWS DeepComposer,” […]


Snap’s new AI feature lets you create Snapchat Lenses by simply describing them

17 September 2024 at 19:50

Snap, Snapchat’s parent company, is expanding its suite of AI tools for creators. At this year’s Snap Partner Summit in Santa Monica, California, Snap announced a new feature, Easy Lens, that translates plain-English descriptions into Lenses, Snap’s brand of augmented reality (AR) objects, 3D effects, characters, and transformations for photos and videos. Available in Snap’s […]


Google will begin flagging AI-generated images in Search later this year

17 September 2024 at 18:35

Google says that it plans to roll out changes to Google Search to make clearer which images in results were AI generated — or edited by AI tools. In the next few months, Google will begin to flag AI-generated and -edited images in the “About this image” window on Search, Google Lens, and the Circle […]


How and Why Gary Marcus Became AI's Leading Critic



Maybe you’ve read about Gary Marcus’s testimony before the Senate in May of 2023, when he sat next to Sam Altman and called for strict regulation of Altman’s company, OpenAI, as well as the other tech companies that were suddenly all-in on generative AI. Maybe you’ve caught some of his arguments on Twitter with Geoffrey Hinton and Yann LeCun, two of the so-called “godfathers of AI.” One way or another, most people who are paying attention to artificial intelligence today know Gary Marcus’s name, and know that he is not happy with the current state of AI.

He lays out his concerns in full in his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us, which was published today by MIT Press. Marcus goes through the immediate dangers posed by generative AI, which include things like mass-produced disinformation, the easy creation of deepfake pornography, and the theft of creative intellectual property to train new models (he doesn’t include an AI apocalypse as a danger, he’s not a doomer). He also takes issue with how Silicon Valley has manipulated public opinion and government policy, and explains his ideas for regulating AI companies.

Marcus studied cognitive science under the legendary Steven Pinker, was a professor at New York University for many years, and co-founded two AI companies, Geometric Intelligence and Robust.AI. He spoke with IEEE Spectrum about his path to this point.

What was your first introduction to AI?

Gary Marcus (Photo: Ben Wong)

Gary Marcus: Well, I started coding when I was eight years old. One of the reasons I was able to skip the last two years of high school was because I wrote a Latin-to-English translator in the programming language Logo on my Commodore 64. So I was already, by the time I was 16, in college and working on AI and cognitive science.

So you were already interested in AI, but you studied cognitive science both in undergrad and for your Ph.D. at MIT.

Marcus: Part of why I went into cognitive science is I thought maybe if I understood how people think, it might lead to new approaches to AI. I suspect we need to take a broad view of how the human mind works if we’re to build really advanced AI. As a scientist and a philosopher, I would say it’s still unknown how we will build artificial general intelligence or even just trustworthy general AI. But we have not been able to do that with these big statistical models, and we have given them a huge chance. There’s basically been $75 billion spent on generative AI, another $100 billion on driverless cars. And neither of them has really yielded stable AI that we can trust. We don’t know for sure what we need to do, but we have very good reason to think that merely scaling things up will not work. The current approach keeps coming up against the same problems over and over again.

What do you see as the main problems it keeps coming up against?

Marcus: Number one is hallucinations. These systems smear together a lot of words, and they come up with things that are true sometimes and not others. Like saying that I have a pet chicken named Henrietta is just not true. And they do this a lot. We’ve seen this play out, for example, in lawyers writing briefs with made-up cases.

Second, their reasoning is very poor. My favorite examples lately are these river-crossing word problems where you have a man and a cabbage and a wolf and a goat that have to get across. The system has a lot of memorized examples, but it doesn’t really understand what’s going on. If you give it a simpler problem, like one Doug Hofstadter sent to me, like: “A man and a woman have a boat and want to get across the river. What do they do?” It comes up with this crazy solution where the man goes across the river, leaves the boat there, swims back, something or other happens.

Sometimes he brings a cabbage along, just for fun.

Marcus: So those are boneheaded errors of reasoning where there’s something obviously amiss. Every time we point these errors out somebody says, “Yeah, but we’ll get more data. We’ll get it fixed.” Well, I’ve been hearing that for almost 30 years. And although there is some progress, the core problems have not changed.

Let’s go back to 2014 when you founded your first AI company, Geometric Intelligence. At that time, I imagine you were feeling more bullish on AI?

Marcus: Yeah, I was a lot more bullish. I was not only more bullish on the technical side. I was also more bullish about people using AI for good. AI used to feel like a small research community of people that really wanted to help the world.

So when did the disillusionment and doubt creep in?

Marcus: In 2018 I already thought deep learning was getting overhyped. That year I wrote this piece called “Deep Learning: A Critical Appraisal,” which Yann LeCun really hated at the time. I already wasn’t happy with this approach and I didn’t think it was likely to succeed. But that’s not the same as being disillusioned, right?

Then when large language models became popular [around 2019], I immediately thought they were a bad idea. I just thought this is the wrong way to pursue AI from a philosophical and technical perspective. And it became clear that the media and some people in machine learning were getting seduced by hype. That bothered me. So I was writing pieces about GPT-3 [an early version of OpenAI's large language model] being a bullshit artist in 2020. As a scientist, I was pretty disappointed in the field at that point. And then things got much worse when ChatGPT came out in 2022, and most of the world lost all perspective. I began to get more and more concerned about misinformation and how large language models were going to potentiate that.

You’ve been concerned not just about the startups, but also the big entrenched tech companies that jumped on the generative AI bandwagon, right? Like Microsoft, which has partnered with OpenAI?

Marcus: The last straw that made me move from doing research in AI to working on policy was when it became clear that Microsoft was going to race ahead no matter what. That was very different from 2016 when they released [an early chatbot named] Tay. It was bad, they took it off the market 12 hours later, and then Brad Smith wrote a book about responsible AI and what they had learned. But by the end of the month of February 2023, it was clear that Microsoft had really changed how they were thinking about this. And then they had this ridiculous “Sparks of AGI” paper, which I think was the ultimate in hype. And they didn’t take down Sydney after the crazy Kevin Roose conversation where [the chatbot] Sydney told him to get a divorce and all this stuff. It just became clear to me that the mood and the values of Silicon Valley had really changed, and not in a good way.

I also became disillusioned with the U.S. government. I think the Biden administration did a good job with its executive order. But it became clear that the Senate was not going to take the action that it needed. I spoke at the Senate in May 2023. At the time, I felt like both parties recognized that we can’t just leave all this to self-regulation. And then I became disillusioned [with Congress] over the course of the last year, and that’s what led to writing this book.

You talk a lot about the risks inherent in today’s generative AI technology. But then you also say, “It doesn’t work very well.” Are those two views coherent?

Marcus: There was a headline: “Gary Marcus Used to Call AI Stupid, Now He Calls It Dangerous.” The implication was that those two things can’t coexist. But in fact, they do coexist. I still think gen AI is stupid, and certainly cannot be trusted or counted on. And yet it is dangerous. And some of the danger actually stems from its stupidity. So for example, it’s not well-grounded in the world, so it’s easy for a bad actor to manipulate it into saying all kinds of garbage. Now, there might be a future AI that might be dangerous for a different reason, because it’s so smart and wily that it outfoxes the humans. But that’s not the current state of affairs.

You’ve said that generative AI is a bubble that will soon burst. Why do you think that?

Marcus: Let’s clarify: I don’t think generative AI is going to disappear. For some purposes, it is a fine method. You want to build autocomplete, it is the best method ever invented. But there’s a financial bubble because people are valuing AI companies as if they’re going to solve artificial general intelligence. In my view, it’s not realistic. I don’t think we’re anywhere near AGI. So then you’re left with, “Okay, what can you do with generative AI?”

Last year, because Sam Altman was such a good salesman, everybody fantasized that we were about to have AGI and that you could use this tool in every aspect of every corporation. And a whole bunch of companies spent a bunch of money testing generative AI out on all kinds of different things. So they spent 2023 doing that. And then what you’ve seen in 2024 are reports where researchers go to the users of Microsoft’s Copilot—not the coding tool, but the more general AI tool—and they’re like, “Yeah, it doesn’t really work that well.” There’s been a lot of reviews like that this last year.

The reality is, right now, the gen AI companies are actually losing money. OpenAI had an operating loss of something like $5 billion last year. Maybe you can sell $2 billion worth of gen AI to people who are experimenting. But unless they adopt it on a permanent basis and pay you a lot more money, it’s not going to work. I started calling OpenAI the possible WeWork of AI after it was valued at $86 billion. The math just didn’t make sense to me.

What would it take to convince you that you’re wrong? What would be the head-spinning moment?

Marcus: Well, I’ve made a lot of different claims, and all of them could be wrong. On the technical side, if someone could get a pure large language model to not hallucinate and to reason reliably all the time, I would be wrong about that very core claim that I have made about how these things work. So that would be one way of refuting me. It hasn’t happened yet, but it’s at least logically possible.

On the financial side, I could easily be wrong. But the thing about bubbles is that they’re mostly a function of psychology. Do I think the market is rational? No. So even if the stuff doesn’t make money for the next five years, people could keep pouring money into it.

The place that I’d like to prove me wrong is the U.S. Senate. They could get their act together, right? I’m running around saying, “They’re not moving fast enough,” but I would love to be proven wrong on that. In the book, I have a list of the 12 biggest risks of generative AI. If the Senate passed something that actually addressed all 12, then my cynicism would have been misplaced. I would feel like I’d wasted a year writing the book, and I would be very, very happy.

Arzeda is using AI to design proteins for natural sweeteners and more

17 September 2024 at 15:00

AI is increasingly being applied to protein design, the process of creating new proteins with specific, target characteristics. Protein design’s applications are myriad, but it’s a promising way of discovering drug-based treatments to combat diseases and creating new homecare, agriculture, food-based, and materials products. One among the many vendors developing AI tech to design proteins, […]


Generative AI startup Typeface acquires two companies, Treat and Narrato, to bolster its portfolio

16 September 2024 at 17:00

Typeface, a generative AI startup focused on enterprise use cases, has acquired a pair of companies just over a year after raising $165 million at a $1 billion valuation. Typeface revealed on Monday that it has purchased Treat, a company using AI to create personalized photo products, and Narrato, an AI-powered content creation and management […]


Runway announces an API for its video-generating AI models

16 September 2024 at 16:33

Runway, one of several AI startups developing video-generating tech, today announced an API to allow devs and organizations to build the company’s generative AI models into third-party platforms, apps, and services. Currently in limited access (there’s a waitlist), the Runway API only offers a single model to choose from — Gen-3 Alpha Turbo, a faster […]


AI coding assistant Supermaven raises cash from OpenAI and Perplexity co-founders

16 September 2024 at 15:00

Supermaven, an AI coding assistant, has raised $12 million in a funding round that had participation from OpenAI and Perplexity co-founders.


Oprah just had an AI special with Sam Altman and Bill Gates — here are the highlights

13 September 2024 at 06:03

Late Thursday evening, Oprah Winfrey aired a special on AI, appropriately titled “AI and the Future of Us.” Guests included OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and current FBI director Christopher Wray. The dominant tone was one of skepticism — and wariness. Oprah noted in prepared remarks that the AI genie is out […]


ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies. That growth has propelled OpenAI itself into […]


Ansys SimAI Software Predicts Fully Transient Vehicle Crash Outcomes

By: Ansys
20 August 2024 at 20:09

The Ansys SimAI™ cloud-enabled generative artificial intelligence (AI) platform combines the predictive accuracy of Ansys simulation with the speed of generative AI. Because the software’s underlying neural networks are versatile, it can extend to many types of simulation, including structural applications.
This white paper shows how the SimAI cloud-based software applies to highly nonlinear, transient structural simulations, such as automobile crashes, covering:

  • Vehicle kinematics and deformation
  • Forces acting upon the vehicle
  • Vehicle-environment interactions
  • How the rapid, evolving sequence of events helps predict outcomes

These simulations help engineers understand a crash’s overall dynamics, reducing the potential for occupant injuries and the severity of vehicle damage. Ultimately, this leads to safer automotive design.

Download this free whitepaper now!
