Today — 19 September 2024

“Dead Internet theory” comes to life with new AI-powered social media app

19 September 2024 at 00:19
[Image: People in a hall of mirrors. Credit: gremlin via Getty Images]

For the past few years, a conspiracy theory called "Dead Internet theory" has picked up speed as large language models (LLMs) like ChatGPT increasingly generate text and even social media interactions found online. The theory says that most social Internet activity today is artificial and designed to manipulate humans for engagement.

On Monday, software developer Michael Sayman launched a new AI-populated social network app called SocialAI that feels like it's bringing that conspiracy theory to life, allowing users to interact solely with AI chatbots instead of other humans. It's available in the iPhone App Store, but so far, it's picking up pointed criticism.

After its creator announced SocialAI as "a private social network where you receive millions of AI-generated comments offering feedback, advice & reflections on each post you make," computer security specialist Ian Coldwater quipped on X, "This sounds like actual hell." Software developer and frequent AI pundit Colin Fraser expressed a similar sentiment: "I don’t mean this like in a mean way or as a dunk or whatever but this actually sounds like Hell. Like capital H Hell."


Fal.ai, which hosts media-generating AI models, raises $23M from a16z and others

18 September 2024 at 22:14

Fal.ai, a dev-focused platform for AI-generated audio, video, and images, today revealed that it’s raised $23 million in funding from investors including Andreessen Horowitz (a16z), Black Forest Labs co-founder Robin Rombach, and Perplexity CEO Aravind Srinivas. It’s a two-round deal: $14 million of Fal’s total came from a Series A tranche led by Kindred Ventures; […]


Landmark AI deal sees Hollywood giant Lionsgate provide library for AI training

18 September 2024 at 22:10
[Image: An illustration of a filmstrip with a robot, horse, rocket, and whale. Credit: Benj Edwards / Malte Mueller via Getty Images]

On Wednesday, AI video synthesis firm Runway and entertainment company Lionsgate announced a partnership to create a new AI model trained on Lionsgate's vast film and TV library. The deal will feed Runway legally clear training data and will also reportedly provide Lionsgate with tools to enhance content creation while potentially reducing production costs.

Lionsgate, known for franchises like John Wick and The Hunger Games, sees AI as a way to boost efficiency in content production. Michael Burns, Lionsgate's vice chair, stated in a press release that AI could help develop "cutting edge, capital efficient content creation opportunities." He added that some filmmakers have shown enthusiasm about potential applications in pre- and post-production processes.

Runway plans to develop a custom AI model using Lionsgate's proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.


Yesterday — 18 September 2024

Hawaii hikers report exploding guts as norovirus outbreak hits famous trail

By: Beth Mole
18 September 2024 at 18:39
[Image: The Kalalau Valley between sheer cliffs in Na Pali Coast State Park on the western shore of Kauai, Hawaii, as seen from the Pihea Trail in Kokee State Park. Credit: Getty | Jon G. Fuller]

The Hawaiian island of Kauai may not have any spewing lava, but hikers along the magnificent Napali coast have brought their own volcanic action recently, violently emptying their innards amid the gushing waterfalls and deeply carved valleys.

Between August and early September, at least 50 hikers fell ill with norovirus along the famed Kalalau Trail, which has been closed since September 4 for a deep cleaning. The rugged 11-mile trail runs along the northwest coast of the island, giving adventurers breathtaking views of stunning sea cliffs and Kauai's lush valleys. It's situated just north of Waimea Canyon State Park, also known as the Grand Canyon of the Pacific.

"It’s one of the most beautiful places in the world. I feel really fortunate to be able to be there, and appreciate and respect that land,” one hiker who fell ill in late August told The Washington Post. "My guts exploding all over that land was not what I wanted to do at all."


LinkedIn scraped user data for training before updating its terms of service

18 September 2024 at 19:15

LinkedIn may have trained AI models on user data without updating its terms. LinkedIn users in the U.S. — but not the EU, EEA, or Switzerland, likely due to those regions’ data privacy rules — have an opt-out toggle in their settings screen disclosing that LinkedIn scrapes personal data to train “content creation AI models.” […]


This Week in AI: Why OpenAI’s o1 changes the AI regulation game

18 September 2024 at 19:05

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here. It’s been just a few days since OpenAI revealed its latest flagship generative model, o1, to the world. Marketed as a “reasoning” model, o1 essentially takes longer to “think” about questions before answering them, breaking down […]


Generative AI startup Runway inks deal with a major Hollywood studio

18 September 2024 at 15:36

Runway, a startup developing AI video tools, including video-generating models, has partnered with Lionsgate — the studio behind the “John Wick” and “Twilight” franchises — to train a custom video model on Lionsgate’s movie catalog. Lionsgate vice chair Michael Burns said in a statement that the studio’s “filmmakers, directors and other creative talent” will get […]


Apple Intelligence will support German, Italian, Korean, Portuguese, and Vietnamese in 2025

18 September 2024 at 14:00

Apple announced Wednesday that its generative AI offering will be available in even more languages in 2025. Additions to Apple Intelligence include English (India), English (Singapore), German, Italian, Korean, Portuguese, Vietnamese, and “others” yet to be announced. The feature will launch in American English when it arrives as part of the iOS 18.1 update. The […]


There are more than 120 AI bills in Congress right now

18 September 2024 at 11:30

More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress.

They’re pretty varied. One aims to improve knowledge of AI in public schools, while another pushes for model developers to disclose what copyrighted material they use in training. Three deal with mitigating AI robocalls, while two address biological risks from AI. There’s even a bill that prohibits AI from launching a nuke on its own.

The flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. “There is a sense of urgency. There’s a commitment to addressing this issue, because it is developing so quickly and because it is so crucial to our economy,” says Heather Vaughan, director of communications for the US House of Representatives Committee on Science, Space, and Technology.

Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at all the different bills that are in motion can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.

That’s why, with help from the Brennan Center for Justice, which created a tracker with all the AI bills circulating in various committees in Congress right now, MIT Technology Review has taken a closer look to see if there’s anything we can learn from this legislative smorgasbord. 

It can seem as if Congress is trying to do everything at once when it comes to AI. To get a better sense of what may actually pass, it’s useful to look at which bills are moving along to potentially become law.

A bill typically needs to pass a committee, or a smaller body of Congress, before it is voted on by the whole Congress. Many will fall short at this stage, while others will simply be introduced and then never spoken of again. This happens because there are so many bills presented in each session, and not all of them are given equal consideration. If the leaders of a party don’t feel a bill from one of its members can pass, they may not even try to push it forward. And then, depending on the makeup of Congress, a bill’s sponsor usually needs to get some members of the opposite party to support it for it to pass. In the current polarized US political climate, that task can be herculean. 

Congress has passed legislation on artificial intelligence before. Back in 2020, the National AI Initiative Act was part of the Defense Authorization Act, which invested resources in AI research and provided support for public education and workforce training on AI.

And some of the current bills are making their way through the system. The Senate Commerce Committee pushed through five AI-related bills at the end of July. One would authorize the newly formed US AI Safety Institute (AISI) to create test beds and voluntary guidelines for AI models. The others focus on expanding education on AI, establishing public computing resources for AI research, and criminalizing the publication of deepfake pornography. The next step would be to put the bills on the congressional calendar to be voted on, debated, or amended.

“The US AI Safety Institute, as a place to have consortium building and easy collaboration between corporate and civil society actors, is amazing. It’s exactly what we need,” says Yacine Jernite, an AI researcher at Hugging Face.

The progress of these bills is a positive development, says Varun Krovi, executive director of the Center for AI Safety Action Fund. “We need to codify the US AI Safety Institute into law if you want to maintain our leadership on the global stage when it comes to standards development,” he says. “And we need to make sure that we pass a bill that provides computing capacity required for startups, small businesses, and academia to pursue AI.”

Following the Senate’s lead, the House Committee on Science, Space, and Technology just passed nine more bills regarding AI on September 11. Those bills focused on improving education on AI in schools, directing the National Institute of Standards and Technology (NIST) to establish guidelines for artificial-intelligence systems, and expanding the workforce of AI experts. These bills were chosen because they have a narrower focus and thus might not get bogged down in big ideological battles on AI, says Vaughan.

“It was a day that culminated from a lot of work. We’ve had a lot of time to hear from members and stakeholders. We’ve had years of hearings and fact-finding briefings on artificial intelligence,” says Representative Haley Stevens, one of the Democratic members of the House committee.

Many of the bills specify that any guidance they propose for the industry is nonbinding and that the goal is to work with companies to ensure safe development rather than curtail innovation. 

For example, one of the bills from the House, the AI Development Practices Act, directs NIST to establish “voluntary guidance for practices and guidelines relating to the development … of AI systems” and a “voluntary risk management framework.” Another bill, the AI Advancement and Reliability Act, has similar language. It supports “the development of voluntary best practices and technical standards” for evaluating AI systems. 

“Each bill contributes to advancing AI in a safe, reliable, and trustworthy manner while fostering the technology’s growth and progress through innovation and vital R&D,” committee chairman Frank Lucas, an Oklahoma Republican, said in a press release on the bills coming out of the House.

“It’s emblematic of the approach that the US has taken when it comes to tech policy. We hope that we would move on from voluntary agreements to mandating them,” says Krovi.

Avoiding mandates is a practical matter for the House committee. “Republicans don’t go in for mandates for the most part. They generally aren’t going to go for that. So we would have a hard time getting support,” says Vaughan. “We’ve heard concerns about stifling innovation, and that’s not the approach that we want to take.” When MIT Technology Review asked about the origin of these concerns, they were attributed to unidentified “third parties.” 

And fears of slowing innovation don’t just come from the Republican side. “What’s most important to me is that the United States of America is establishing aggressive rules of the road on the international stage,” says Stevens. “It’s concerning to me that actors within the Chinese Communist Party could outpace us on these technological advancements.”

But these bills come at a time when big tech companies have ramped up lobbying efforts on AI. “Industry lobbyists are in an interesting predicament—their CEOs have said that they want more AI regulation, so it’s hard for them to visibly push to kill all AI regulation,” says David Evan Harris, who teaches courses on AI ethics at the University of California, Berkeley. “On the bills that they don’t blatantly try to kill, they instead try to make them meaningless by pushing to transform the language in the bills to make compliance optional and enforcement impossible.”

“A [voluntary commitment] is something that is also only accessible to the largest companies,” says Jernite at Hugging Face, claiming that sometimes the ambiguous nature of voluntary commitments allows big companies to set definitions for themselves. “If you have a voluntary commitment—that is, ‘We’re going to develop state-of-the-art watermarking technology’—you don’t know what state-of-the-art means. It doesn’t come with any of the concrete things that make regulation work.”

“We are in a very aggressive policy conversation about how to do this right, and how this carrot and stick is actually going to work,” says Stevens, indicating that Congress may ultimately draw red lines that AI companies must not cross.

There are other interesting insights to be gleaned from looking at the bills all together. Two-thirds of the AI bills are sponsored by Democrats. This isn’t too surprising, since some House Republicans have claimed to want no AI regulations, believing that guardrails will slow down progress.

The topics of the bills (as specified by Congress) are dominated by science, tech, and communications (28%), commerce (22%), updating government operations (18%), and national security (9%). Topics that don’t receive much attention include labor and employment (2%), environmental protection (1%), and civil rights, civil liberties, and minority issues (1%).

The lack of a focus on equity and minority issues came into view during the Senate markup session at the end of July. Senator Ted Cruz, a Republican, added an amendment that explicitly prohibits any action “to ensure inclusivity and equity in the creation, design, or development of the technology.” Cruz said regulatory action might slow US progress in AI, allowing the country to fall behind China.

On the House side, there was also a hesitation to work on bills dealing with biases in AI models. “None of our bills are addressing that. That’s one of the more ideological issues that we’re not moving forward on,” says Vaughan.

The lead Democrat on the House committee, Representative Zoe Lofgren, told MIT Technology Review, “It is surprising and disappointing if any of my Republican colleagues have made that comment about bias in AI systems. We shouldn’t tolerate discrimination that’s overt and intentional any more than we should tolerate discrimination that occurs because of bias in AI systems. I’m not really sure how anyone can argue against that.”

After publication, Vaughan clarified that “[Bias] is one of the bigger, more cross-cutting issues, unlike the narrow, practical bills we considered that week. But we do care about bias as an issue,” and she expects it to be addressed within an upcoming House Task Force report.

One issue that may rise above the partisan divide is deepfakes. The Defiance Act, one of several bills addressing them, is cosponsored by a Democratic senator, Amy Klobuchar, and a Republican senator, Josh Hawley. Deepfakes have already been abused in elections; for example, someone faked Joe Biden’s voice for a robocall to tell citizens not to vote. And the technology has been weaponized to victimize people by incorporating their images into pornography without their consent. 

“I certainly think that there is more bipartisan support for action on these issues than on many others,” says Daniel Weiner, director of the Brennan Center’s Elections & Government Program. “But it remains to be seen whether that’s going to win out against some of the more traditional ideological divisions that tend to arise around these issues.” 

Although none of the current slate of bills have resulted in laws yet, the task of regulating any new technology, and specifically advanced AI systems that no one entirely understands, is difficult. The fact that Congress is making any progress at all may be surprising in itself. 

“Congress is not sleeping on this by any stretch of the means,” says Stevens. “We are evaluating and asking the right questions and also working alongside our partners in the Biden-Harris administration to get us to the best place for the harnessing of artificial intelligence.”

Update: We added further comments from the Republican spokesperson.

Due to AI fakes, the “deep doubt” era is here

18 September 2024 at 11:00
[Image: A person writing. Credit: Memento | Aurich Lawson]

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind "deep doubt" isn't new, but its real-world impact is becoming increasingly apparent. Since the term "deepfake" first surfaced in 2017, we've seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump's baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried "AI" again, this time over a photo of him with E. Jean Carroll, the writer who successfully sued him for sexual assault; the photo contradicts his claim of never having met her.


Governor Newsom on California AI bill SB 1047: ‘I can’t solve for everything’

18 September 2024 at 06:31

California Governor Gavin Newsom said Tuesday that there are 38 bills on his desk that would create laws around artificial intelligence, but one looms larger than all of them: SB 1047, California’s bill that tries to prevent AI systems from causing catastrophes. For the first time, California’s Governor shared how he’s thinking about the controversial […]


California’s 5 new AI laws crack down on election deepfakes and actor clones

18 September 2024 at 02:28

On Tuesday, California Governor Gavin Newsom signed some of America’s toughest laws yet regulating the artificial intelligence sector. Three of these laws crack down on AI deepfakes that could influence elections, while two others prohibit Hollywood studios from creating an AI clone of an actor’s body or voice without their consent. “Home to the majority […]


BlackRock and Microsoft are reportedly planning a $30B AI-focused megafund

17 September 2024 at 23:15

Investment powerhouse BlackRock is set to launch a massive AI-focused fund, exceeding $30 billion, in collaboration with Microsoft and the Abu Dhabi-backed investment outfit MGX, the FT reported today. According to the outlet, the fund — among Wall Street’s largest — will focus on creating data centers and funding energy infrastructure to support AI. Chip […]


Before yesterday

Google seeks authenticity in the age of AI with new content labeling system

17 September 2024 at 22:07
[Image: Under C2PA, this stock image would be labeled as a real photograph if the camera used to take it, and the toolchain for retouching it, supported the C2PA standard. But even as a real photo, does it actually represent reality, and is there a technological solution to that problem? Credit: Smile via Getty Images]

On Tuesday, Google announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images. Over the coming months, the tech giant will integrate the Coalition for Content Provenance and Authenticity (C2PA) standard, a system designed to track the origin and editing history of digital content, into its search, ads, and potentially YouTube services. However, it's an open question whether a technological solution can address the ancient social issue of trust in recorded media produced by strangers.

A group of tech companies created the C2PA system beginning in 2019 in an attempt to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that it may be difficult for users to determine the authenticity of images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata information about where images originate and how they've been modified.
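
To make that trail concrete, here is a minimal, hypothetical sketch of what walking a C2PA-style manifest might look like. Real manifests are cryptographically signed binary structures read through a C2PA SDK; the dictionary layout, tool names, and file names below are simplified stand-ins for illustration, though labels like "c2pa.actions" and concepts like the claim generator and ingredients mirror the standard's vocabulary.

```python
# A hedged, hypothetical sketch of walking a C2PA-style provenance trail.
# Real manifests are signed JUMBF/CBOR structures read via a C2PA SDK; the
# dict below is a simplified stand-in, not the actual manifest format.

def summarize_provenance(manifest: dict) -> None:
    """Print which tool produced an asset and what recorded actions touched it."""
    print(f"Claim generator: {manifest.get('claim_generator', 'unknown')}")

    # Assertions record what happened to the asset (capture, edits, etc.).
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                tool = action.get("softwareAgent", "unknown tool")
                print(f"  action: {action.get('action')} via {tool}")

    # Ingredients chain the manifest back to earlier versions of the asset.
    for ingredient in manifest.get("ingredients", []):
        print(f"  derived from: {ingredient.get('title', 'unnamed ingredient')}")

# Hypothetical manifest for a photo captured by a camera, then retouched.
example = {
    "claim_generator": "ExampleCamera/1.0",
    "assertions": [{
        "label": "c2pa.actions",
        "data": {"actions": [
            {"action": "c2pa.created", "softwareAgent": "ExampleCamera/1.0"},
            {"action": "c2pa.edited", "softwareAgent": "ExampleEditor/2.3"},
        ]},
    }],
    "ingredients": [{"title": "original_capture.jpg"}],
}

summarize_provenance(example)
```

A display surface like Google's "About this image" would verify the manifest's signatures first and only then surface this kind of creation-and-edit summary to the user.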

Google will incorporate this C2PA standard into its search results, allowing users to see if an image was created or edited using AI tools. The tech giant's "About this image" feature in Google Search, Lens, and Circle to Search will display this information when available.


Mistral launches a free tier for developers to test its AI models

17 September 2024 at 21:11

Mistral AI launched a new free tier to let developers fine-tune and build test apps with the startup’s AI models, the company announced in a blog post Tuesday. The startup also slashed prices for developers to access its AI models through API endpoints and added image processing to its free consumer AI chatbot, le Chat. […]
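
For a rough sense of what that API-endpoint access looks like, here is a minimal sketch calling Mistral's OpenAI-compatible chat completions endpoint over plain HTTP. The model alias shown ("mistral-small-latest") and the free-tier entitlements are assumptions based on the API's documented conventions, not details from this article.

```python
# A minimal sketch of calling Mistral's chat completions API over HTTP.
# The endpoint follows OpenAI-compatible conventions; the model alias is
# an assumption and depends on what the account's tier is entitled to.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-small-latest",
        "messages": [{"role": "user", "content": "Say hello in French."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```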


AWS shuts down DeepComposer, its MIDI keyboard for AI music

17 September 2024 at 19:51

AWS’ weird AI-powered keyboard experiment, DeepComposer, is no more. In a blog post today, the company announced it’s shutting down the 5-year-old DeepComposer, a physical MIDI piano and AWS service that let users compose songs with the help of generative AI. “After careful consideration, we have made the decision to end support for AWS DeepComposer,” […]

