
Due to AI fakes, the “deep doubt” era is here

18 September 2024 at 11:00
[Image: A person writing. Credit: Memento | Aurich Lawson]

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind "deep doubt" isn't new, but its real-world impact is becoming increasingly apparent. Since the term "deepfake" first surfaced in 2017, we've seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump's baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried "AI" again over a photo of him with E. Jean Carroll, the writer who successfully sued him for sexual abuse; the photo contradicts his claim of never having met her.


California’s 5 new AI laws crack down on election deepfakes and actor clones

18 September 2024 at 02:28

On Tuesday, California Governor Gavin Newsom signed some of America’s toughest laws yet regulating the artificial intelligence sector. Three of these laws crack down on AI deepfakes that could influence elections, while two others prohibit Hollywood studios from creating an AI clone of an actor’s body or voice without their consent. “Home to the majority […]



White House extracts voluntary commitments from AI vendors to combat deepfake nudes

12 September 2024 at 17:14

The White House says several major AI vendors have committed to taking steps to combat nonconsensual deepfakes and child sexual abuse material. Adobe, Cohere, Microsoft, Anthropic, OpenAI, and data provider Common Crawl said that they’ll “responsibly” source the datasets they create and use to train AI, and safeguard those datasets from image-based sexual abuse. These organizations — […]


My dead father is “writing” me notes again

12 September 2024 at 13:00
[Image: An AI-generated image featuring my late father's handwriting. Credit: Benj Edwards / Flux]

Growing up, if I wanted to experiment with something technical, my dad made it happen. We shared dozens of tech adventures together, but those adventures were cut short when he died of cancer in 2013. Thanks to a new AI image generator, it turns out that my dad and I still have one more adventure to go.

Recently, an anonymous AI hobbyist discovered that an image synthesis model called Flux can reproduce someone's handwriting very accurately if specially trained to do so. I decided to experiment with the technique using written journals my dad left behind. The results astounded me and raised deep questions about ethics, the authenticity of media artifacts, and the personal meaning behind handwriting itself.
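
The article doesn't spell out the hobbyist's training recipe, but the general pattern (fine-tune a small LoRA adapter on scanned handwriting samples, then prompt the base model with a trigger phrase) can be sketched with the Hugging Face diffusers library. This is a minimal sketch under stated assumptions, not the actual setup behind these images: the adapter path and the trigger phrase below are hypothetical stand-ins.

```python
import torch
from diffusers import FluxPipeline

# Load the base Flux model (FLUX.1-dev from Black Forest Labs).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # lets the pipeline fit on a single consumer GPU

# Hypothetical LoRA adapter, fine-tuned beforehand on scanned handwriting samples.
pipe.load_lora_weights("./handwriting-lora")

# "HNDWRTNG" is a hypothetical trigger phrase baked in during fine-tuning.
image = pipe(
    prompt="a handwritten note on lined paper, HNDWRTNG style",
    height=512,
    width=768,
    guidance_scale=3.5,
    num_inference_steps=28,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("note.png")
```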

Beyond that, I'm also happy that I get to see my dad's handwriting again. Captured by a neural network, part of him will live on in a dynamic way that was impossible a decade ago. It's been a while since he died, and I am no longer grieving. From my perspective, this is a celebration of something great about my dad—reviving the distinct way he wrote and what that conveys about who he was.


Taylor Swift cites AI deepfakes in endorsement for Kamala Harris

11 September 2024 at 21:56
[Image: A screenshot of Taylor Swift's Kamala Harris Instagram post, captured on September 11, 2024. Credit: Taylor Swift / Instagram]

On Tuesday night, Taylor Swift endorsed Vice President Kamala Harris for US President on Instagram, citing concerns over AI-generated deepfakes as a key motivator. The artist's warning aligns with current trends in technology, especially in an era where AI synthesis models can easily create convincing fake images and videos.

"Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site," she wrote in her Instagram post. "It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth."

In August 2024, former President Donald Trump posted AI-generated images on Truth Social falsely suggesting Swift endorsed him, including a manipulated photo depicting Swift as Uncle Sam with text promoting Trump. The incident sparked Swift's fears about the spread of misinformation through AI.


Taylor Swift cites ‘fears around AI’ as she endorses the Democratic ticket

11 September 2024 at 19:32

After a historic presidential debate replete with discourse about eating pets, Taylor Swift ended the evening with a bang. Arguably the most powerful figure in American pop culture, the singer-songwriter chose debate night to announce on Instagram that she plans to vote for Kamala Harris in the presidential election. Swift’s endorsement is monumental. She holds […]


Deepfake Porn Is Leading to a New Protection Industry



It’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s generative AI tools. A 2023 report by Home Security Heroes (a company that reviews identity-theft protection services) found that it took just one clear image of a face and less than 25 minutes to create a 60-second deepfake pornographic video—for free.

The world took notice of this new reality in January when graphic deepfake images of Taylor Swift circulated on social media platforms, with one image receiving 47 million views before it was removed. Others in the entertainment industry, most notably Korean pop stars, have also seen their images taken and misused—but so have people far from the public spotlight. There’s one thing that virtually all the victims have in common, though: According to the 2023 report, 99 percent of victims are women or girls.

This dire situation is spurring action, largely from women who are fed up. As one startup founder, Nadia Lee, puts it: “If safety tech doesn’t accelerate at the same pace as AI development, then we are screwed.” While there’s been considerable research on deepfake detectors, they struggle to keep up with deepfake generation tools. What’s more, detectors help only if a platform is interested in screening out deepfakes, and most deepfake porn is hosted on sites dedicated to that genre.
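
As a rough illustration of what platform-side screening involves, the scoring step of a detector can be a few lines: run each uploaded image through a classifier trained to separate camera photos from synthetic ones. The sketch below uses the Hugging Face transformers pipeline API; the model ID is a hypothetical placeholder, since no specific detector is named here.

```python
from transformers import pipeline

# Hypothetical model ID; substitute any image classifier trained to
# distinguish camera photos from AI-generated images.
detector = pipeline(
    "image-classification", model="example-org/synthetic-image-detector"
)

preds = detector("uploaded_frame.jpg")
for p in preds:  # e.g. [{'label': 'synthetic', 'score': 0.91}, ...]
    print(f"{p['label']}: {p['score']:.2f}")
```

Even a perfectly accurate score changes nothing on its own, which is the caveat above: it only helps if the platform hosting the content acts on it.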

“Our generation is facing its own Oppenheimer moment,” says Lee, CEO of the Australia-based startup That’sMyFace. “We built this thing”—that is, generative AI—“and we could go this way or that way with it.” Lee’s company is first offering visual-recognition tools to corporate clients who want to be sure their logos, uniforms, or products aren’t appearing in pornography (think, for example, of airline stewardesses). But her long-term goal is to create a tool that any woman can use to scan the entire Internet for deepfake images or videos bearing her own face.

“If safety tech doesn’t accelerate at the same pace as AI development, then we are screwed.” —Nadia Lee, That’sMyFace

Another startup founder had a personal reason for getting involved. Breeze Liu was herself a victim of deepfake pornography in 2020; she eventually found more than 800 links leading to the fake video. She felt humiliated, she says, and was horrified to find that she had little recourse: The police said they couldn’t do anything, and she herself had to identify all the sites where the video appeared and petition to get it taken down—appeals that were not always successful. There had to be a better way, she thought. “We need to use AI to combat AI,” she says.

Liu, who was already working in tech, founded Alecto AI, a startup named after a Greek goddess of vengeance. The app she’s building lets users deploy facial recognition to check for wrongful use of their own image across the major social media platforms (she’s not considering partnerships with porn platforms). Liu aims to partner with the social media platforms so her app can also enable immediate removal of offending content. “If you can’t remove the content, you’re just showing people really distressing images and creating more stress,” she says.
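
Alecto AI hasn’t published its implementation, but the matching core of such a tool can be sketched with the open-source face_recognition library: encode the user’s reference photo once, then compare that encoding against every face found in candidate images. The file names below are placeholders, and the 0.6 cutoff is the library’s conventional default rather than the company’s actual threshold.

```python
import face_recognition

# Encode the user's reference photo once (assumes exactly one clear face).
reference = face_recognition.load_image_file("reference_photo.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

# Compare against faces found in candidate images (placeholder file names).
for path in ["post_001.jpg", "post_002.jpg"]:
    candidate = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(candidate):
        # Lower distance means more similar; 0.6 is the library's
        # conventional match threshold.
        distance = face_recognition.face_distance([reference_encoding], encoding)[0]
        if distance < 0.6:
            print(f"Possible match in {path} (distance {distance:.2f})")
```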

Liu says she’s currently negotiating with Meta about a pilot program, which she says will benefit the platform by providing automated content moderation. Thinking bigger, though, she says the tool could become part of the “infrastructure for online identity,” letting people check also for things like fake social media profiles or dating site profiles set up with their image.

Can Regulations Combat Deepfake Porn?

Removing deepfake material from social media platforms is hard enough—removing it from porn platforms is even harder. To have a better chance of forcing action, advocates for protection against image-based sexual abuse think regulations are required, though they differ on what kind of regulations would be most effective.

Susanna Gibson started the nonprofit MyOwn after her own deepfake horror story. She was running for a seat in the Virginia House of Delegates in 2023 when the official Republican party of Virginia mailed out sexual imagery of her that had been created and shared without her consent, including, she says, screenshots of deepfake porn. After she narrowly lost the election, she devoted herself to leading the legislative charge in Virginia and then nationwide to fight back against image-based sexual abuse.

“The problem is that each state is different, so it’s a patchwork of laws. And some are significantly better than others.” —Susanna Gibson, MyOwn

Her first win was a bill that the Virginia governor signed in April to expand the state’s existing “revenge porn” law to cover more types of imagery. “It’s nowhere near what I think it should be, but it’s a step in the right direction of protecting people,” Gibson says.

While several federal bills have been introduced to explicitly criminalize the nonconsensual distribution of intimate imagery or deepfake porn in particular, Gibson says she doesn’t have great hopes of those bills becoming the law of the land. There’s more action at the state level, she says.

“Right now there are 49 states, plus D.C., that have legislation against nonconsensual distribution of intimate imagery,” Gibson says. “But the problem is that each state is different, so it’s a patchwork of laws. And some are significantly better than others.” Gibson notes that almost all of the laws require proof that the perpetrator acted with intent to harass or intimidate the victim, which can be very hard to prove.

Among the different laws, and the proposals for new laws, there’s considerable disagreement about whether the distribution of deepfake porn should be considered a criminal or civil matter. And if it’s civil, which means that victims have the right to sue for damages, there’s disagreement about whether the victims should be able to sue the individuals who distributed the deepfake porn or the platforms that hosted it.

Beyond the United States is an even larger patchwork of policies. In the United Kingdom, the Online Safety Act passed in 2023 criminalized the distribution of deepfake porn, and an amendment proposed this year may criminalize its creation as well. The European Union recently adopted a directive that combats violence and cyberviolence against women, which includes the distribution of deepfake porn, but member states have until 2027 to implement the new rules. In Australia, a 2021 law made it a civil offense to post intimate images without consent, but a newly proposed law aims to make it a criminal offense, and also aims to explicitly address deepfake images. South Korea has a law that directly addresses deepfake material, and unlike many others, it doesn’t require proof of malicious intent. China has a comprehensive law restricting the distribution of “synthetic content,” but there’s been no evidence of the government using the regulations to crack down on deepfake porn.

While women wait for regulatory action, services from companies like Alecto AI and That’sMyFace may fill the gaps. But the situation calls to mind the rape whistles that some urban women carry in their purses so they’re ready to summon help if they’re attacked in a dark alley. It’s useful to have such a tool, sure, but it would be better if our society cracked down on sexual predation in all its forms, and tried to make sure that the attacks don’t happen in the first place.

