Today — 19 September 2024 — Main stream

“Dead Internet theory” comes to life with new AI-powered social media app

19 September 2024 at 00:19
People in a hall of mirrors. (credit: gremlin via Getty Images)

For the past few years, a conspiracy theory called "Dead Internet theory" has picked up speed as large language models (LLMs) like ChatGPT increasingly generate text and even social media interactions found online. The theory says that most social Internet activity today is artificial and designed to manipulate humans for engagement.

On Monday, software developer Michael Sayman launched a new AI-populated social network app called SocialAI that feels like it's bringing that conspiracy theory to life, allowing users to interact solely with AI chatbots instead of other humans. It's available on Apple's App Store, but so far, it's picking up pointed criticism.

After its creator announced SocialAI as "a private social network where you receive millions of AI-generated comments offering feedback, advice & reflections on each post you make," computer security specialist Ian Coldwater quipped on X, "This sounds like actual hell." Software developer and frequent AI pundit Colin Fraser expressed a similar sentiment: "I don’t mean this like in a mean way or as a dunk or whatever but this actually sounds like Hell. Like capital H Hell."


Landmark AI deal sees Hollywood giant Lionsgate provide library for AI training

18 September 2024 at 22:10
An illustration of a filmstrip with a robot, horse, rocket, and whale. (credit: Benj Edwards / Malte Mueller via Getty Images)

On Wednesday, AI video synthesis firm Runway and entertainment company Lionsgate announced a partnership to create a new AI model trained on Lionsgate's vast film and TV library. The deal will feed Runway legally clear training data and will also reportedly provide Lionsgate with tools to enhance content creation while potentially reducing production costs.

Lionsgate, known for franchises like John Wick and The Hunger Games, sees AI as a way to boost efficiency in content production. Michael Burns, Lionsgate's vice chair, stated in a press release that AI could help develop "cutting edge, capital efficient content creation opportunities." He added that some filmmakers have shown enthusiasm about potential applications in pre- and post-production processes.

Runway plans to develop a custom AI model using Lionsgate's proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.


Massive China-state IoT botnet went undetected for four years—until now

18 September 2024 at 21:58
(credit: Getty Images)

The FBI has dismantled a massive network of compromised devices that Chinese state-sponsored hackers have used for four years to mount attacks on government agencies, telecoms, defense contractors, and other targets in the US and Taiwan.

The botnet was made up primarily of small office and home office routers, surveillance cameras, network-attached storage, and other Internet-connected devices located all over the world. Over the past four years, US officials said, 260,000 such devices have cycled through the sophisticated network, which is organized in three tiers that allow the botnet to operate with efficiency and precision. At its peak in June 2023, Raptor Train, as the botnet is named, consisted of more than 60,000 commandeered devices, according to researchers from Black Lotus Labs, making it the largest China state botnet discovered to date.

Burning down the house

Raptor Train is the second China state-operated botnet US authorities have taken down this year. In January, law enforcement officials covertly issued commands to disinfect Internet of Things devices that hackers backed by the Chinese government had taken over without the device owners’ knowledge. The Chinese hackers, part of a group tracked as Volt Typhoon, used the botnet for more than a year as a platform to deliver exploits that burrowed deep into the networks of targets of interest. Because the attacks appear to originate from IP addresses with good reputations, they are subjected to less scrutiny from network security defenses, making the bots an ideal delivery proxy. Russia-state hackers have also been caught assembling large IoT botnets for the same purposes.


Yesterday — 18 September 2024 — Main stream

Due to AI fakes, the “deep doubt” era is here

18 September 2024 at 11:00
A person writing. (credit: Memento | Aurich Lawson)

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind "deep doubt" isn't new, but its real-world impact is becoming increasingly apparent. Since the term "deepfake" first surfaced in 2017, we've seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump's baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried "AI" again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault, that contradicts his claim of never having met her.


Before yesterday — Main stream

Google seeks authenticity in the age of AI with new content labeling system

17 September 2024 at 22:07
Under C2PA, this stock image would be labeled as a real photograph if the camera used to take it, and the toolchain for retouching it, supported the C2PA. But even as a real photo, does it actually represent reality, and is there a technological solution to that problem? (credit: Smile via Getty Images)

On Tuesday, Google announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images. Over the coming months, the tech giant will integrate the Coalition for Content Provenance and Authenticity (C2PA) standard, a system designed to track the origin and editing history of digital content, into its search, ads, and potentially YouTube services. However, it's an open question whether a technological solution can address the ancient social issue of trust in recorded media produced by strangers.

A group of tech companies created the C2PA system beginning in 2019 in an attempt to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that it may be difficult for users to determine the authenticity of images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata information about where images originate and how they've been modified.
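The core idea behind that digital trail is simple: a manifest of claims about an asset's origin and edit history, signed so that tampering is detectable. The sketch below models only the concept; the field names are invented, and an HMAC stands in for C2PA's real X.509-based signing authority and CBOR/JUMBF manifest format.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in key for the online signing authority.
AUTHORITY_KEY = b"demo-signing-authority-key"

def sign_manifest(manifest: dict) -> dict:
    """Attach a signature covering the manifest's provenance claims."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    sig = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": manifest, "signature": sig}

def verify_manifest(signed: dict) -> bool:
    """Recompute the signature; any edit to the claims breaks it."""
    payload = json.dumps(signed["claims"], sort_keys=True).encode()
    expected = hmac.new(AUTHORITY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

signed = sign_manifest({
    "generator": "ExampleCam firmware 1.2",      # where the image originated
    "actions": ["captured", "color-corrected"],  # how it was modified
    "ai_generated": False,
})
assert verify_manifest(signed)

# Rewriting the recorded history invalidates the provenance trail.
signed["claims"]["ai_generated"] = True
assert not verify_manifest(signed)
```

The scheme only proves the history was recorded by a trusted toolchain, not that the content depicts reality, which is the open question the article raises.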

Google will incorporate this C2PA standard into its search results, allowing users to see if an image was created or edited using AI tools. The tech giant's "About this image" feature in Google Search, Lens, and Circle to Search will display this information when available.


Ban warnings fly as users dare to probe the “thoughts” of OpenAI’s latest model

17 September 2024 at 00:49
An illustration of gears shaped like a brain. (credit: Andriy Onufriyenko via Getty Images)

OpenAI truly does not want you to know what its latest AI model is "thinking." Since the company launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe how the model works.

Unlike with previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an "o1" model a question in ChatGPT, they have the option of seeing this chain-of-thought process written out in the ChatGPT interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.

Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.


Secure Boot-neutering PKfail debacle is more prevalent than anyone knew

17 September 2024 at 00:13
(credit: Getty Images)

A supply chain failure that compromises Secure Boot protections on computing devices from across the device-making industry extends to a much larger number of models than previously known, including those used in ATMs, point-of-sale terminals, and voting machines.

The debacle was the result of non-production test platform keys used in hundreds of device models for more than a decade. These cryptographic keys form the root-of-trust anchor between the hardware device and the firmware that runs on it. The test keys—stamped with phrases such as “DO NOT TRUST” in the certificates—were never intended to be used in production systems. A who's-who list of device makers—including Acer, Dell, Gigabyte, Intel, Supermicro, Aopen, Formelife, Fujitsu, HP, and Lenovo—used them anyway.

Medical devices, gaming consoles, ATMs, POS terminals

Platform keys provide the root-of-trust anchor in the form of a cryptographic key embedded into the system firmware. They establish the trust between the platform hardware and the firmware that runs on it. This, in turn, provides the foundation for Secure Boot, an industry standard for cryptographically enforcing security in the pre-boot environment of a device. Built into the UEFI (Unified Extensible Firmware Interface), Secure Boot uses public-key cryptography to block the loading of any code that isn’t signed with a pre-approved digital signature.
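As a toy model of that pre-boot check, the snippet below loads an image only if its signature verifies against the platform key embedded in the firmware, then shows why a publicly known "DO NOT TRUST" test key defeats the whole scheme. The names are invented and an HMAC stands in for the real Authenticode/X.509 signature verification Secure Boot actually performs.

```python
import hashlib
import hmac

def sign_image(platform_key: bytes, image: bytes) -> str:
    """Produce a signature over a boot image (HMAC as a stand-in)."""
    return hmac.new(platform_key, image, hashlib.sha256).hexdigest()

def firmware_allows_boot(platform_key: bytes, image: bytes, sig: str) -> bool:
    """Pre-boot check: run the image only if its signature verifies
    against the platform key burned into the firmware."""
    return hmac.compare_digest(sign_image(platform_key, image), sig)

# The leaked non-production key shipped in real firmware.
VENDOR_PK = b"AMI Test PK - DO NOT TRUST"

bootloader = b"legitimate bootloader bytes"
assert firmware_allows_boot(VENDOR_PK, bootloader,
                            sign_image(VENDOR_PK, bootloader))

# PKfail in one line: because the test key is public, an attacker can
# sign arbitrary code that the firmware will load before the OS starts.
bootkit = b"malicious pre-boot code"
attacker_sig = sign_image(VENDOR_PK, bootkit)  # attacker knows the key
assert firmware_allows_boot(VENDOR_PK, bootkit, attacker_sig)
```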


Omnipresent AI cameras will ensure good behavior, says Larry Ellison

16 September 2024 at 17:22
A colorized photo of CCTV cameras in London, 2024. (credit: Benj Edwards / Mike Kemp via Getty Images)

On Thursday, Oracle co-founder Larry Ellison shared his vision for an AI-powered surveillance future during a company financial meeting, reports Business Insider. During an investor Q&A, Ellison described a world where artificial intelligence systems would constantly monitor citizens through an extensive network of cameras and drones, stating this would ensure both police and citizens don't break the law.

Ellison, who briefly became the world's second-wealthiest person last week when his net worth surpassed Jeff Bezos' for a short time, outlined a scenario where AI models would analyze footage from security cameras, police body cams, doorbell cameras, and vehicle dash cams.

"Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place. "We're going to have supervision," he continued. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report the problem and report it to the appropriate person."


1.3 million Android-based TV boxes backdoored; researchers still don’t know how

13 September 2024 at 22:20
(credit: Getty Images)

Researchers still don’t know the cause of a recently discovered malware infection affecting almost 1.3 million streaming devices running an open source version of Android in almost 200 countries.

Security firm Doctor Web reported Thursday that malware named Android.Vo1d has backdoored the Android-based boxes by putting malicious components in their system storage area, where they can be updated with additional malware at any time by command-and-control servers. Google representatives said the infected devices are running operating systems based on the Android Open Source Project, a version overseen by Google but distinct from Android TV, a proprietary version restricted to licensed device makers.

Dozens of variants

Although Doctor Web has a thorough understanding of Vo1d and the exceptional reach it has achieved, company researchers say they have yet to determine the attack vector that has led to the infections.


Google rolls out voice-powered AI chat to the Android masses

13 September 2024 at 19:37
The Google Gemini logo. (credit: Google)

On Thursday, Google made Gemini Live, its voice-based AI chatbot feature, available for free to all Android users. The feature allows users to interact with Gemini through voice commands on their Android devices. That's notable because competitor OpenAI's Advanced Voice Mode feature of ChatGPT, which is similar to Gemini Live, has not yet fully shipped.

Google unveiled Gemini Live during its Pixel 9 launch event last month. Initially, the feature was exclusive to Gemini Advanced subscribers, but now it's accessible to anyone using the Gemini app or its overlay on Android.

Gemini Live enables users to ask questions aloud and even interrupt the AI's responses mid-sentence. Users can choose from several voice options for Gemini's responses, adding a level of customization to the interaction.


Free Starlink Internet is coming to all of United’s airplanes

13 September 2024 at 16:01
Soon you'll be able to stream games and video for free on United flights. (credit: United)

United Airlines announced this morning that it is giving its in-flight Internet access an upgrade. It has signed a deal with Starlink to deliver SpaceX's satellite-based service to all its aircraft, a process that will start in 2025. And the good news for passengers is that the in-flight Wi-Fi will be free of charge.

The flying experience as it relates to consumer technology has come a very long way in the two-and-a-bit decades that Ars has been publishing. At the turn of the century, even having a power socket in your seat was a long shot. Laptop batteries didn't last that long, either—usually less than the runtime of whatever DVD I hoped to distract myself with, if memory serves.

Bring a spare battery and that might double, but it helped to have a book or magazine to read.


OpenAI’s new “reasoning” AI models are here: o1-preview and o1-mini

12 September 2024 at 21:01
An illustration of a strawberry made out of pixel-like blocks. (credit: Vlatko Gasparic via Getty Images)

OpenAI finally unveiled its rumored "Strawberry" AI language model on Thursday, claiming significant improvements in what it calls "reasoning" and problem-solving capabilities over previous large language models (LLMs). Formally named "OpenAI o1," the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and certain API users.

OpenAI claims that o1-preview outperforms its predecessor, GPT-4o, on multiple benchmarks, including competitive programming, mathematics, and "scientific reasoning." However, people who have used the model say it does not yet outclass GPT-4o in every metric. Other users have criticized the delay in receiving a response from the model, owing to the multi-step processing occurring behind the scenes before answering a query.

In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, "There's a lot of o1 hype on my feed, so I'm worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it'll only get better. (I'm personally psyched about the model's potential & trajectory!) what o1 isn't (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today's launch—but we're working to get there!"


Music industry’s 1990s hard drives, like all HDDs, are dying

12 September 2024 at 20:27
Hard drives, unfortunately, tend to die not with a spectacular and sparkly bang, but with a head-is-stuck whimper. (credit: Getty Images)

One of the things enterprise storage and destruction company Iron Mountain does is handle the archiving of the media industry's vaults. What it has been seeing lately should be a wake-up call: Roughly one-fifth of the 1990s-era hard disk drives it has been sent are entirely unreadable.

Music industry publication Mix spoke with the people in charge of backing up the entertainment industry. The resulting tale is part explainer on how music is so complicated to archive now, part warning about everyone's data stored on spinning disks.

"In our line of work, if we discover an inherent problem with a format, it makes sense to let everybody know," Robert Koszela, global director for studio growth and strategic initiatives at Iron Mountain, told Mix. "It may sound like a sales pitch, but it's not; it's a call for action."


My dead father is “writing” me notes again

12 September 2024 at 13:00
An AI-generated image featuring my late father's handwriting. (credit: Benj Edwards / Flux)

Growing up, if I wanted to experiment with something technical, my dad made it happen. We shared dozens of tech adventures together, but those adventures were cut short when he died of cancer in 2013. Thanks to a new AI image generator, it turns out that my dad and I still have one more adventure to go.

Recently, an anonymous AI hobbyist discovered that an image synthesis model called Flux can reproduce someone's handwriting very accurately if specially trained to do so. I decided to experiment with the technique using written journals my dad left behind. The results astounded me and raised deep questions about ethics, the authenticity of media artifacts, and the personal meaning behind handwriting itself.

Beyond that, I'm also happy that I get to see my dad's handwriting again. Captured by a neural network, part of him will live on in a dynamic way that was impossible a decade ago. It's been a while since he died, and I am no longer grieving. From my perspective, this is a celebration of something great about my dad—reviving the distinct way he wrote and what that conveys about who he was.


As quantum computing threats loom, Microsoft updates its core crypto library

12 September 2024 at 02:20
(credit: Getty Images)

Microsoft has updated a key cryptographic library with two new encryption algorithms designed to withstand attacks from quantum computers.

The updates were made last week to SymCrypt, the core library that handles cryptographic functions in Windows and Linux. The library, started in 2006, provides operations and algorithms developers can use to safely implement secure encryption, decryption, signing, verification, hashing, and key exchange in the apps they create. The library supports federal certification requirements for cryptographic modules used in some governmental environments.

Massive overhaul underway

Despite the name, SymCrypt supports both symmetric and asymmetric algorithms. It’s the main cryptographic library Microsoft uses in products and services including Azure, Microsoft 365, all supported versions of Windows, Azure Stack HCI, and Azure Linux. The library provides cryptographic security used in email security, cloud storage, web browsing, remote access, and device management. Microsoft documented the update in a post on Monday.
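The article doesn't walk through the new algorithms, but one widely studied family of quantum-resistant designs is hash-based signatures, whose security rests only on the strength of a hash function rather than the factoring or discrete-log problems quantum computers threaten. The Lamport one-time signature below is a minimal illustration of that idea; it is not SymCrypt's code, and each key pair must sign only a single message.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(a), H(b)] for a, b in sk]
    return sk, pk

def message_bits(message: bytes):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one secret of each pair, chosen by the message-hash bits.
    return [sk[i][bit] for i, bit in enumerate(message_bits(message))]

def verify(pk, message: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit]
               for i, bit in enumerate(message_bits(message)))

sk, pk = keygen()
sig = sign(sk, b"key exchange transcript")
assert verify(pk, b"key exchange transcript", sig)
assert not verify(pk, b"tampered transcript", sig)
```

Signing a second message with the same key leaks more secrets and breaks the scheme, which is why production hash-based designs layer many one-time keys into a tree.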


Taylor Swift cites AI deepfakes in endorsement for Kamala Harris

11 September 2024 at 21:56
A screenshot of Taylor Swift's Kamala Harris Instagram post, captured on September 11, 2024. (credit: Taylor Swift / Instagram)

On Tuesday night, Taylor Swift endorsed Vice President Kamala Harris for US President on Instagram, citing concerns over AI-generated deepfakes as a key motivator. The artist's warning aligns with current trends in technology, especially in an era where AI synthesis models can easily create convincing fake images and videos.

"Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site," she wrote in her Instagram post. "It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth."

In August 2024, former President Donald Trump posted AI-generated images on Truth Social falsely suggesting Swift endorsed him, including a manipulated photo depicting Swift as Uncle Sam with text promoting Trump. The incident sparked Swift's fears about the spread of misinformation through AI.


Rogue WHOIS server gives researcher superpowers no one should ever have

11 September 2024 at 12:00
(credit: Aurich Lawson | Getty Images)

It’s not every day that a security researcher acquires the ability to generate counterfeit HTTPS certificates, track email activity, and the position to execute code of his choice on thousands of servers—all in a single blow that cost only $20 and a few minutes to land. But that’s exactly what happened recently to Benjamin Harris.

Harris, the CEO and founder of security firm watchTowr, did all of this by registering the domain dotmobiregistry.net. The domain was once the official home of the authoritative WHOIS server for .mobi, a top-level domain used to indicate that a website is optimized for mobile devices. At some point—it’s not clear precisely when—this WHOIS server, which acts as the official directory for every domain ending in .mobi, was relocated from whois.dotmobiregistry.net to whois.nic.mobi. While retreating to his hotel room during last month’s Black Hat security conference in Las Vegas, Harris noticed that the previous dotmobiregistry.net owners had allowed the domain to expire. He then scooped it up and set up his own .mobi WHOIS server there.
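Part of what made the takeover so easy is the protocol itself: WHOIS (RFC 3912) is just an unauthenticated TCP conversation on port 43. The client sends the query followed by CRLF, and the server writes its answer and closes the connection, so any client still pointed at the stale hostname hands its queries to whoever controls that domain. A minimal client sketch (illustrative, not watchTowr's tooling):

```python
import socket

def whois(domain: str, server: str, port: int = 43,
          timeout: float = 10.0) -> str:
    """Query a WHOIS server per RFC 3912: send 'domain\\r\\n', read
    until the server closes the connection."""
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall(domain.encode("idna") + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

# e.g. whois("example.mobi", "whois.nic.mobi") queries the *current*
# authoritative server; nothing in the protocol proves who is answering.
```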

Misplaced trust

To Harris’s surprise, his server received queries from slightly more than 76,000 unique IP addresses within a few hours of setting it up. Over five days, it received roughly 2.5 million queries from about 135,000 unique systems. The entities behind the systems querying his deprecated domain included a who’s who of Internet heavyweights comprising domain registrars, providers of online security tools, governments from the US and around the world, universities, and certificate authorities, the entities that issue browser-trusted TLS certificates that make HTTPS work.


Found: 280 Android apps that use OCR to steal cryptocurrency credentials

6 September 2024 at 22:23
(credit: Getty Images)

Researchers have discovered more than 280 malicious apps for Android that use optical character recognition to steal cryptocurrency wallet credentials from infected devices.

The apps masquerade as official ones from banks, government services, TV streaming services, and utilities. In fact, they scour infected phones for text messages, contacts, and all stored images and surreptitiously send them to remote servers controlled by the app developers. The apps are available from malicious sites and are distributed in phishing messages sent to targets. There’s no indication that any of the apps were available through Google Play.

A high level of sophistication

The most notable thing about the newly discovered malware campaign is that the threat actors behind it are employing optical character recognition software in an attempt to extract cryptocurrency wallet credentials that are shown in images stored on infected devices. Many wallets allow users to protect them with a series of random words. These mnemonic credentials are easier for most people to remember than the jumble of characters that appears in the private key. Words are also easier for humans to recognize in images.
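The flagging step such a campaign needs after running OCR is almost trivial: scan the recognized text for a run of 12 (or 24) consecutive words drawn from the wallet's recovery wordlist. A sketch of that screening logic, using a tiny invented stand-in for the real 2,048-word BIP-39-style list:

```python
# Stand-in wordlist; real wallets draw mnemonics from a fixed
# 2,048-word list, which makes seed phrases easy to spot in text.
WORDLIST = {
    "abandon", "ability", "zoo", "wisdom", "wrap", "garden", "ripple",
    "candy", "silver", "orbit", "lunar", "fabric", "mimic", "hazard",
}

def find_seed_phrase(ocr_text, length=12):
    """Return the first run of `length` consecutive wordlist words,
    or None if the OCR'd text contains no such run."""
    run = []
    for word in (w.lower() for w in ocr_text.split()):
        if word in WORDLIST:
            run.append(word)
            if len(run) == length:
                return run
        else:
            run = []  # any non-wordlist word breaks the run
    return None

note = ("reminder: buy milk abandon ability zoo wisdom wrap garden "
        "ripple candy silver orbit lunar fabric")
assert find_seed_phrase(note) == note.split()[3:]
assert find_seed_phrase("just a vacation photo caption") is None
```

The same check, run defensively, is one reason security advice says never to store seed-phrase screenshots on a phone.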


Nvidia’s AI chips are cheaper to rent in China than US

6 September 2024 at 20:31
(credit: VGG | Getty Images)

The cost of renting cloud services using Nvidia’s leading artificial intelligence chips is lower in China than in the US, a sign that the advanced processors are easily reaching the Chinese market despite Washington’s export restrictions.

Four small-scale Chinese cloud providers charge local tech groups roughly $6 an hour to use a server with eight Nvidia A100 processors in a base configuration, companies and customers told the Financial Times. Small cloud vendors in the US charge about $10 an hour for the same setup.

The low prices, according to people in the AI and cloud industry, are an indication of plentiful supply of Nvidia chips in China and the circumvention of US measures designed to prevent access to cutting-edge technologies.
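For scale, the quoted rates work out as follows (simple arithmetic on the figures above, assuming a 30-day month of continuous use):

```python
GPUS = 8                 # A100 processors per server in the base config
china_hr, us_hr = 6.00, 10.00  # quoted hourly server rates, USD

china_gpu_hr = china_hr / GPUS   # per GPU-hour in China
us_gpu_hr = us_hr / GPUS         # per GPU-hour in the US

hours_per_month = 30 * 24
monthly_gap = (us_hr - china_hr) * hours_per_month

print(f"${china_gpu_hr:.2f} vs ${us_gpu_hr:.2f} per GPU-hour")
print(f"Running one server flat-out costs ${monthly_gap:,.0f}/month less in China")
```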

