Yesterday — 19 September 2024

AI’s hungry maw drives massive $100B investment plan by Microsoft and BlackRock

19 September 2024 at 18:52
An illustration of two robot arms stacking gold coins. (credit: J Studios via Getty Images)

If you haven't noticed by now, Big Tech companies have been making plans to invest in the infrastructure necessary to deliver generative AI products like ChatGPT (and beyond) to hundreds of millions of people around the world. That push involves building more AI-accelerating chips, more data centers, and, in some cases, even new nuclear plants to power those data centers.

Along those lines, Microsoft, BlackRock, Global Infrastructure Partners (GIP), and MGX announced a massive new AI investment partnership on Tuesday called the Global AI Infrastructure Investment Partnership (GAIIP). The partnership initially aims to raise $30 billion in private equity capital, which could grow to $100 billion in total investment once debt financing is included.

The group will invest in data centers and supporting power infrastructure for AI development. "The capital spending needed for AI infrastructure and the new energy to power it goes beyond what any single company or government can finance," Microsoft President Brad Smith said in a statement.

Read 6 remaining paragraphs | Comments

Landmark AI deal sees Hollywood giant Lionsgate provide library for AI training

18 September 2024 at 22:10
An illustration of a filmstrip with a robot, horse, rocket, and whale. (credit: Benj Edwards / Malte Mueller via Getty Images)

On Wednesday, AI video synthesis firm Runway and entertainment company Lionsgate announced a partnership to create a new AI model trained on Lionsgate's vast film and TV library. The deal will feed Runway legally clear training data and will also reportedly provide Lionsgate with tools to enhance content creation while potentially reducing production costs.

Lionsgate, known for franchises like John Wick and The Hunger Games, sees AI as a way to boost efficiency in content production. Michael Burns, Lionsgate's vice chair, stated in a press release that AI could help develop "cutting edge, capital efficient content creation opportunities." He added that some filmmakers have shown enthusiasm about potential applications in pre- and post-production processes.

Runway plans to develop a custom AI model using Lionsgate's proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.

Read 7 remaining paragraphs | Comments

Before yesterday

Due to AI fakes, the “deep doubt” era is here

19 September 2024 at 22:00
A person writing. (credit: Memento | Aurich Lawson)

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind "deep doubt" isn't new, but its real-world impact is becoming increasingly apparent. Since the term "deepfake" first surfaced in 2017, we've seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump's baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried "AI" again over a photo of him with E. Jean Carroll, the writer who successfully sued him for sexual assault, because the photo contradicts his claim of never having met her.

Read 22 remaining paragraphs | Comments

Google seeks authenticity in the age of AI with new content labeling system

17 September 2024 at 22:07
Under C2PA, this stock image would be labeled as a real photograph if the camera used to take it, and the toolchain for retouching it, supported the C2PA. But even as a real photo, does it actually represent reality, and is there a technological solution to that problem? (credit: Smile via Getty Images)

On Tuesday, Google announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images. Over the coming months, the tech giant will integrate the Coalition for Content Provenance and Authenticity (C2PA) standard, a system designed to track the origin and editing history of digital content, into its search, ads, and potentially YouTube services. However, it's an open question whether a technological solution can address the ancient social issue of trust in recorded media produced by strangers.

A group of tech companies created the C2PA system beginning in 2019 in an attempt to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that it may be difficult for users to determine the authenticity of images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata about where images originate and how they've been modified.
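The provenance trail described above can be sketched in miniature. The snippet below is a toy illustration of the idea only, not the real C2PA format: the actual standard uses CBOR/JUMBF manifests and X.509 certificate chains, while every field name and the HMAC "authority" here are hypothetical stand-ins for a signed record of an asset's origin and edit history.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a real signing authority's key material.
SIGNING_KEY = b"demo-authority-key"

def sign_manifest(image_bytes, manifest):
    # Bind the manifest (origin + edit history) to the exact image content
    # by hashing the bytes and signing hash + manifest together.
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(manifest, sort_keys=True)
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify(image_bytes, manifest, signature):
    # Recompute the signature; any change to the image or its claimed
    # history produces a mismatch.
    return hmac.compare_digest(sign_manifest(image_bytes, manifest), signature)

image = b"\x89PNG...stand-in image bytes"
manifest = {
    "claim_generator": "ExampleCamera/1.0",          # hypothetical field names
    "assertions": [{"action": "c2pa.created", "when": "2024-09-17"}],
}
sig = sign_manifest(image, manifest)

print(verify(image, manifest, sig))            # True: trail is intact
print(verify(image + b"edit", manifest, sig))  # False: content was altered
```

The point of the design is the binding: the signature covers both the pixels and the claimed history, so neither can be swapped out without detection.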

Google will incorporate this C2PA standard into its search results, allowing users to see if an image was created or edited using AI tools. The tech giant's "About this image" feature in Google Search, Lens, and Circle to Search will display this information when available.

Read 9 remaining paragraphs | Comments

Ban warnings fly as users dare to probe the “thoughts” of OpenAI’s latest model

17 September 2024 at 00:49
An illustration of gears shaped like a brain. (credit: Andriy Onufriyenko via Getty Images)

OpenAI truly does not want you to know what its latest AI model is "thinking." Since the company launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe how the model works.

Unlike previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. When users ask an "o1" model a question in ChatGPT, users have the option of seeing this chain-of-thought process written out in the ChatGPT interface. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model.
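The two-model arrangement described above can be sketched as follows. This is purely an illustrative sketch based on the article's description; OpenAI has not published o1's internals, and every function name here is hypothetical.

```python
# Hypothetical sketch: a reasoning model emits a raw chain of thought plus an
# answer, and a second "filter" model produces the sanitized interpretation
# that users actually see. The raw chain of thought never leaves the pipeline.

def reasoning_model(question):
    # Stand-in for the o1-style model: returns (hidden reasoning, answer).
    raw_cot = f"Step 1: parse '{question}'. Step 2: recall facts. Step 3: conclude."
    answer = "42"
    return raw_cot, answer

def filter_model(raw_cot):
    # Stand-in for the second model: summarizes the hidden reasoning
    # without revealing it verbatim.
    steps = raw_cot.count("Step")
    return f"Reasoned through {steps} steps before answering."

def answer_user(question):
    raw_cot, answer = reasoning_model(question)
    shown_summary = filter_model(raw_cot)
    # Only the filtered summary and the answer reach the interface.
    return {"summary": shown_summary, "answer": answer}

print(answer_user("What is 6 x 7?"))
```

Under this arrangement, jailbreak attempts target the boundary between the two stages, trying to coax the raw chain of thought into the visible output.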

Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets. There have been early reports of some successes, but nothing has yet been strongly confirmed.

Read 10 remaining paragraphs | Comments

Omnipresent AI cameras will ensure good behavior, says Larry Ellison

16 September 2024 at 17:22
A colorized photo of CCTV cameras in London, 2024. (credit: Benj Edwards / Mike Kemp via Getty Images)

On Thursday, Oracle co-founder Larry Ellison shared his vision for an AI-powered surveillance future during a company financial meeting, reports Business Insider. During an investor Q&A, Ellison described a world where artificial intelligence systems would constantly monitor citizens through an extensive network of cameras and drones, stating this would ensure both police and citizens don't break the law.

Ellison, who briefly became the world's second-wealthiest person last week when his net worth surpassed Jeff Bezos', outlined a scenario where AI models would analyze footage from security cameras, police body cams, doorbell cameras, and vehicle dash cams.

"Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits of automated AI oversight and alerts when crime takes place. "We're going to have supervision," he continued. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report the problem and report it to the appropriate person."

Read 8 remaining paragraphs | Comments

Google rolls out voice-powered AI chat to the Android masses

13 September 2024 at 19:37
The Google Gemini logo. (credit: Google)

On Thursday, Google made Gemini Live, its voice-based AI chatbot feature, available for free to all Android users. The feature allows users to interact with Gemini through voice commands on their Android devices. That's notable because competitor OpenAI's Advanced Voice Mode feature of ChatGPT, which is similar to Gemini Live, has not yet fully shipped.

Google unveiled Gemini Live during its Pixel 9 launch event last month. Initially, the feature was exclusive to Gemini Advanced subscribers, but now it's accessible to anyone using the Gemini app or its overlay on Android.

Gemini Live enables users to ask questions aloud and even interrupt the AI's responses mid-sentence. Users can choose from several voice options for Gemini's responses, adding a level of customization to the interaction.

Read 4 remaining paragraphs | Comments

OpenAI’s new “reasoning” AI models are here: o1-preview and o1-mini

12 September 2024 at 21:01
An illustration of a strawberry made out of pixel-like blocks. (credit: Vlatko Gasparic via Getty Images)

OpenAI finally unveiled its rumored "Strawberry" AI language model on Thursday, claiming significant improvements in what it calls "reasoning" and problem-solving capabilities over previous large language models (LLMs). Formally named "OpenAI o1," the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and certain API users.

OpenAI claims that o1-preview outperforms its predecessor, GPT-4o, on multiple benchmarks, including competitive programming, mathematics, and "scientific reasoning." However, people who have used the model say it does not yet outclass GPT-4o in every metric. Other users have criticized the delay in receiving a response from the model, owing to the multi-step processing occurring behind the scenes before answering a query.

In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, "There's a lot of o1 hype on my feed, so I'm worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it'll only get better. (I'm personally psyched about the model's potential & trajectory!) what o1 isn't (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today's launch—but we're working to get there!"

Read 18 remaining paragraphs | Comments

My dead father is “writing” me notes again

12 September 2024 at 13:00
An AI-generated image featuring my late father's handwriting. (credit: Benj Edwards / Flux)

Growing up, if I wanted to experiment with something technical, my dad made it happen. We shared dozens of tech adventures together, but those adventures were cut short when he died of cancer in 2013. Thanks to a new AI image generator, it turns out that my dad and I still have one more adventure to go.

Recently, an anonymous AI hobbyist discovered that an image synthesis model called Flux can reproduce someone's handwriting very accurately if specially trained to do so. I decided to experiment with the technique using written journals my dad left behind. The results astounded me and raised deep questions about ethics, the authenticity of media artifacts, and the personal meaning behind handwriting itself.

Beyond that, I'm also happy that I get to see my dad's handwriting again. Captured by a neural network, part of him will live on in a dynamic way that was impossible a decade ago. It's been a while since he died, and I am no longer grieving. From my perspective, this is a celebration of something great about my dad—reviving the distinct way he wrote and what that conveys about who he was.

Read 43 remaining paragraphs | Comments

Taylor Swift cites AI deepfakes in endorsement for Kamala Harris

11 September 2024 at 21:56
A screenshot of Taylor Swift's Kamala Harris Instagram post, captured on September 11, 2024. (credit: Taylor Swift / Instagram)

On Tuesday night, Taylor Swift endorsed Vice President Kamala Harris for US President on Instagram, citing concerns over AI-generated deepfakes as a key motivator. The artist's warning reflects a real concern in an era when AI synthesis models can easily create convincing fake images and videos.

"Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site," she wrote in her Instagram post. "It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth."

In August 2024, former President Donald Trump posted AI-generated images on Truth Social falsely suggesting Swift endorsed him, including a manipulated photo depicting Swift as Uncle Sam with text promoting Trump. The incident sparked Swift's fears about the spread of misinformation through AI.

Read 1 remaining paragraph | Comments
