Claude AI to process secret government data through new Palantir deal
Anthropic has announced a partnership with Palantir and Amazon Web Services to bring its Claude AI models to unspecified US intelligence and defense agencies. Claude, a family of AI language models similar to those that power ChatGPT, will work within Palantir's platform using AWS hosting to process and analyze data. But some critics have called out the deal as contradictory to Anthropic's widely publicized "AI safety" aims.
On X, former Google co-head of AI ethics Timnit Gebru wrote of Anthropic's new deal with Palantir, "Look at how they care so much about 'existential risks to humanity.'"
The partnership makes Claude available within Palantir's Impact Level 6 environment (IL6), a defense-accredited system that handles data critical to national security up to the "secret" classification level. This move follows a broader trend of AI companies seeking defense contracts, with Meta offering its Llama models to defense partners and OpenAI pursuing closer ties with the Defense Department.
Can Language Models Really Understand? Study Uncovers Limits in AI Logic
ChatGPT has a new vanity domain name, and it may have cost $15 million
On Wednesday, OpenAI CEO Sam Altman simply tweeted "chat.com," announcing that the company had acquired the short domain name, which now points to the company's ChatGPT AI assistant when visited in a web browser. As of Thursday morning, "chatgpt.com" still hosts the chatbot, with the new domain serving as a redirect.
The new domain name comes with an interesting backstory that reveals a multimillion-dollar transaction. HubSpot founder and CTO Dharmesh Shah purchased chat.com for $15.5 million in early 2023, The Verge reports. Shah sold the domain to OpenAI for an undisclosed sum; he noted on X that he "doesn't like profiting off of people he considers friends" and revealed that he was paid in company shares, making him "now an investor in OpenAI."
As The Verge's Kylie Robison points out, Shah originally bought the domain to promote conversational interfaces. "The reason I bought chat.com is simple: I think Chat-based UX (#ChatUX) is the next big thing in software. Communicating with computers/software through a natural language interface is much more intuitive. This is made possible by Generative A.I.," Shah wrote in a LinkedIn post during his brief ownership.
Trump plans to dismantle Biden AI safeguards after victory
Early Wednesday morning, Donald Trump became the presumptive winner of the 2024 US presidential election, setting the stage for dramatic changes to federal AI policy when he takes office early next year. Among them, Trump has stated he plans to dismantle President Biden's AI Executive Order from October 2023 immediately upon taking office.
Biden's order established wide-ranging oversight of AI development. Among its core provisions, the order established the US AI Safety Institute (AISI) and lays out requirements for companies to submit reports about AI training methodologies and security measures, including vulnerability testing data. The order also directed the Commerce Department's National Institute of Standards and Technology (NIST) to develop guidance to help companies identify and fix flaws in their AI models.
Trump supporters in the US government have criticized the measures, as TechCrunch points out. In March, Representative Nancy Mace (R-S.C.) warned that reporting requirements could discourage innovation and prevent developments like ChatGPT. And Senator Ted Cruz (R-Texas) characterized NIST's AI safety standards as an attempt to control speech through "woke" safety requirements.
Anthropic's Haiku 3.5 surprises experts with an "intelligence" price increase
On Monday, Anthropic launched the latest version of its smallest AI model, Claude 3.5 Haiku, in a way that marks a departure from typical AI model pricing trends: the new model costs four times more to run than its predecessor. Anthropic's stated reason for the price increase, more smarts, is drawing pushback in the AI community.
"During final testing, Haiku surpassed Claude 3 Opus, our previous flagship model, on many benchmarksβat a fraction of the cost," Anthropic wrote in a post on X. "As a result, we've increased pricing for Claude 3.5 Haiku to reflect its increase in intelligence."
"It's your budget model that's competing against other budget models, why would you make it less competitive," wrote one X user. "People wanting a 'too cheap to meter' solution will now look elsewhere."
Downey Jr. plans to fight AI re-creations from beyond the grave
Robert Downey Jr. has declared that he will sue any future Hollywood executives who try to re-create his likeness using AI digital replicas, as reported by Variety. His comments came during an appearance on the "On With Kara Swisher" podcast, where he discussed AI's growing role in entertainment.
"I intend to sue all future executives just on spec," Downey told Swisher when discussing the possibility of studios using AI or deepfakes to re-create his performances after his death. When Swisher pointed out he would be deceased at the time, Downey responded that his law firm "will still be very active."
The Oscar winner expressed confidence that Marvel Studios would not use AI to re-create his Tony Stark character, citing his trust in decision-makers there. "I am not worried about them hijacking my character's soul because there's like three or four guys and gals who make all the decisions there anyway and they would never do that to me," he said.
Google CEO says over 25% of new Google code is generated by AI
On Tuesday, Google CEO Sundar Pichai revealed that AI systems now generate more than a quarter of new code for the company's products, with human programmers overseeing the computer-generated contributions. The statement, made during Google's Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development.
"We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency," Pichai said during the call. "Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."
Google developers aren't the only programmers using AI to assist with coding tasks. It's difficult to get hard numbers, but according to Stack Overflow's 2024 Developer Survey, over 76 percent of all respondents "are using or are planning to use AI tools in their development process this year," with 62 percent actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers are "already using AI coding tools both in and outside of work."
Hospitals adopt error-prone AI transcription tools despite warnings
On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a "confabulation" or "hallucination" in the AI field.
Upon its release in 2022, OpenAI claimed that Whisper approached "human level robustness" in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.
The fabrications pose particular risks in health care settings. Despite OpenAI's warnings against using Whisper for "high-risk domains," over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children's Hospital Los Angeles count among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.
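One practical response to reports like these is to screen transcripts for low-confidence segments before anyone acts on them. The sketch below is a naive screen, not a method described in the article: the field names (`avg_logprob`, `no_speech_prob`) match the per-segment metadata returned by the open-source `whisper` package's `transcribe()`, but the thresholds are arbitrary assumptions, and a screen like this cannot catch fluent confabulations that the model emits with high confidence.

```python
# Hedged sketch: flag Whisper segments whose confidence metadata looks suspect.
# Thresholds are illustrative assumptions; this is not a validated detector.

def flag_suspect_segments(segments, logprob_floor=-1.0, no_speech_ceiling=0.6):
    """Return segments whose metadata suggests possible fabrication:
    unusually low average log-probability, or high no-speech probability
    (text emitted where the audio was likely silent)."""
    return [
        s for s in segments
        if s["avg_logprob"] < logprob_floor or s["no_speech_prob"] > no_speech_ceiling
    ]

# Synthetic example segments; real ones come from whisper's transcribe().
segments = [
    {"text": "Patient reports mild headache.", "avg_logprob": -0.2, "no_speech_prob": 0.01},
    {"text": "Administer 500mg immediately.", "avg_logprob": -1.4, "no_speech_prob": 0.02},
    {"text": "(spurious text over silence)", "avg_logprob": -0.5, "no_speech_prob": 0.90},
]
print(len(flag_suspect_segments(segments)))  # 2 of the 3 synthetic segments are flagged
```

In a high-stakes setting such as medical transcription, flagged segments would still need human review against the original audio; the AP's reporting suggests metadata alone is not a reliable safeguard.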
When Human-AI Teams Thrive and When They Don't
AI Predicts Chemical Compounds for Dual-Target Medications
At TED AI 2024, experts grapple with AI's growing pains
On Tuesday, TED AI 2024 kicked off its first day at San Francisco's Herbst Theatre with a lineup of speakers who tackled AI's impact on science, art, and society. The two-day event brought together a mix of researchers, entrepreneurs, lawyers, and other experts who painted a complex picture of AI with fairly minimal hype.
The second annual conference, organized by Walter and Sam De Brouwer, marked a notable shift from last year's broad existential debates and proclamations of AI as being "the new electricity." Rather than sweeping predictions about, say, looming artificial general intelligence (although there was still some of that, too), speakers mostly focused on immediate challenges: battles over training data rights, proposals for hardware-based regulation, debates about human-AI relationships, and the complex dynamics of workplace adoption.
The day's sessions covered a wide breadth of AI topics: physicist Carlo Rovelli explored consciousness and time, Project CETI researcher Patricia Sharma demonstrated attempts to use AI to decode whale communication, Recording Academy CEO Harvey Mason Jr. outlined music industry adaptation strategies, and even a few robots made appearances.