
Today — 9 November 2024

AI Tool Reveals Long COVID May Affect 23% of People

9 November 2024 at 16:16
A new AI tool identified long COVID in 22.8% of patients, a much higher rate than previously diagnosed. By analyzing extensive health records from nearly 300,000 patients, the algorithm identifies long COVID by distinguishing symptoms linked specifically to SARS-CoV-2 infection rather than pre-existing conditions. This AI approach, known as "precision phenotyping," helps clinicians differentiate long COVID symptoms from other health issues and may improve diagnostic accuracy by about 3%.

Claude AI to process secret government data through new Palantir deal

8 November 2024 at 23:08

Anthropic has announced a partnership with Palantir and Amazon Web Services to bring its Claude AI models to unspecified US intelligence and defense agencies. Claude, a family of AI language models similar to those that power ChatGPT, will work within Palantir's platform using AWS hosting to process and analyze data. But some critics have called out the deal as contradictory to Anthropic's widely-publicized "AI safety" aims.

On X, former Google co-head of AI ethics Timnit Gebru wrote of Anthropic's new deal with Palantir, "Look at how they care so much about 'existential risks to humanity.'"

The partnership makes Claude available within Palantir's Impact Level 6 environment (IL6), a defense-accredited system that handles data critical to national security up to the "secret" classification level. This move follows a broader trend of AI companies seeking defense contracts, with Meta offering its Llama models to defense partners and OpenAI pursuing closer ties with the Defense Department.

© Yuichiro Chino via Getty Images

Before yesterday

Can Language Models Really Understand? Study Uncovers Limits in AI Logic

7 November 2024 at 16:59
A recent study questions whether large language models (LLMs) truly form coherent world models, despite their accurate outputs in complex tasks like generating directions or playing games. Researchers found that while LLMs provide nearly flawless driving directions, they break down when routes change unexpectedly, suggesting the models don't grasp the underlying rules.

ChatGPT has a new vanity domain name, and it may have cost $15 million

7 November 2024 at 16:32

On Wednesday, OpenAI CEO Sam Altman merely tweeted "chat.com," announcing that the company had acquired the short domain name, which now points to the company's ChatGPT AI assistant when visited in a web browser. As of Thursday morning, "chatgpt.com" still hosts the chatbot, with the new domain serving as a redirect.

The new domain name comes with an interesting backstory involving a multimillion-dollar transaction. HubSpot founder and CTO Dharmesh Shah purchased chat.com for $15.5 million in early 2023, The Verge reports. Shah sold the domain to OpenAI for an undisclosed amount, though he said on X that he "doesn't like profiting off of people he considers friends" and hinted that he was paid in company shares, revealing he is "now an investor in OpenAI."

As The Verge's Kylie Robison points out, Shah originally bought the domain to promote conversational interfaces. "The reason I bought chat.com is simple: I think Chat-based UX (#ChatUX) is the next big thing in software. Communicating with computers/software through a natural language interface is much more intuitive. This is made possible by Generative A.I.," Shah wrote in a LinkedIn post during his brief ownership.

© OpenAI / Benj Edwards

Trump plans to dismantle Biden AI safeguards after victory

6 November 2024 at 22:18

Early Wednesday morning, Donald Trump became the presumptive winner of the 2024 US presidential election, setting the stage for dramatic changes to federal AI policy when he takes office early next year. Chief among them: Trump has stated he plans to dismantle President Biden's AI Executive Order from October 2023 immediately upon taking office.

Biden's order established wide-ranging oversight of AI development. Among its core provisions, the order created the US AI Safety Institute (AISI) and laid out requirements for companies to submit reports about AI training methodologies and security measures, including vulnerability testing data. The order also directed the Commerce Department's National Institute of Standards and Technology (NIST) to develop guidance to help companies identify and fix flaws in their AI models.

Trump supporters in the US government have criticized the measures, as TechCrunch points out. In March, Representative Nancy Mace (R-S.C.) warned that reporting requirements could discourage innovation and prevent developments like ChatGPT. And Senator Ted Cruz (R-Texas) characterized NIST's AI safety standards as an attempt to control speech through "woke" safety requirements.

© Anadolu via Getty Images

Anthropic's Haiku 3.5 surprises experts with an "intelligence" price increase

5 November 2024 at 23:50

On Monday, Anthropic launched the latest version of its smallest AI model, Claude 3.5 Haiku, in a way that marks a departure from typical AI model pricing trends: the new model costs four times more to run than its predecessor. The reason for the price increase is causing some pushback in the AI community: more smarts, according to Anthropic.

"During final testing, Haiku surpassed Claude 3 Opus, our previous flagship model, on many benchmarks—at a fraction of the cost," Anthropic wrote in a post on X. "As a result, we've increased pricing for Claude 3.5 Haiku to reflect its increase in intelligence."

"It's your budget model that's competing against other budget models, why would you make it less competitive," wrote one X user. "People wanting a 'too cheap to meter' solution will now look elsewhere."

© Anthropic

Downey Jr. plans to fight AI re-creations from beyond the grave

30 October 2024 at 19:53

Robert Downey Jr. has declared that he will sue any future Hollywood executives who try to re-create his likeness using AI digital replicas, as reported by Variety. His comments came during an appearance on the "On With Kara Swisher" podcast, where he discussed AI's growing role in entertainment.

"I intend to sue all future executives just on spec," Downey told Swisher when discussing the possibility of studios using AI or deepfakes to re-create his performances after his death. When Swisher pointed out he would be deceased at the time, Downey responded that his law firm "will still be very active."

The Oscar winner expressed confidence that Marvel Studios would not use AI to re-create his Tony Stark character, citing his trust in decision-makers there. "I am not worried about them hijacking my character's soul because there's like three or four guys and gals who make all the decisions there anyway and they would never do that to me," he said.

© Ilya S. Savenok via Getty Images

Google CEO says over 25% of new Google code is generated by AI

30 October 2024 at 16:50

On Tuesday, Google CEO Sundar Pichai revealed that AI systems now generate more than a quarter of new code for the company's products, with human programmers overseeing the computer-generated contributions. The statement, made during Google's Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development.

"We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency," Pichai said during the call. "Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."

Google developers aren't the only programmers using AI to assist with coding tasks. It's difficult to get hard numbers, but according to Stack Overflow's 2024 Developer Survey, over 76 percent of all respondents "are using or are planning to use AI tools in their development process this year," with 62 percent actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers are "already using AI coding tools both in and outside of work."

© Matthias Ritzmann via Getty Images

Hospitals adopt error-prone AI transcription tools despite warnings

28 October 2024 at 19:23

On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than a dozen software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a "confabulation" or "hallucination" in the AI field.

Upon its release in 2022, OpenAI claimed that Whisper approached "human level robustness" in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.

The fabrications pose particular risks in health care settings. Despite OpenAI's warnings against using Whisper for "high-risk domains," over 30,000 medical workers now use Whisper-based tools to transcribe patient visits, according to the AP report. The Mankato Clinic in Minnesota and Children's Hospital Los Angeles count among 40 health systems using a Whisper-powered AI copilot service from medical tech company Nabla that is fine-tuned on medical terminology.

© Kobus Louw via Getty Images

When Human-AI Teams Thrive and When They Don't

28 October 2024 at 18:15
A new study reveals that while human-AI collaboration can be powerful, it depends on the task. Analysis of hundreds of studies found that AI outperformed human-AI teams in decision-making tasks, while collaborative teams excelled in creative tasks like content generation. This research suggests organizations may overestimate the benefits of human-AI synergy. Instead, strategic use of AI's strengths in data processing and humans' creativity may yield the best results.

AI Predicts Chemical Compounds for Dual-Target Medications

24 October 2024 at 15:16
Researchers have developed an AI system that predicts chemical compounds capable of targeting two proteins simultaneously, potentially creating more effective medications. By training the AI with a chemical language model, it was able to generate novel molecular structures with dual-target activity, an essential feature for treating complex diseases like cancer.

At TED AI 2024, experts grapple with AI's growing pains

24 October 2024 at 00:32

SAN FRANCISCO—On Tuesday, TED AI 2024 kicked off its first day at San Francisco's Herbst Theater with a lineup of speakers that tackled AI's impact on science, art, and society. The two-day event brought a mix of researchers, entrepreneurs, lawyers, and other experts who painted a complex picture of AI with fairly minimal hype.

The second annual conference, organized by Walter and Sam De Brouwer, marked a notable shift from last year's broad existential debates and proclamations of AI as being "the new electricity." Rather than sweeping predictions about, say, looming artificial general intelligence (although there was still some of that, too), speakers mostly focused on immediate challenges: battles over training data rights, proposals for hardware-based regulation, debates about human-AI relationships, and the complex dynamics of workplace adoption.

The day's sessions covered a wide breadth of AI topics: physicist Carlo Rovelli explored consciousness and time, Project CETI researcher Patricia Sharma demonstrated attempts to use AI to decode whale communication, Recording Academy CEO Harvey Mason Jr. outlined music industry adaptation strategies, and even a few robots made appearances.

© Benj Edwards

How AI is Reshaping Human Thought and Decision-Making

22 October 2024 at 18:43
A new study introduces "System 0," a cognitive framework where artificial intelligence (AI) enhances human thinking by processing vast data, complementing our natural intuition (System 1) and analytical thinking (System 2). However, this external thinking system poses risks, such as over-reliance on AI and a potential loss of cognitive autonomy.

AI-Written Stories Rated Lower Due to Bias, Not Quality

22 October 2024 at 15:08
New research shows that stories generated by AI, such as ChatGPT, are almost as good as those written by humans. However, when people are told a story is AI-generated, they rate it more negatively, revealing a bias against AI-created content.

AI Uncovers DNA Variants Linked to Psychiatric Disorders

21 October 2024 at 23:05
Researchers developed an AI algorithm, ARC-SV, to detect complex structural variants in the human genome that previous methods missed. Analyzing over 4,000 genomes, researchers discovered thousands of complex variants, many affecting brain-related genes and linked to schizophrenia and bipolar disorder.

People Empathize with Bullied AI Bots

17 October 2024 at 20:28
People empathize with AI bots excluded from a virtual game, treating them like social beings in need of fairness. Participants favored giving the AI bot a fair chance in play, with older adults showing a stronger inclination to rectify the perceived unfairness.

Vulnerability Found in AI Image Recognition

15 October 2024 at 16:59
A new study reveals a vulnerability in AI image recognition systems due to their exclusion of the alpha channel, which controls image transparency. Researchers developed "AlphaDog," an attack method that manipulates transparency in images, allowing hackers to distort visuals like road signs or medical scans in ways undetectable by AI. Tested across 100 AI models, AlphaDog exploits this transparency flaw, posing significant risks to road safety and healthcare diagnostics.
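The general class of flaw the study describes can be sketched in a few lines. This is a hypothetical illustration of the underlying idea, not the researchers' AlphaDog code: a pipeline that simply discards the alpha channel "sees" color data that a human viewer never does, because the viewer's screen composites fully transparent pixels over the background.

```python
import numpy as np

# Hypothetical example: a tiny RGBA image whose RGB channels carry a
# payload hidden behind fully transparent pixels.
rgba = np.zeros((2, 2, 4), dtype=np.uint8)
rgba[..., :3] = 200  # payload in the color channels
rgba[..., 3] = 0     # alpha = 0: fully transparent to a viewer

def human_view(img):
    """Composite over a white background, as a viewer's screen would."""
    alpha = img[..., 3:4].astype(float) / 255.0
    return (img[..., :3] * alpha + 255 * (1 - alpha)).astype(np.uint8)

def naive_model_input(img):
    """A preprocessing step that simply drops the alpha channel."""
    return img[..., :3]

print(human_view(rgba)[0, 0])         # a person sees blank white
print(naive_model_input(rgba)[0, 0])  # the model sees the hidden payload
```

An attacker who controls the hidden RGB values can thus feed a recognition model content (an altered road sign, a modified scan) that looks innocuous to any human reviewing the same file.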

AI-Enhanced MRIs Show Potential for Brain Abnormality Detection

15 October 2024 at 16:37
Researchers have developed a machine learning model that upgrades 3T MRI images to mimic the higher-resolution 7T MRI, providing enhanced detail for detecting brain abnormalities. The synthetic 7T images reveal finer features, such as white matter lesions and subcortical microbleeds, which are often difficult to see with standard MRI systems. This AI-driven approach could improve diagnostic accuracy for conditions like traumatic brain injury (TBI) and multiple sclerosis (MS), though clinical validation is needed before wider use.

Integrating Machine Learning Boosts Disease Prediction Accuracy

15 October 2024 at 15:44
A recent review explored how integrating machine learning with traditional statistical models can enhance disease risk prediction accuracy, a key tool in clinical decision-making. While traditional models like logistic regression are limited by certain assumptions, machine learning offers flexibility but has inconsistent results in some cases. The study revealed that combined models, especially stacking methods, outperform individual methods by harnessing each approach's strengths and addressing their weaknesses.
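As a concrete sketch of the stacking idea the review describes (on synthetic data with scikit-learn, not the study's actual models or cohort), a meta-learner can combine a traditional logistic regression with a machine learning model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a disease-risk dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),        # traditional statistical model
        ("forest", RandomForestClassifier(random_state=0)),  # machine learning model
    ],
    final_estimator=LogisticRegression(),  # meta-learner weighs both base models
)
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.2f}")
```

The meta-learner is trained on cross-validated predictions from the base models, which is what lets stacking exploit the strengths of each approach rather than just averaging them.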