
OpenAI loses another lead safety researcher, Lilian Weng

Another one of OpenAI’s lead safety researchers, Lilian Weng, announced on Friday that she is departing the startup. Weng had served as VP of research and safety since August and, before that, was head of OpenAI’s safety systems team. In a post on X, Weng said that “after 7 years at OpenAI, I feel ready […]


ChatGPT told 2M people to get their election news elsewhere — and rejected 250K deepfakes

Now that the election is over, the dissection can begin. As this is the first election in which AI chatbots played a significant part in voters’ information diets, even approximate numbers are interesting to think about. For instance, OpenAI has stated that it told around 2 million users of ChatGPT to go look somewhere else. […]


Claude AI to process secret government data through new Palantir deal

Anthropic has announced a partnership with Palantir and Amazon Web Services to bring its Claude AI models to unspecified US intelligence and defense agencies. Claude, a family of AI language models similar to those that power ChatGPT, will work within Palantir's platform using AWS hosting to process and analyze data. But some critics have called out the deal as contradictory to Anthropic's widely-publicized "AI safety" aims.

On X, former Google co-head of AI ethics Timnit Gebru wrote of Anthropic's new deal with Palantir, "Look at how they care so much about 'existential risks to humanity.'"

The partnership makes Claude available within Palantir's Impact Level 6 environment (IL6), a defense-accredited system that handles data critical to national security up to the "secret" classification level. This move follows a broader trend of AI companies seeking defense contracts, with Meta offering its Llama models to defense partners and OpenAI pursuing closer ties with the Defense Department.


ChatGPT has a new vanity domain name, and it may have cost $15 million

On Wednesday, OpenAI CEO Sam Altman merely tweeted "chat.com," announcing that the company had acquired the short domain name, which now points to the company's ChatGPT AI assistant when visited in a web browser. As of Thursday morning, "chatgpt.com" still hosts the chatbot, with the new domain serving as a redirect.
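For the curious, the redirect is easy to verify yourself. Here is a minimal Python sketch (my illustration, not anything from OpenAI or the article) that follows the HTTP redirect chain hop by hop:

```python
import requests

# Request chat.com and let requests follow any redirects it returns.
resp = requests.get("https://chat.com", allow_redirects=True, timeout=10)

# resp.history holds each intermediate 3xx response, in order.
for hop in resp.history:
    print(hop.status_code, hop.url)

# Per the article, the chain was expected to end at chatgpt.com.
print("final:", resp.status_code, resp.url)
```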

The new domain name comes with an interesting backstory that reveals a multimillion-dollar transaction. HubSpot founder and CTO Dharmesh Shah purchased chat.com for $15.5 million in early 2023, The Verge reports. Shah sold the domain to OpenAI for an undisclosed amount, though he said on X that he "doesn't like profiting off of people he considers friends," and he hinted that he was paid in company shares by revealing he is "now an investor in OpenAI."

As The Verge's Kylie Robison points out, Shah originally bought the domain to promote conversational interfaces. "The reason I bought chat.com is simple: I think Chat-based UX (#ChatUX) is the next big thing in software. Communicating with computers/software through a natural language interface is much more intuitive. This is made possible by Generative A.I.," Shah wrote in a LinkedIn post during his brief ownership.


Apple iOS 18.2 public beta arrives with new AI features, but some remain waitlisted

Apple has released the AI-powered version of its latest mobile operating system, iOS 18.2, to its public beta users. The update includes new features like an AI emoji generator app called Genmoji, an Image Playground AI image app, ChatGPT integration with Siri, and visual search using the iPhone 16 cameras, among other things. Previously, these […]


The other election night winner: Perplexity

On Tuesday, two AI startups tried convincing the world their AI chatbots were good enough to be an accurate, real-time source of information during a high-stakes presidential election: xAI and Perplexity. Elon Musk’s Grok failed almost instantly, offering wrong answers about races’ outcomes before the polls had even closed. On the other hand, Perplexity offered […]


OpenAI acquired Chat.com

OpenAI bought Chat.com, adding to its collection of high-profile domain names. As of this morning, Chat.com now redirects to OpenAI’s AI-powered chatbot, ChatGPT. An OpenAI spokesperson confirmed the acquisition via email. Chat.com is one of the older domains on the web, having been registered in September 1996. Last year, it was reported that HubSpot co-founder […]


Anthropic’s Haiku 3.5 surprises experts with an “intelligence” price increase

On Monday, Anthropic launched the latest version of its smallest AI model, Claude 3.5 Haiku, in a way that marks a departure from typical AI model pricing trends—the new model costs four times more to run than its predecessor. The reason for the price increase is causing some pushback in the AI community: more smarts, according to Anthropic.

"During final testing, Haiku surpassed Claude 3 Opus, our previous flagship model, on many benchmarks—at a fraction of the cost," Anthropic wrote in a post on X. "As a result, we've increased pricing for Claude 3.5 Haiku to reflect its increase in intelligence."

"It's your budget model that's competing against other budget models, why would you make it less competitive," wrote one X user. "People wanting a 'too cheap to meter' solution will now look elsewhere."


Beyond AI Detection: Rethinking Our Approach to Preserving Academic Integrity

An expert shares insight and guidance into an area of growing concern. 

GUEST COLUMN | by Jordan Adair

Artificial intelligence (AI) in higher education continues to expand into more aspects of student learning. Initially, some administrators and faculty pointed to possible data privacy or ethical concerns with AI, but the larger focus now is how generative AI, such as ChatGPT and Google Gemini, makes it easier for students to submit work or assessments that lack original content. 

As AI adoption and academic concerns grow, educators may need to rethink how students learn, how students demonstrate understanding of a topic, and how assessments are designed and administered to measure learning and practical application. This may require institutions to throw out the “business-as-usual” approach, especially when it comes to anything involving writing, whether it’s essays or online exams. 


As higher education institutions look to maintain academic integrity, staying ahead of how students use AI is critical. Some tools exist to detect and monitor AI use, but are these tools fixing a problem or leaving a void? 

Getting Ahead of the Game

Institutions should familiarize themselves with the potential of large language models in education and open transparent communication channels to discuss AI with stakeholders, including researchers and IT support. This can help set a baseline for potential policies or actions.

Developing a dedicated committee may be beneficial as institutions create and implement new policies and guidelines for using AI tools, develop training and resources for students, faculty, and staff on academic integrity, and encourage the responsible use of AI in education.

Unlike contract cheating, using AI tools isn’t automatically unethical. On the contrary, as AI will permeate society and professions in the near future, there’s a need to discuss the right and wrong ways to leverage AI as part of the academic experience.

Some AI tools, especially chatbots like ChatGPT, present specific academic integrity challenges. While institutions strive to equip students for an AI-driven future, they also need to ensure that AI doesn’t compromise the integrity of the educational experience. 

Study Results Paint a Grim Picture

As AI evolves and is adopted more broadly, colleges and universities are exploring how to implement better detection methods effectively. While some existing detection tools show promise, they all struggle to identify AI-generated writing accurately.

AI detection and plagiarism detection are similar but distinct: both aim to flag unoriginal content, but they look for different signals. AI detection analyzes writing patterns, like word choice and sentence structure, to identify AI-generated text. Plagiarism detection compares text against huge databases to identify copied or paraphrased content from other sources.
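As a toy illustration of the “writing patterns” idea, the sketch below (my own, not any vendor’s method) computes two crude stylometric signals, average word length and sentence-length variety. Real detectors use far richer models, and low variety alone proves nothing:

```python
import re
import statistics

def style_features(text: str) -> dict:
    """Compute two crude stylometric signals: average word length and the
    spread of sentence lengths. A narrow spread is one surface pattern
    sometimes associated with AI-generated prose."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    sentence_lengths = [len(s.split()) for s in sentences]
    return {
        "avg_word_length": statistics.mean(len(w.strip(".,!?")) for w in words),
        "sentence_length_spread": statistics.pstdev(sentence_lengths),
    }

print(style_features("The cat sat. It purred loudly. Then it slept all afternoon."))
```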

A growing body of research raises strong concerns about these tools’ inability to detect AI-generated text. One study tested the largest commercial plagiarism and AI detection tool against ChatGPT-generated text and found that the tool reliably flags unaltered output as AI-generated. However, after Quillbot paraphrased the text once, the detection score dropped to 31%, and after two rephrases it fell to 0%. A 2024 experiment with the same AI detection software showed the same pattern: it can accurately detect unaltered AI content but struggles once tools like Quillbot make changes. That experiment also showed the detector failing entirely, with 0% success, against AI content that had been altered by tools designed to humanize AI-generated text. 

In another instance, a recent International Journal for Educational Integrity study tested 14 AI detection tools—12 publicly available and two commercial—against ChatGPT:

  • AI detection tools are inaccurate: they often mistakenly identify AI-generated text as human-written and struggle to detect AI content translated from other languages.
  • Manually editing responses reduces detection accuracy: swapping words, reordering sentences, and paraphrasing all lowered the tools’ scores.

Finally, a 2023 study titled “Will ChatGPT Get You Caught? Rethinking of Plagiarism Detection” fed 50 ChatGPT-generated essays into two text-matching systems from the largest and best-known plagiarism detection vendor. The submitted essays “demonstrated a remarkable level of originality stirring up alarms of the reliability of plagiarism check software used by academia.”

AI chatbots are improving at writing, and more effective prompts help them generate more human-like content. In the examples above, AI detection tools ranging from the biggest commercial offerings to free options were tested against various content types, including long-form essays and short-form assignments across different subjects and domains. Regardless of vendor size or content type, they all struggled to detect AI. While AI detection tools can help as a high-level gut check, these studies show they remain largely ineffective.

Up the Ante Against Cheating

Given the ineffectiveness of AI detection tools, academic institutions must consider alternative methods to curb AI usage and protect integrity.

One option is to consider a modified approach to written assignments and essays. Instead of traditional written assessments, try scaffolded assignments that require students to build on one subject across a series of submissions. You can also ask students to share their opinions on specific class discussions or request that they cite examples from class. 

Another option is instructing students to review an article or a case study. Then, ask them to reply to specific questions that require them to think critically and integrate their opinions and reasoning. Doing this makes it challenging to use AI content tools because they do not have enough context to formulate a usable response.

Institutions can also proctor written assignments like an online exam. This helps to block AI usage and removes access to outside help from phones. Proctoring can be very flexible, allowing access to specific approved sites, such as case studies and research articles, while blocking everything else.
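As a rough illustration of that allowlist model (the hosts and logic below are hypothetical, not any specific proctoring product’s implementation):

```python
from urllib.parse import urlparse

# Hypothetical approved resources for one exam; everything else is blocked.
APPROVED_HOSTS = {"www.jstor.org", "scholar.google.com"}

def is_allowed(url: str) -> bool:
    """Permit a request only if its host is on the exam's allowlist."""
    return urlparse(url).hostname in APPROVED_HOSTS

print(is_allowed("https://scholar.google.com/scholar?q=case+study"))  # True
print(is_allowed("https://chat.openai.com/"))                         # False
```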

Protecting Academic Integrity

If proctoring is being used, consider a hybrid proctoring solution that combines AI, human review, and a secure browser rather than just one of those methods. Hybrid proctoring uses AI to monitor each test taker and alert a live proctor if potential misconduct is detected. Once alerted, the proctor reviews the situation and only intervenes if misconduct is suspected. Otherwise, the test taker isn’t interrupted. This smarter proctoring approach delivers a much less intimidating, noninvasive testing experience compared with human-only platforms.
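In code terms, that escalation logic might look something like the sketch below (the names and threshold are illustrative assumptions, not Honorlock’s actual system):

```python
from dataclasses import dataclass

@dataclass
class ProctorEvent:
    test_taker_id: str
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (likely misconduct), from the AI monitor

# Only events above this hypothetical threshold reach a live proctor.
ESCALATION_THRESHOLD = 0.8

def handle_event(event: ProctorEvent) -> str:
    """Escalate high-risk events to a human; leave everyone else alone."""
    if event.risk_score >= ESCALATION_THRESHOLD:
        return f"alert proctor: review {event.test_taker_id} ({event.description})"
    return "no action: test taker is not interrupted"

print(handle_event(ProctorEvent("tt-101", "second face detected on camera", 0.92)))
print(handle_event(ProctorEvent("tt-102", "brief glance away from screen", 0.15)))
```

The threshold is what keeps the experience noninvasive: most test takers never trigger an alert, so no human ever watches them directly.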

Preserving the integrity of exams and protecting the reputation of faculty and institutions is incredibly important to continue attracting high-potential students. AI tools are here to stay; schools don’t need to stay ahead of them. Instead, understand how students use AI, modify how learning is delivered, use AI to your benefit when possible, and create clear and consistent policies so students understand how and where they can ethically leverage the latest in AI.  

Jordan Adair is VP of Product at Honorlock. Jordan began his career in education as an elementary and middle school teacher. After transitioning into educational technology, he became focused on delivering products designed to empower instructors and improve the student experience. Connect with Jordan on LinkedIn. 


ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies. That growth has propelled OpenAI itself into […]


iOS 18.2 developer beta adds ChatGPT and image-generation features

Today, Apple released the first developer beta of iOS 18.2 for supported devices. This beta release marks the first time several key AI features that Apple teased at its developer conference this June are available.

Apple is marketing a wide range of generative AI features under the banner "Apple Intelligence." Initially, Apple Intelligence was planned for release as part of iOS 18, but some features slipped to iOS 18.1, others to iOS 18.2, and a few to future, undisclosed software updates.

iOS 18.1 has been in beta for a while and includes improvements to Siri, generative writing tools that help with rewriting or proofreading, smart replies for Messages, and notification summaries. That update is expected to reach the public next week.


AI Admissions Essays Align with Privileged Male Writing Patterns

Researchers analyzed AI-generated and human-written college admissions essays, finding that AI-generated essays resemble those written by male students from privileged backgrounds. AI essays tended to use longer words and exhibited less variety in writing style than human essays, particularly resembling essays from private school applicants. The study highlights concerns about the use of AI in crafting admissions essays, as AI may dilute a student’s authentic voice. Students are encouraged to use AI as a tool to enhance, not replace, their personal narrative in writing.

Scarlett Johansson vs. OpenAI: did the maker of ChatGPT use her voice without permission?

Scarlett Johansson. According to her, OpenAI used her voice without permission.

Maybe you’ve seen it at some point: the film Her, about an AI that ends up in a relationship with a man. In the future, you’ll be able to talk to an AI’s voice in real life, too. At least, that’s what OpenAI had in mind. Last year, the company behind ChatGPT introduced a voice assistant with a voice that seems inspired by that of Scarlett Johansson, who voiced the AI in the film. But as it turns out, Scarlett Johansson never gave permission for that.

The Sky voice has existed since last year but drew more attention during a demo last week, in which the voice assistant was shown telling a bedtime story. The AI opens with a female voice that sounds suspiciously like the AI’s voice in Her, which was voiced by Johansson. That impression was reinforced by a post from OpenAI CEO Sam Altman: on X, he posted the single word “her.” He had also said earlier that Her is his favorite film.

Although OpenAI stressed fairly quickly that the voice assistant was not designed to sound like Johansson, controversy erupted over the following days anyway. On May 20, it emerged that the voice was being taken offline temporarily because so many questions were being raised about Sky, according to The Verge. The company emphasized that Sky is not meant to resemble Johansson. “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice,” the company said. “Sky’s voice is not an imitation of Scarlett Johansson but belongs to an actress using her own voice.”

Scarlett Johansson angry at OpenAI

It has since become clear, however, that there is more to the story. Johansson has now responded with a statement, saying that OpenAI approached her to record the voice. She declined the request, after which a voice that sounds exactly like her appeared anyway. She told NPR that OpenAI contacted her again two days before the demo appeared, asking whether she would reconsider. Before any conversation had even taken place, the demo was already out, and Johansson noticed that Sky sounds like her.

“I was shocked, furious, and could not believe that Altman would create a voice so similar to mine that my closest friends and news outlets could not tell the difference,” Johansson said. Especially now that so much misinformation circulates online, this worries her. Meanwhile, her lawyers have sent two letters to OpenAI asking for a detailed description of how Sky was developed.

Sam Altman, for his part, denies the allegations. He says the voice actor behind Sky had already been cast before any contact was ever made with Johansson. “Out of respect for Johansson, we have paused the use of Sky’s voice in our products. We are sorry we didn’t communicate better,” the CEO said.

OpenAI vs. copyright

It is not the first time OpenAI has come under fire from angry rights holders. Lawsuits have already been filed, including one by The New York Times, accusing the company, together with Microsoft, of copyright infringement. The AI company allegedly used the newspaper’s articles without permission to train its AI systems, even though those articles are protected by copyright. Various writers have raised the same complaint: their work, too, was allegedly used by the company to train its systems without permission.
