Today — 14 November 2024

Google launches Gemini app for iOS worldwide

14 November 2024 at 15:37

On Thursday, Google launched a dedicated app for its AI-powered assistant, Gemini, on iOS globally. Until now, iOS users had to use either the Google app or the mobile web to chat with the AI. The new Gemini iOS app supports text-based prompts in 35 languages. Plus, users can have a conversation with Gemini through […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Before yesterday

Beyond AI Detection: Rethinking Our Approach to Preserving Academic Integrity

5 November 2024 at 15:00

An expert shares insight and guidance into an area of growing concern. 

GUEST COLUMN | by Jordan Adair

Artificial intelligence (AI) in higher education continues to expand into more aspects of student learning. Initially, some administrators and faculty pointed to possible data privacy or ethical concerns with AI, but the larger focus now is how generative AI, such as ChatGPT and Google Gemini, makes it easier for students to submit work or assessments that lack original content. 

As AI adoption and academic concerns grow, educators may need to rethink how students learn, how students demonstrate understanding of a topic, and how assessments are designed and administered to measure learning and practical application. This may require institutions to throw out the “business-as-usual” approach, especially when it comes to anything involving writing, whether it’s essays or online exams.

As higher education institutions look to maintain academic integrity, staying ahead of how students use AI is critical. Some tools exist to detect and monitor AI use, but are these tools fixing a problem or leaving a void? 

Getting Ahead of the Game

Institutions should familiarize themselves with the potential of large language models in education and open transparent communication channels to discuss AI with stakeholders, including researchers and IT support. This can help set a baseline for potential policies or actions.

Developing a dedicated committee may be beneficial as institutions create and implement new policies and guidelines for using AI tools, develop training and resources for students, faculty, and staff on academic integrity, and encourage the responsible use of AI in education.

Unlike contract cheating, using AI tools isn’t automatically unethical. On the contrary, as AI will permeate society and professions in the near future, there’s a need to discuss the right and wrong ways to leverage AI as part of the academic experience.

Some AI tools, especially chatbots like ChatGPT, present specific academic integrity challenges. While institutions strive to equip students for an AI-driven future, they also need to ensure that AI doesn’t compromise the integrity of the educational experience. 

Study Results Paint a Grim Picture

As AI evolves and is adopted more broadly, colleges and universities are exploring how to implement better detection methods effectively. While some existing detection tools show promise, they all struggle to identify AI-generated writing accurately.

AI detection and plagiarism detection are related but distinct. Both aim to flag unoriginal content, but their focus differs. AI detection looks for writing patterns, like word choice and sentence structure, to identify AI-generated text. Plagiarism detection compares text against huge databases to identify copied or paraphrased content from other sources.
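To make that distinction concrete, here is a minimal Python sketch. Everything in it is invented for illustration: commercial tools match against vast source databases and use model-based statistics such as perplexity, not these toy heuristics.

    import re
    from statistics import mean, pstdev

    def word_ngrams(text, n=5):
        words = re.findall(r"[a-z']+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def plagiarism_score(submission, corpus):
        # Plagiarism-style check: what fraction of the submission's 5-grams
        # also appear in a known source? (A stand-in for the huge databases
        # commercial tools match against.)
        sub = word_ngrams(submission)
        if not sub:
            return 0.0
        known = set().union(*(word_ngrams(doc) for doc in corpus))
        return len(sub & known) / len(sub)

    def burstiness(text):
        # AI-detection-style check: inspects only the writing pattern of the
        # text itself. Human prose tends to vary sentence length; very uniform
        # lengths are one (weak) signal of machine generation.
        lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.split()]
        if len(lengths) < 2:
            return 0.0
        return pstdev(lengths) / mean(lengths)

    essay = ("The mitochondrion is the powerhouse of the cell. "
             "It produces most of the cell's energy. "
             "It is found in nearly all eukaryotic organisms.")
    sources = ["The mitochondrion is the powerhouse of the cell, a classic line."]
    print(f"overlap with known sources: {plagiarism_score(essay, sources):.2f}")
    print(f"sentence-length variation:  {burstiness(essay):.2f}")

Paraphrasing defeats the overlap check by breaking up matching n-grams, and, as the studies below show, rewriting tools erode the pattern signals that real AI detectors depend on in much the same way.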

A growing body of research raises strong concerns about these tools’ ability to detect AI-generated writing. One study tested the largest commercial plagiarism and AI detection tool against ChatGPT-generated text. It found that when the text is unaltered, the tool reliably flags it as AI-generated. However, after Quillbot paraphrased the text, the detection score dropped to 31%, and after two rephrases, to 0%. Another 2024 experiment with the same AI detection software showed the same results: it can accurately detect unaltered AI content but struggles once tools like Quillbot make changes. Unfortunately, this experiment also highlighted that AI detection completely fails (0% success) to detect AI content that has been altered by tools designed to humanize AI-generated text.

In another instance, a recent International Journal for Educational Integrity study tested 14 AI detection tools—12 publicly available and two commercial—against ChatGPT:

  • AI detection tools are inaccurate: they often mistakenly identify AI-generated text as human-written and struggle to detect AI content translated from other languages.
  • Manual editing reduces detection accuracy: swapping words, reordering sentences, and paraphrasing all decreased the tools’ accuracy.


Finally, a 2023 study titled “Will ChatGPT Get You Caught? Rethinking of Plagiarism Detection” fed 50 ChatGPT-generated essays into two text-matching systems from the largest and most well-known plagiarism-detection vendor. The submitted essays “demonstrated a remarkable level of originality stirring up alarms of the reliability of plagiarism check software used by academia.”

AI chatbots are getting better at writing, and more effective prompts help them generate more human-like content. In the examples above, AI detection tools ranging from the biggest commercial products to free options were tested against various content types, including long-form essays and short-form assignments across different subjects and domains. Regardless of vendor size or content type, they all struggled to detect AI. While AI detection tools can serve as a high-level gut check, these studies show they remain largely ineffective.

Up the Ante Against Cheating

Given the ineffectiveness of AI detection tools, academic institutions must consider alternative methods to curb AI usage and protect integrity.

One option is a modified approach to written assignments and essays. Instead of traditional written assessments, try scaffolded assignments that build on a single subject across a series of assessments. You can also ask students to share their opinions on specific class discussions or request that they cite examples from class.

Another option is instructing students to review an article or a case study. Then, ask them to reply to specific questions that require them to think critically and integrate their opinions and reasoning. Doing this makes it challenging to use AI content tools because they do not have enough context to formulate a usable response.

Institutions can also proctor written assignments the way they proctor an online exam. This helps block AI usage and cuts off access to help from phones. Proctoring can also be very flexible, allowing access to specific approved sites, such as case studies and research articles, while blocking everything else.

Protecting Academic Integrity

If proctoring is being used, consider a hybrid proctoring solution that combines AI, human review, and a secure browser rather than relying on just one of those methods. Hybrid proctoring uses AI to monitor each test taker and alert a live proctor if potential misconduct is detected. Once alerted, the proctor reviews the situation and intervenes only if misconduct is suspected. Otherwise, the test taker isn’t interrupted. This smarter approach delivers a far less intimidating and less invasive testing experience than human-only proctoring.
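As a rough sketch of that escalation flow (hypothetical Python; the names, threshold, and decision rule are invented and do not describe any vendor’s actual system):

    from dataclasses import dataclass

    ALERT_THRESHOLD = 0.8  # invented cutoff; real tuning is vendor-specific

    @dataclass
    class Event:
        test_taker: str
        description: str
        suspicion: float  # 0.0-1.0 score assumed to come from the AI monitor

    def proctor_confirms(event):
        # Stand-in for a live proctor's judgment after reviewing the alert.
        print(f"proctor reviewing {event.test_taker}: {event.description}")
        return event.suspicion > 0.9  # placeholder decision rule

    def monitor(events):
        for event in events:
            if event.suspicion < ALERT_THRESHOLD:
                continue  # no alert: the test taker is never interrupted
            if proctor_confirms(event):
                print(f"intervene: pause exam for {event.test_taker}")
            else:
                print(f"dismissed: {event.test_taker} continues uninterrupted")

    monitor([
        Event("student_a", "glanced away briefly", 0.2),
        Event("student_b", "second voice detected in room", 0.95),
    ])

The point of the design is the filter: automated scoring watches everyone, but a human reviews, and a student is interrupted, only when both layers agree.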

Preserving the integrity of exams and protecting the reputation of faculty and institutions remain critical to attracting high-potential students. AI tools are here to stay, and schools don’t need to outpace them. Instead, understand how students use AI, modify how learning is delivered, use AI to your benefit when possible, and create clear, consistent policies so students understand how and where they can ethically leverage the latest in AI.

Jordan Adair is VP of Product at Honorlock. Jordan began his career in education as an elementary and middle school teacher. After transitioning into educational technology, he became focused on delivering products designed to empower instructors and improve the student experience. Connect with Jordan on LinkedIn. 

The post Beyond AI Detection: Rethinking Our Approach to Preserving Academic Integrity appeared first on EdTech Digest.

Google CEO says over 25% of new Google code is generated by AI

30 October 2024 at 16:50

On Tuesday, Google CEO Sundar Pichai revealed that AI systems now generate more than a quarter of new code for the company's products, with human programmers overseeing the computer-generated contributions. The statement, made during Google's Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development.

"We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency," Pichai said during the call. "Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."

Google developers aren't the only programmers using AI to assist with coding tasks. It's difficult to get hard numbers, but according to Stack Overflow's 2024 Developer Survey, over 76 percent of all respondents "are using or are planning to use AI tools in their development process this year," with 62 percent actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers are "already using AI coding tools both in and outside of work."


