
Beyond AI Detection: Rethinking Our Approach to Preserving Academic Integrity

An expert shares insight and guidance into an area of growing concern. 

GUEST COLUMN | by Jordan Adair

Artificial intelligence (AI) in higher education continues to expand into more aspects of student learning. Initially, some administrators and faculty pointed to possible data privacy or ethical concerns with AI, but the larger focus now is how generative AI, such as ChatGPT and Google Gemini, makes it easier for students to submit work or assessments that lack original content. 

As AI adoption and academic concerns grow, educators may need to rethink how students learn, how students demonstrate understanding of a topic, and how assessments are designed and administered to measure learning and practical application. This may require institutions to throw out the “business-as-usual” approach, especially when it comes to anything involving writing, whether it’s essays or online exams. 

‘As AI adoption and academic concerns grow, educators may need to rethink how students learn, how students demonstrate understanding of a topic, and how assessments are designed and administered to measure learning and practical application.’

As higher education institutions look to maintain academic integrity, staying ahead of how students use AI is critical. Some tools exist to detect and monitor AI use, but are these tools fixing a problem or leaving a void? 

Getting Ahead of the Game

Institutions should familiarize themselves with the potential of large language models in education and open transparent communication channels to discuss AI with stakeholders, including researchers and IT support. This can help set a baseline for potential policies or actions.

Developing a dedicated committee may be beneficial as institutions create and implement new policies and guidelines for using AI tools, develop training and resources for students, faculty, and staff on academic integrity, and encourage the responsible use of AI in education.

Unlike contract cheating, using AI tools isn’t automatically unethical. On the contrary, as AI will permeate society and professions in the near future, there’s a need to discuss the right and wrong ways to leverage AI as part of the academic experience.

Some AI tools, especially chatbots like ChatGPT, present specific academic integrity challenges. While institutions strive to equip students for an AI-driven future, they also need to ensure that AI doesn’t compromise the integrity of the educational experience. 

Study Results Paint a Grim Picture

As AI evolves and is adopted more broadly, colleges and universities are exploring how to implement better detection methods effectively. While some existing detection tools show promise, they all struggle to identify AI-generated writing accurately.

AI detection and plagiarism detection are related but distinct. Both aim to flag unoriginal content, but their focus differs. AI detection looks for writing patterns, such as word choice and sentence structure, to identify AI-generated text. Plagiarism detection compares text against huge databases to identify copied or paraphrased content from other sources.
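To make the distinction concrete, here is a toy sketch (my own illustration; no vendor's actual method is anywhere near this simple): a crude "burstiness" signal of the kind AI detectors draw on, next to a verbatim n-gram overlap check of the kind plagiarism detectors use.

```python
import statistics

def burstiness(text: str) -> float:
    """Toy AI-detection signal: variance of sentence lengths (in words).
    Human writing tends to vary sentence length more; very uniform
    lengths can hint at machine generation. Illustration only."""
    sentences = [s.strip() for s in
                 text.replace("?", ".").replace("!", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)

def overlap_score(text: str, corpus: list[str], n: int = 5) -> float:
    """Toy plagiarism check: fraction of the text's word n-grams
    that appear verbatim somewhere in a reference corpus."""
    words = text.lower().split()
    grams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    if not grams:
        return 0.0
    corpus_grams = set()
    for doc in corpus:
        w = doc.lower().split()
        corpus_grams.update(tuple(w[i:i + n]) for i in range(len(w) - n + 1))
    return len(grams & corpus_grams) / len(grams)
```

Real detectors rely on trained language models rather than hand-built statistics like these, which is part of why paraphrasing tools can shift the statistical fingerprint enough to evade them.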

A growing body of research raises strong concerns about these tools’ inability to detect AI. One study tested the largest commercial plagiarism and AI detection tool against ChatGPT-generated text. When the text was unaltered, the tool reliably flagged it as AI-generated. After Quillbot paraphrased it once, however, the detection score dropped to 31%, and after two rephrases, to 0%. Another 2024 experiment with the same AI detection software showed the same pattern: it can accurately detect unaltered AI content but struggles once tools like Quillbot make changes. Unfortunately, this experiment also highlighted that AI detection completely fails, with 0% success, to detect AI content that has been altered by AI designed to humanize AI-generated text. 

In another instance, a recent International Journal for Educational Integrity study tested 14 AI detection tools—12 publicly available and two commercial—against ChatGPT:

  • AI detection tools are inaccurate: they often mistakenly identify AI-generated text as human-written and struggle to detect AI content translated from other languages.
  • Manual edits reduce detection accuracy: swapping words, reordering sentences, and paraphrasing all decreased the tools’ accuracy.

Finally, a 2023 study titled “Will ChatGPT Get You Caught? Rethinking of Plagiarism Detection” fed 50 ChatGPT-generated essays into two text-matching systems from the largest and most well-known plagiarism-detection provider. The submitted essays “demonstrated a remarkable level of originality stirring up alarms of the reliability of plagiarism check software used by academia.”

AI chatbots are improving at writing, and more effective prompts help them generate more human-like content. In the examples above, AI detection tools, from the biggest commercial products to the free options, were tested against various content types, including long-form essays and short-form assignments across different subjects and domains. Regardless of vendor size or content type, they all struggled to detect AI. While AI detection tools can serve as a high-level gut check, the studies above show they remain largely ineffective.

Up the Ante Against Cheating

Given the ineffectiveness of AI detection tools, academic institutions must consider alternative methods to curb AI usage and protect integrity.

One option is to consider a modified approach to written assignments and essays. Instead of traditional written assessments, try scaffolded assignments that build on a single subject across a series of submissions. You can also ask students to share their opinions on specific class discussions or request that they cite examples from class. 

Another option is instructing students to review an article or a case study. Then, ask them to reply to specific questions that require them to think critically and integrate their opinions and reasoning. Doing this makes it challenging to use AI content tools because they do not have enough context to formulate a usable response.

Institutions can also proctor written assignments like an online exam. This helps block AI usage and removes access to help from phones. Proctoring can be very flexible, allowing access to specific approved sites, such as case studies and research articles, while blocking everything else.

Protecting Academic Integrity

If proctoring is being used, consider a hybrid proctoring solution that combines AI, human review, and a secure browser rather than relying on just one of those methods. Hybrid proctoring uses AI to monitor each test taker and alert a live proctor if potential misconduct is detected. Once alerted, the proctor reviews the situation and intervenes only if misconduct is suspected. Otherwise, the test taker isn’t interrupted. This smarter proctoring approach delivers a much less intimidating and less invasive testing experience than human-only platforms.
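The escalation logic behind hybrid proctoring can be sketched roughly as follows (the event names and confidence threshold are invented for illustration; this is not any vendor's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Flag:
    test_taker: str
    event: str         # e.g. "tab_switch", "second_person_detected" (hypothetical)
    confidence: float  # AI monitor's confidence in the flag, 0.0 to 1.0

def route_flag(flag: Flag, threshold: float = 0.8) -> str:
    """The AI watches continuously; only high-confidence flags reach a
    live proctor, who then decides whether to intervene. Low-confidence
    events are merely logged, so the test taker is never interrupted."""
    if flag.confidence < threshold:
        return "log_only"
    return "alert_live_proctor"
```

The design point is that the human stays in the loop for every intervention, while the AI filters out the noise that would otherwise make human-only monitoring intrusive.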

Preserving the integrity of exams and protecting the reputation of faculty and institutions is critical to continuing to attract high-potential students. AI tools are here to stay, and schools don’t need to outrun them. Instead, understand how students use AI, modify how learning is delivered, use AI to your benefit when possible, and create clear, consistent policies so students understand how and where they can ethically leverage the latest in AI.

Jordan Adair is VP of Product at Honorlock. Jordan began his career in education as an elementary and middle school teacher. After transitioning into educational technology, he became focused on delivering products designed to empower instructors and improve the student experience. Connect with Jordan on LinkedIn. 

The post Beyond AI Detection: Rethinking Our Approach to Preserving Academic Integrity appeared first on EdTech Digest.

Google Maps is getting new AI features powered by Gemini

Google Maps is getting new features powered by Gemini, Google’s generative AI model. On Thursday the company announced incoming updates that will allow Google Maps users in the U.S. to tap into AI to help them find new places to visit and answer questions about different locations. The platform is also getting enhanced navigation features […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Google’s Gemini API and AI Studio get grounding with Google Search

Starting today, developers using Google’s Gemini API and its Google AI Studio to build AI-based services and bots will be able to ground their prompts’ results with data from Google Search. This should enable more accurate responses based on fresher data. As has been the case before, developers will be able to try out grounding […]
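As a rough sketch of what a grounded request might look like (the field names below follow Google's announced `google_search_retrieval` tool for the Gemini API; they are my assumption based on the launch, not taken from this article):

```python
import json

def grounded_request(prompt: str) -> str:
    """Build the JSON body for a Gemini generateContent call with
    Search grounding enabled. Field names ("tools",
    "google_search_retrieval") are assumptions from Google's announced
    API, not verified against current documentation."""
    payload = {
        "contents": [{"parts": [{"text": prompt}]}],
        # The tool entry asks the API to ground the model's answer
        # in Google Search results rather than parametric memory alone.
        "tools": [{"google_search_retrieval": {}}],
    }
    return json.dumps(payload)

body = grounded_request("What changed in the latest Gemini release?")
```

The body would be POSTed to the model's generateContent endpoint with an API key; grounded responses are meant to carry citations back to the search results they drew on.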

© 2024 TechCrunch. All rights reserved. For personal use only.

Generative AI is coming to Google Maps, Google Earth, Waze

Google revealed today how it plans to use generative AI to enhance its mapping products. It's the latest application of Gemini, the company's in-house rival to GPT-4, which Google wants to use to improve the search experience. Google Maps, Google Earth, and Waze will all get feature upgrades thanks to Gemini, although in some cases only for Google's "trusted testers" at first.

Google Maps

More than 2 billion people use Google Maps every month, according to the company, and in fact, AI is nothing new to Google Maps. "A lot of those features that we've introduced over the years have been thanks to AI," said Chris Phillips, VP and general manager of Geo at Google. "Think of features like Lens and maps. When you're on a street corner, you can lift up your phone and look, and through your camera view, you can actually see we laid places on top of your view. So you can see a business. Is it open? What are the ratings for it? Is it busy? You can even see businesses that are out of your line of sight," he explained.

At some point this week, if you use the Android or iOS Google Maps app here in the US, you should start seeing more detailed and contextual search results. Maps will now respond to conversational requests—during a demo, Google asked it what to do on a night out with friends in Boston, with the app returning a set of results curated by Gemini. These included categories of places—speakeasies, for example—with review summaries and answers from users.


Google CEO says over 25% of new Google code is generated by AI

On Tuesday, Google CEO Sundar Pichai revealed that AI systems now generate more than a quarter of new code for the company's products, with human programmers overseeing the computer-generated contributions. The statement, made during Google's Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development.

"We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency," Pichai said during the call. "Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."

Google developers aren't the only programmers using AI to assist with coding tasks. It's difficult to get hard numbers, but according to Stack Overflow's 2024 Developer Survey, over 76 percent of all respondents "are using or are planning to use AI tools in their development process this year," with 62 percent actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers are "already using AI coding tools both in and outside of work."


AI Literacy: Getting Started

The speed of recent innovation is head-spinning. Here’s some help. 

GUEST COLUMN | by Delia DeCourcy

“As artificial intelligence proliferates, users who intimately understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock AI’s full innovative potential.” 

Ethan Mollick’s insight, from his recent book Co-Intelligence: Living and Working with AI, is a great argument for why AI literacy is crucial for our students and faculty right now. To understand AI, you have to use it – a lot – not only so you know how AI can assist you, but also, as Mollick explains, so you know how AI will impact you and your current job – or, in the case of students, the job they’ll eventually have. 

What is AI Literacy?

Definitions of AI literacy abound but most have a few characteristics in common:

 

Deeper dimensions of that second characteristic could include knowing the difference between AI and generative AI; understanding the biases and ethical implications of large language model training; and mastering prompting strategies, to name a few.

AI Literacy and Future Readiness

If the two-year generative AI tidal wave set off by ChatGPT’s launch isn’t enough to stoke your belief in the need for AI literacy, consider these facts and statistics:

  • Studies from the National Artificial Intelligence Advisory Committee (NAIAC) in 2023 show that 80% of the US workforce performs some tasks that will be affected by large language models, and that 20% of jobs will see about half of their daily tasks affected by AI. 
  • A poll conducted by Impact Research for the Walton Family Foundation revealed that as of June 2024, about half of K-12 students and teachers said they use ChatGPT at least weekly. 
  • According to a June report from Pearson, 56% of higher education students said that generative AI tools made them more efficient in the spring semester, while only 14% of faculty were confident about using AI in their teaching. 
  • AI is already integrated into many of the devices and platforms we use every day. That’s now true in education as well with the integration of the Gemini chatbot in Google Workspace for Education and Microsoft’s offering of Copilot to education users.

Supporting Institutions, Educators, and Students with AI Literacy

Institutions – Assess, Plan, Implement

Assessing institutional readiness for generative AI integration, planning, and implementation means looking not only at curriculum integration and professional development for educators, but also at how this technology can be used to personalize the student experience, streamline administration, and reduce operating costs – not to mention the critical step of developing institutional policies for responsible and ethical AI use. This complex planning process assumes a certain level of AI literacy for the stakeholders contributing to the planning. So some foundational learning might be in order prior to the “assess” stage.

‘This complex planning process assumes a certain level of AI literacy for the stakeholders contributing to the planning. So some foundational learning might be in order prior to the “assess” stage.’

Fortunately for K-12 leaders, The Council of the Great City Schools and CoSN have developed a Gen AI Readiness Checklist, which helps districts think through implementation necessities, from executive leadership to security and risk management, to ensure a rollout aligns with existing instructional and operational objectives. It’s also helpful to look at model districts like Gwinnett County Schools in Georgia, which have been integrating AI into their curriculum since before ChatGPT’s launch.

Similarly, in higher education, Educause provides a framework for AI governance, operations, and pedagogy and has also published the 2024 Educause AI Landscape Study that helps colleges and universities better understand the promise and pitfalls of AI implementation. For an example of what AI assessment and planning looks like at a leading institution, see The Report of the Yale Task Force on Artificial Intelligence published in June of this year. The document explains how AI is already in use across campus, provides a vision for moving forward, and suggests actions to take.

Educators – Support Innovation through Collaboration

Whether teaching or administrating, in university or K12, educators need to upskill and develop a generative AI toolbox. The more we use the technology, the better we will understand its power and potential. Fortunately, both Google Gemini and Microsoft Copilot have virtual PD courses that educators can use to get started. From there, it’s all about integrating these productivity platforms into our day to day work to “understand the nuances, limitations, and abilities” of the tools. And for self-paced AI literacy learning, Common Sense Education’s AI Foundations for Educators course introduces the basics of AI and ethical considerations for integrating this technology into teaching.

The best learning is inherently social, so working with a team or department to share discoveries about how generative AI can help with personalizing learning materials, lesson plan development, formative assessment, and daily productivity is ideal. For more formalized implementation of this new technology, consider regular coaching and modeling for new adopters. At Hillsborough Township Public Schools in New Jersey, the district has identified a pilot group of intermediate and middle school teachers, technology coaches, and administrators who are exploring how Google Gemini can help with teaching and learning this year. With an initial pre-school-year PD workshop followed by regular touch points, coaching, and modeling, the pilot will give the district a view of whether and how it wants to scale generative AI with faculty across all schools.

‘The best learning is inherently social, so working with a team or department to share discoveries about how generative AI can help with personalizing learning materials, lesson plan development, formative assessment, and daily productivity is ideal.’

In higher education, many institutions are providing specific guidance to faculty about how generative AI should and should not be used in the classroom as well as how to address it in their syllabi with regard to academic integrity and acceptable use. At the University of North Carolina at Chapel Hill, faculty are engaging in communities of practice that examine how generative AI is being used in their discipline and the instructional issues surrounding gen AI’s use, as well as re-designing curriculum to integrate this new technology. These critical AI literacy efforts are led by the Center for Faculty Excellence and funded by Lenovo’s Instructional Innovation Grants program at UNC. This early work on generative AI integration will support future scaling across campus. 

Students – Integrate AI Literacy into the Curriculum

The time to initiate student AI literacy is now. Generative AI platforms are plentiful and students are using them. In the work world, this powerful technology is being embraced across industries. We want students to be knowledgeable, skilled, and prepared. They need to understand not only how to use AI responsibly, but also how it works and how it can be harmful. 

‘We want students to be knowledgeable, skilled, and prepared. They need to understand not only how to use AI responsibly, but also how it works and how it can be harmful.’

The AI literacy students need will vary based on age. Fortunately, expert organizations like ISTE have already made recommendations about which vocabulary and concepts K12 educators can introduce at which grades to help students understand and use AI responsibly. AI literacy must be integrated across the curriculum in ways that are relevant for each discipline. But this is one more thing to add to educators’ already full plates as they develop their own AI literacy. Fortunately, MIT, Stanford, and Common Sense Education have developed AI literacy materials that can be integrated into existing curriculum. And Microsoft has an AI classroom toolkit that includes materials on teaching prompting. 

The speed of recent innovation is head-spinning. Remaining technologically literate in the face of that innovation is no small task. It will be critical for educators and institutions to assess and implement AI in ways that matter, ensuring it is helping them achieve their goals. Just as importantly, educators and institutions play an essential role in activating students’ AI literacy as students take their first steps into this new technology landscape and ultimately embark on their first professional jobs outside of school. 

Delia DeCourcy is a Senior Strategist for the Lenovo Worldwide Education Portfolio. Prior to joining Lenovo she had a 25-year career in education as a teacher, consultant, and administrator, most recently as the Executive Director of Digital Teaching and Learning for a district in North Carolina. Previously, she was a literacy consultant serving 28 school districts in Michigan focusing on best practices in reading and writing instruction. Delia has also been a writing instructor at the University of Michigan where she was awarded the Moscow Prize for Excellence in Teaching Composition. In addition, she served as a middle and high school English teacher, assistant principal, and non-profit director. She is the co-author of the curriculum text Teaching Romeo & Juliet: A Differentiated Approach published by the National Council of Teachers of English. Connect with Delia on LinkedIn.

The post AI Literacy: Getting Started appeared first on EdTech Digest.

Lawsuit: Chatbot that allegedly caused teen’s suicide is now more dangerous for kids

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI's hyper-realistic chatbots—with a limited version available for free or a "supercharged" version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month – his mother, Megan Garcia, later realized – these chat sessions had turned dark, with chatbots insisting they were real humans, posing as therapists and adult lovers, and seemingly spurring Sewell to develop suicidal thoughts. Within a year, Setzer "died by a self-inflicted gunshot wound to the head," a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.


Google brings more Gemini AI features to Pixel 8, 8 Pro and 8a

Pixel 8

Google has announced a series of new features for its Pixel 8 and Pixel 8a, the Pixel Watch 2, and the Pixel tablets. These include new AI features powered by Gemini, Google’s generative AI model.

The Pixel 8, 8 Pro, and 8a have shipped with a built-in version of Gemini, Gemini Nano, since launch. The company is now adding more functionality to the phones. Summarize in Recorder, which can generate summaries of voice recordings, has been improved: the tool can now recognize different speakers and display their names, making it clearer in your transcript who said what. You can then export the transcripts as a text file or to Google Docs. The tool works on the Pixel 8, 8 Pro, and 8a.

The improved Recorder app on the Pixel 8 Pro. Photo: Google

The phones’ camera is also getting smarter. It now automatically recognizes the best moment to take a photo in HDR+, so you always get a good shot in which your face is sharp and you’re smiling. In addition, the camera on the Fold, 6 Pro, 7 Pro, and 8 Pro models now lets you manually choose which camera lens to use when taking a photo.

The three phone models can also display content on a larger screen, such as your computer monitor; to do so, you need to connect the phone to the screen with a USB-C cable. In addition, Find My Device now works even when the device is off or the battery is dead, as long as it hasn’t been dead for more than 23 hours. This feature is also coming to the Pixel Tablet, Pixel Fold, Pixel 6, and newer models. And if you get a call from a number you don’t recognize, you can now do a reverse lookup straight from your call history to find out whose number it is. This feature is coming to the Pixel Fold, Pixel 6, and newer models.

Features for the Pixel Watch

The Pixel Watch is also getting a number of new and improved options, such as better fall detection for when you fall off your bike. There is also a new safety feature called Car Crash Detection, which is coming only to the Pixel Watch 2. It asks you to check in if you’re involved in a serious car accident, to see whether you’re okay. If you don’t respond or need help, the watch can automatically call emergency services. The option only works on the 4G LTE version of the watch, which isn’t available in the Netherlands.

It’s also becoming possible to add your PayPal account to Google Wallet, giving you more ways to pay with your smartwatch. Google is also making it easier to control your smart home devices from your watch. Your Home Favorites are a single swipe away, for example, and there is a new shortcut for reaching a specific device. In addition, the Google Home app on Wear OS is gaining more options for adjusting your smart home devices, such as changing your fan’s speed from your watch.

Google Home tile on the Pixel Watch 2

Photo: Google

Pixel Tablet connects to your doorbell

The Pixel Tablet is gaining more capabilities in combination with the Nest Doorbell. When the tablet is in hub mode, it can show you a snapshot of who is ringing your doorbell. You can then talk to that person or send a quick reply.

The tablet is also getting a new Google Home Favorites widget, giving you quicker access to compatible smart home devices.

Photo: Google

Read the full article, “Google brings more Gemini AI features to Pixel 8, 8 Pro and 8a,” at Numrush
