New AI Tools Are Promoted as Study Aids for Students. Are They Doing More Harm Than Good?

Once upon a time, educators worried about the dangers of CliffsNotes — study guides that rendered great works of literature as a series of bullet points that many students used as a replacement for actually doing the reading.

Today, that sure seems quaint.

Suddenly, new consumer AI tools have hit the market that can take any piece of text, audio or video and provide that same kind of simplified summary. And those summaries aren’t just quippy bullet points. These days students can have tools like Google’s NotebookLM turn their lecture notes into a podcast, where sunny-sounding AI bots banter and riff on key points. Most of the tools are free, and do their work in seconds with the click of a button.

Naturally, all this is causing concern among some educators, who see students off-loading the hard work of synthesizing information to AI at a pace never before possible.

But the overall picture is more complicated, especially as these tools become more mainstream and their use starts to become standard in business and other contexts beyond the classroom.

And the tools serve as a particular lifeline for neurodivergent students, who suddenly have access to services that can help them get organized and support their reading comprehension, teaching experts say.

“There’s no universal answer,” says Alexis Peirce Caudell, a lecturer in informatics at Indiana University at Bloomington who recently gave an assignment in which many students shared their experiences and concerns about AI tools. “Students in biology are going to be using it in one way, chemistry students are going to be using it in another. My students are all using it in different ways.”

It’s not as simple as assuming that students are all cheaters, the instructor stresses.

“Some students were concerned about pressure to engage with tools — if all of their peers were doing it that they should be doing it even if they felt it was getting in the way of their authentically learning,” she says. They are asking themselves questions like, “Is this helping me get through this specific assignment or this specific test because I’m trying to navigate five classes and applications for internships” — but at the cost of learning?

It all adds new challenges to schools and colleges as they attempt to set boundaries and policies for AI use in their classrooms.

Need for ‘Friction’

It seems like just about every week — or even every day — tech companies announce new features that students are adopting in their studies.

Just last week, for instance, Apple released Apple Intelligence features for iPhones, and one of those features can rewrite any piece of text in a different tone, such as casual or professional. And last month ChatGPT-maker OpenAI released a feature called Canvas that includes slider bars for users to instantly change the reading level of a text.

Marc Watkins, a lecturer of writing and rhetoric at the University of Mississippi, says he is worried that students are lured by the time-saving promises of these tools and may not realize that using them can mean skipping the actual work it takes to internalize and remember the material.


“From a teaching, learning standpoint, that's pretty concerning to me,” he says. “Because we want our students to struggle a little bit, to have a little bit of friction, because that's important for their learning.”

And he says new features are making it harder for teachers to encourage students to use AI in helpful ways — like teaching them how to craft prompts to change the writing level of something: “It removes that last level of desirable difficulty when they can just button mash and get a final draft and get feedback on the final draft, too.”

Even professors and colleges that have adopted AI policies may need to rethink them in light of these new types of capabilities.

As two professors put it in a recent op-ed, “Your AI Policy Is Already Obsolete.”

“A student who reads an article you uploaded, but who cannot remember a key point, uses the AI assistant to summarize or remind them where they read something. Has this person used AI when there was a ban in the class?” ask the authors, Zach Justus, director of faculty development at California State University, Chico, and Nik Janos, a professor of sociology there. They note that popular tools like Adobe Acrobat now have “AI assistant” features that can summarize documents with the push of a button. “Even when we are evaluating our colleagues in tenure and promotion files,” the professors write, “do you need to promise not to hit the button when you are plowing through hundreds of pages of student evaluations of teaching?”

Instead of drafting and redrafting AI policies, the professors argue that educators should work out broad frameworks for what is acceptable help from chatbots.

But Watkins calls on the makers of AI tools to do more to mitigate the misuse of their systems in academic settings, or as he put it when EdSurge talked with him, “to make sure that this tool that is being used so prominently by students [is] actually effective for their learning and not just as a tool to offload it.”

Uneven Accuracy

These new AI tools raise a host of new challenges beyond those at play when printed CliffsNotes were the study tool du jour.

One is that AI summarizing tools don’t always provide accurate information, due to a phenomenon of large language models known as “hallucination,” in which chatbots guess at facts but present them to users as sure things.

When Bonni Stachowiak first tried the podcast feature on Google’s NotebookLM, for instance, she says she was blown away by how lifelike the robot voices sounded and how well they seemed to summarize the documents she fed the tool. Stachowiak is the host of the long-running podcast Teaching in Higher Ed and dean of teaching and learning at Vanguard University of Southern California, and she regularly experiments with new AI tools in her teaching.

But as she tried the tool more, and put in documents on complex subjects that she knew well, she noticed occasional errors or misunderstandings. “It just flattens it — it misses all of this nuance,” she says. “It sounds so intimate because it’s a voice and audio is such an intimate medium. But as soon as it was something that you knew a lot about it’s going to fall flat.”

Even so, she says she has found the podcasting feature of NotebookLM useful in helping her understand and communicate bureaucratic issues at her university — such as turning part of the faculty handbook into a podcast summary. When she checked it with colleagues who knew the policies well, she says they felt it did a “perfectly good job.” “It is very good at making two-dimensional bureaucracy more approachable,” she says.

Peirce Caudell, of Indiana University, says her students have raised ethical issues with using AI tools as well.

“Some say they’re really concerned about the environmental costs of generative AI and the usage,” she says, noting that ChatGPT and other AI models require large amounts of computing power and electricity.

Others, she adds, worry about how much data users end up giving AI companies, especially when students use free versions of the tools.

“We're not having that conversation,” she says. “We're not having conversations about what does it mean to actively resist the use of generative AI?”

Even so, the instructor is seeing positive impacts for students, such as when they use a tool to help make flashcards to study.

And she heard about a student with ADHD who had always found reading a large text “overwhelming,” but was using ChatGPT “to get over the hurdle of that initial engagement with the reading and then they were checking their understanding with the use of ChatGPT.”

And Stachowiak says she has heard of other AI tools that students with intellectual disabilities are using, such as one that helps users break down large tasks into smaller, more manageable sub-tasks.

“This is not cheating,” she stresses. “It’s breaking things down and estimating how long something is going to take. That is not something that comes naturally for a lot of people.”


What Can AI Chatbots Teach Us About How Humans Learn?

Do new AI tools like ChatGPT actually understand language the same way that humans do?

It turns out that even the inventors of these new large language models are debating that very question — and the answer will have huge implications for education and for all aspects of society if this technology can get to a point where it achieves what is known as Artificial General Intelligence, or AGI.

A new book by one of those AI pioneers digs into the origins of ChatGPT and the intersection of research on how the brain works and the building of new large language models for AI. It’s called “ChatGPT and the Future of AI,” and the author is Terrence Sejnowski, a professor of biology at the University of California, San Diego, where he co-directs the Institute for Neural Computation and the NSF Temporal Dynamics of Learning Center. He is also the Francis Crick Chair at the Salk Institute for Biological Studies.


Sejnowski started out as a physicist working on the origins of black holes, but early in his career he says he realized that it would be decades before new instruments could be built that could adequately measure the kinds of gravitational waves he was studying. So he switched to neuroscience, hoping to “pop the hood” on the human brain to better understand how it works.

“It seemed to me that the brain was just as mysterious as the cosmos,” he tells EdSurge. “And the advantage is you can do experiments in your own lab, and you don’t have to have a satellite.”

For decades, Sejnowski has focused on applying findings from brain science to building computer models, working closely at times with the two researchers who just won the Nobel Prize this year for their work on AI, John Hopfield and Geoffrey Hinton.

These days, computing power and algorithms have advanced to the level where neuroscience and AI are helping to inform each other, and even challenge our traditional understanding of what thinking is all about, he says.

“What has really been revealed is that we don't understand what ‘understanding’ is,” says Sejnowski. “We use the word, and we think we understand what it means, but we don't know how the brain understands something. We can record from neurons, but that doesn't really tell you how it functions and what’s really going on when you’re thinking.”

He says that new chatbots have the potential to revolutionize learning if they can deliver on the promise of being personal tutors to students. One drawback of the current approach, he says, is that LLMs focus on only one aspect of how the human brain organizes information, whereas “there are a hundred brain parts that are left out that are important for survival, autonomy for being able to maintain activity and awareness.” And it’s possible that those other parts of what makes us human may need to be simulated as well for something like tutoring to be most effective, he suggests.

The researcher warns that there are likely to be negative unintended consequences to ChatGPT and other technologies, just as social media led to the rise of misinformation and other challenges. He says there will need to be regulation, but that “we won't really know what to regulate until it really is out there and it's being used and we see what the impact is, how it's used.”

But he predicts that soon most of us will no longer use keyboards to interact with computers, instead using voice commands to have dialogues with all kinds of devices in our lives. “You’ll be able to go into your car and talk to the car and say, ‘How are you feeling today?’ [and it might say,] ‘Well, we're running low on gas.’ Oh, OK, where's the nearest gas station? Here, let me take you there.”

Listen to our conversation with Sejnowski on this week’s EdSurge Podcast, where he describes research to more fully simulate human brains. He also talks about his previous project in education, a free online course he co-teaches called “Learning How to Learn,” which is one of the most popular courses ever made, with more than 4 million students signed up over the past 10 years.


College ‘Deserts’ Disproportionately Deter Black and Hispanic Students from Higher Ed

In recent years, a growing body of research has looked at the impact of college ‘deserts’ — sometimes defined as areas where people live more than a 30-minute drive from a campus — and found that those residing close to a college are more likely to attend. But a new study shows that these higher education deserts affect some groups of students much differently than others.

The study, which looked at a rich set of high school and college data in Texas, found that Black and Hispanic students and those in low-income families who lived more than 30 miles from a public two-year college were significantly less likely to attend college. But white and Asian students in those same communities were slightly more likely than other students in the state to complete four-year degrees, meaning that the lack of a nearby two-year option seemed to increase the likelihood of moving away to attend college.

“While all students who live in a community college desert are less likely to complete an associate’s degree, their alternative enrollment and degree completion outcomes vary sharply by race-ethnicity and [socioeconomic status],” the study finds. In other words, for low-income and underrepresented minority groups, living near a community college can be a crucial way to gain access to any higher education. Meanwhile, such proximity might lead students in other groups to attend two-year college rather than pursue a four-year degree.

The results are particularly important at a time when more colleges are struggling to remain open, says Riley Acton, an assistant professor of economics at Miami University in Ohio and one of the researchers who worked on the new study.

“If a public institution in particular, let's say a public community college, is thinking about closing, or is thinking about merging, or is thinking about opening a new campus or consolidating campuses,” she says, “they should be mindful about who the students are that live near those different campuses.”

The researchers also suggest that colleges should consider providing transportation options or credits to students living in college deserts. “If you don't have a car in rural Texas, that's going to be a very hard barrier to overcome” without some sort of help, Acton notes.

Novel Finding

Meanwhile, Black and Hispanic students are more likely than those in other groups to live in a college desert, according to research by Nicholas Hillman, a professor of educational policy at the University of Wisconsin at Madison who was one of the first researchers to draw attention to the effects of college location on educational attainment, back in 2016.

In an interview with EdSurge, Hillman says that the implications of Acton’s new study are “really interesting,” adding that it is probably the largest quantitative study to take on the question of how college deserts affect different groups differently.

“It makes clear that, ‘Wait a minute, distance is different for different groups of students,’” Hillman says.

One takeaway for Hillman is the importance of making the transfer process from two-year colleges to four-year institutions more frictionless, so that students who live near two-year colleges, and are therefore more likely to start there, have ample opportunity to go on to get a four-year degree.

Hillman says that he began looking at geography out of frustration with an emphasis during the Obama administration on providing consumer information about higher education as a solution to college access. For instance, one major initiative started during that time was the College Scorecard, which provides information on college options based on various government datasets.

“The dominant narrative was, ‘If students just have better info about where to go to college, more would go,’” he says. “I said, ‘This is bananas. This is not how it works.’”

He grew up in northern Indiana, where the nearest college is 40 miles away. For people he knew there, information about college was not what was keeping them from enrolling. “If you don’t have a job, you’re not going to be spending all this money on gas to go to college,” he says.


How Are School Smartphone Bans Going?

Angela Fleck says this was the typical scene last year in the sixth grade social studies classes she teaches at Glover Middle School in Spokane, Washington: Nearly every student had a smartphone, and many of them would regularly sneak glances at the devices, which they kept tucked behind a book or just under their desks.

“They're pretty sneaky, so you wouldn't always know that that was the reason,” says Fleck. “But over time, I'd realize no matter how engaging my lesson was, when it was time to turn and do the group activity or the assignment — something that wasn't totally me directing the class — there would be a large number of students that had no idea what we were doing.”

What students were doing with their phones, she says, was most often using Snapchat or other social media or texting with students in other classrooms, which she described as creating drama: “And then it would just spread rapid-fire, whatever the situation was, and it would sometimes result in altercations — meeting up at a certain place, and they'd arrange it all day on the phone.”


This year, though, the vibe has changed. Spokane Public Schools issued a new districtwide policy that bans the use of smartphones or smartwatches in classrooms during instructional time. So now students in elementary and middle schools have to keep devices off and put away during the school day, though high school students can use their smartphones or watches between classes and at lunch.

Now, she says, she feels like she has most students’ attention during classes since she no longer has to compete with buzzing devices. “In general, students are ready to learn,” she says. “As a teacher, I need to make sure that I have an engaging lesson that will keep their attention and help them to learn and help them to continue to want to be engaged.” And she says there are fewer fights at the school, too.

The district is one of many across the country that have instituted new smartphone bans this year, in the name of increasing student engagement and counteracting the negative effects that social media has on youth mental health. And at least four states — Indiana, Louisiana, South Carolina and Florida — have enacted statewide bans limiting school smartphone access.

For this week’s EdSurge Podcast, we set out to get a sense of how the bans are going. To do that, we talked with Fleck, as well as a high school teacher in Indiana, where a new statewide law bans smartphones and other wireless devices in schools during instructional time.

Fleck is a fan of the ban, and says she hopes the school never goes back to the old approach. But she admits that she misses some aspects of having phones available to integrate into a lesson when needed.

In the past, for instance, she allowed students to take pictures with their phones of the slides she was showing. And she would often designate a student as a researcher during lessons who could look up related material online and share it with the group. Now she’s finding ways to keep those positive aspects of online access, she says, such as having student researchers use a computer in the classroom, or making more use of the school-issued laptops for some lessons.

Adam Swinyard, the superintendent of Spokane Public Schools, acknowledges that there are trade-offs to the new ban when it comes to the use of tech in instruction.

“We absolutely have lost some power of the opportunity that those devices provide, whether that's, ‘I can really quickly look something up,’ or ‘I can quickly participate in a class poll’ or ‘I can tune my music instrument,’” he told EdSurge. “But I think where we landed in our community, for our schools and for our kids, is what we gain in their level of engagement and ability to focus far outweighs what we're losing in a device being a powerful pedagogical tool inside of the classroom. But I think it's important to acknowledge.”

What they end up teaching students, he argues, is more important. The mantra for the district is that there is a “time and place” for smartphone use, says Swinyard, and that a classroom is not the right setting or occasion, just as he wouldn’t pull out his phone and write a text while he was being interviewed for this article, or sitting in an important meeting.

Some schools with new bans have faced pushback from students, especially where there has been a zero-tolerance policy for phones even during social time. At Jasper High School in Plano, Texas, for instance, more than 250 people signed a petition calling on the principal to revise a new ban on smartphones, which forbids use of the devices all day, even during lunch and in the halls between classes. “Before the restricted use of cellphones was prohibited, they were a social link, connecting students during lunch and hallway breaks,” the petition reads.

And some parents have complained about the new bans, out of concerns that they would not be able to reach their children in the event of an emergency, such as a school shooting. A new survey by the Pew Research Center found that about 7 in 10 Americans support cellphone bans during class, while only about a third favor an all-day ban.

So one takeaway is that how schools design their smartphone restrictions — and how they communicate the policies to students and parents — are important for how well they work in practice.

Hear more about the pros and cons of new smartphone bans on this week’s EdSurge Podcast on Spotify, Apple Podcasts, or on the player below.


As the Job Market Changes, Is a College Degree Less of a ‘Meal Ticket’ Than in the Past?

When Gina Petersen graduated with her associate degree from Kirkwood Community College two years ago, she described it as “the biggest accomplishment I have ever done.”

As a returning adult college student, she had struggled to fit her studies in part time, online, while working as a trainer for a tech company. She had gotten that job through connections, and she hoped that a college degree would be a big help if she ever needed to find a new job in the future.

We told the story of Petersen’s college journey — which took her more than seven years and a couple of false starts to complete — as part of a three-part podcast series we did in 2022 called Second Acts.


For this week’s episode of the EdSurge Podcast, we checked back in with Petersen to see what the degree has meant for her professional and personal life.

And we found that the credential has not opened as many doors as she had hoped.

A few months after we last talked to Petersen, she was laid off from her training job after 10 years at the company. She quickly found a project manager position through her networks, but the job wasn’t a good fit, and she quit after a little more than a year, hoping she’d soon find another position.

What she encountered, however, was a job market that suddenly felt much more daunting.

“I’ve sent my resume to, I’d say, 150 different places for 150 different roles, and yet, nothing,” she says, even after getting professional help crafting her resume.

What’s worse, she says, she has been ghosted by employers when she does get initial interest. “I’ve had two people reach out for phone interviews and say, ‘Yes’ and confirm, and then I literally don’t get called,” she says.

Petersen is not alone, according to labor market experts.

Guy Berger, director of economic research at the Burning Glass Institute, notes that because it has become easier to apply for jobs, thanks to one-click applications on company websites and the growth of platforms like LinkedIn, job seekers have more opportunities than ever. But as a result, they also have to work harder to find the right fit. Whereas once it might have been common to apply to 15 jobs, now it’s not unusual to have to apply to more than 150, he says.

“Now, you’re applying to a lot more things — you’re getting more cracks at the bat — but you’re just getting a lot more rejections,” Berger says.

That can feel demoralizing to job candidates, he adds, and it’s also hard on employers, who struggle to sift through a flood of applicants.

Meanwhile, Berger says that the number of jobs for recent graduates has fallen in recent years, and just having a degree is not as guaranteed a “meal ticket” as in the past.

“College graduates still get generally better-paying jobs than people who don’t have a college degree, and there’s a wider range of opportunities available to them when they’re looking for a job,” he says. “But if you’re looking at how much of a boost it provides, probably it’s smaller than it was in the past.”

Even so, Petersen says she is glad she got her degree, as she learned valuable skills in college that she put to use in her job. But she isn’t looking to go back for more higher education at this point.

Hear more about Petersen’s search, trends in hiring and what colleges can do to respond to this changing landscape on this week’s EdSurge Podcast.

Check out the episode on Spotify, Apple Podcasts, or on the player below.


If Smart Glasses Are Coming, What Will That Mean for Classrooms?

When Meta held its annual conference at the end of September, the tech giant announced it is betting that the next wave of computing will come in the form of smart eyeglasses.

Mark Zuckerberg, Meta’s founder and CEO, held up what he described as the first working prototype of Orion, which lets wearers see both the physical world and a computer display hovering in their field of vision.

“They’re not a headset,” he said on stage as he announced the device, which looked like a set of unusually chunky eyeglasses. “This is the physical world with holograms overlaid on it.”

For educators, this might not come as welcome news.

After all, one of the hottest topics in edtech these days is the growing practice of banning smartphones in schools, after teachers have reported that the devices distract students from classroom activities and socializing in person with others. And a growing body of research, popularized by the Jonathan Haidt book “The Anxious Generation,” argues that smartphone and social media use harms the mental health of teenagers.

When it’s proving hard enough to regulate the appropriate use of smartphones, what will it be like to manage a rush of kids wearing computers on their faces?

Some edtech experts see upsides, though, when the technology is ready to be used for educational activities.

The idea of using VR headsets to enter an educational metaverse — the last big idea Meta was touting when it changed its corporate name from Facebook three years ago — hasn’t caught on widely, in part because getting a classroom full of students fitted with headsets and holding controllers can be difficult for teachers (not to mention the expense of obtaining all that gear). But if smart glasses become cheap enough that a cart can be wheeled in with enough pairs for each student, so they can all do some activity together that blends the virtual world with in-person interactions, they could be a better fit.

“Augmented reality allows for more sharing and collaborative work than VR,” says Maya Georgieva, who runs an innovation center for VR and AR at The New School in New York City. “Lots of these augmented reality applications build on the notion of active learning and experiential learning naturally.”

And there is some initial research that has found that augmented reality experiences in education can lead to improvements in learning outcomes since, as one recent research paper put it, “they transform the learning process into a full-body experience.”

Cheating Glasses?

The Orion glasses that Zuckerberg previewed last week are not ready for prime time — in fact the Meta CEO said they won’t be released to the general public until 2027.

(EdSurge receives philanthropic support from the Chan-Zuckerberg Initiative, which is co-owned by Meta’s CEO. Learn more about EdSurge ethics and policies here and supporters here.)

But the company already sells smart eyeglasses, made in partnership with sunglass-maker Ray-Ban, that now retail for around $300. And other companies make similar products as well.

These gadgets, which have been on the market for a couple of years in some form, don’t have a display. But they do have a small built-in computer, a camera, a microphone and speakers. And recent advances in AI mean that newer models can serve as a talking version of a chatbot that users can access when they’re away from their computer or smartphone.

While so far the number of students who own smart glasses appears low, there have already been some reports of students using smart glasses to try to cheat.

This year in Tokyo, for instance, an 18-year-old allegedly used smart glasses to try to cheat on a university entrance exam. He apparently took pictures of his exam questions, posted them online during the test, and users on X, formerly Twitter, gave him the answers (which he could presumably hear read to him on his smart glasses). He was detected and his test scores were invalidated.

Meanwhile, students are sharing videos on TikTok where they explain how to use smart glasses to cheat, even low-end models that have few “smart” features.

“Using these blue light smart glasses on a test would be absolutely diabolical,” says one TikTok user in a video describing a pair of glasses that simply pairs with a smartphone via Bluetooth and costs only about $30. “They look like regular glasses, but they have speakers and microphones in them so you can cheat on a test. So just prerecord your test or your answers or watch a video while you’re at the test and just listen to it and no one can tell that you’re looking or listening to anything.”

On Reddit discussions, professors have been wondering whether this technology will make it even harder to know whether the work students are doing is their own, compounding the problems caused by ChatGPT and other new AI tools that have given students new ways to cheat on homework that are difficult to detect.

One commenter even suggested just giving up on doing tests and assignments and trying to find new ways of assessing student knowledge. “I think we have too many assessments that have limited benefit and no one here wants to run a police state to check if students actually did what they say they did,” the user wrote. “I would appreciate if anyone has a functional viable alternative to the current standard. The old way will benefit the well off and dishonest, while the underprivileged and moral will suffer (not that this is new either).”

Some of the school and state policies that ban smartphones might also apply to these new smart glasses. A state law in Florida, for instance, restricts the use of “wireless communication devices,” which could include glasses, watches, or any new gadget that gets invented that connects electronically.

“I would compare it very much to when smartphones really came on the scene and became a regular part of our everyday lives,” says Kyle Bowen, a longtime edtech expert who is now deputy chief information officer at Arizona State University, noting that these glasses might impact a range of activities if they catch on, including education.

There could be upsides in college classrooms, he predicts.

The benefit he sees for smart glasses is the pairing of AI and the devices, so that students might be able to get real-time feedback about, say, a lab exercise by asking the chatbot to weigh in on what it sees through the camera of the glasses as students go about the task.


Looking Back on the Long, Bumpy Rise of Online College Courses

When Robert Ubell first applied for a job at a university's online program back in the late ’90s, he had no experience with online education. But then again, hardly anyone else did either.

First of all, the web was still relatively new back then (something like the way AI chatbots are new today), and only a few colleges and universities were even trying to deliver courses on it. Ubell’s experience was in academic publishing, and he had recently finished a stint as the American publisher of Nature magazine and was looking for something different. He happened to have some friends at Stanford University who had shown him what the university was doing using the web to train workers at local factories and high-tech businesses, and he was intrigued by the potential.

So when he saw that Stevens Institute of Technology had an opening to build online programs, he applied, citing the weekend he spent observing Stanford’s program.

“That was my only background, my only experience,” he says, “and I got the job.”

And as at many college campuses at the time, Ubell faced resistance from the faculty.

“Professors were totally opposed,” he says, fearing that the quality would never be as good as in-person teaching.

The story of how higher ed went from a reluctant innovator to today — when more than half of American college students take at least one online course — offers plenty of lessons for how to try to bring new teaching practices to colleges.

One big challenge that has long faced online learning is who will pay the costs of building something new, like a virtual campus.

Ubell points to philanthropic foundations as key to helping many colleges, including Stevens, take their first steps into online offerings.

And it turns out that the most successful teachers in the new online format weren’t ones who were the best with computers or the most techy, says Frank Mayadas, who spent 17 years at the Alfred P. Sloan Foundation giving out grants hoping to spark adoptions of online learning.

“It was the faculty who had a great conviction to be good teachers who were going to be good no matter how they did it,” says Mayadas. “If they were good in the classroom, they were usually good online.”

We dig into the bumpy history of online higher education on this week’s EdSurge Podcast. And we hear what advice online pioneers have for those trying the latest classroom innovations.

Check out the episode on Spotify, Apple Podcasts, or on the player below.


Inside an Effort to Build an AI Assistant for Designing Course Materials

There’s a push among AI developers to create an AI tutor, and some see that as a key use case for tools like ChatGPT. But one longtime edtech expert sees an even better fit for new AI chatbots in education: helping educators design course materials for their students.

So all year Michael Feldstein has been leading a project to build an AI assistant that’s focused on learning design.

After all, these days colleges and other education institutions are hiring a growing number of human instructional designers to help create or improve teaching materials — especially as colleges have developed more online classes and programs. And people in those roles follow a playbook for helping subject-matter experts (the teachers they work with) organize their material into a series of compelling learning activities that will get students the required knowledge and skills on a given subject. Feldstein thinks new AI chatbots might be uniquely suited to guiding instructors through the early stages of that learning-design process.

He calls his system the AI Learning Design Assistant, or ALDA. And for months he has been leading a series of workshops through which more than 70 educators have tried versions of the tool and given feedback. He says he’s built a new version of the system about every month for the past five months, incorporating the input he’s received. He argues that if AI could serve as an effective instructional design assistant, it could help colleges significantly reduce the time it takes to create courses.

Feldstein is not completely convinced it will work, though, so he says he has invited plenty of people to test it who are skeptical of the idea.

“The question is, can AI do that?” he says. “Can we create an AI learning design assistant that interviews the human educator, asks the questions and gathers the information that the educator has in their heads about the important elements of the teaching interaction and then generates a first draft?”

EdSurge has been checking in with Feldstein over the past few months as he’s gone through this design process. And he’s shared what has gone well — and where early ideas fell flat. You can hear highlights of those conversations on this week’s EdSurge Podcast.

Even if it turns out that AI isn’t a fit to help build courses, Feldstein says the project is yielding lessons about where generative AI tools can help educators do their jobs better.

Check it out on Spotify, Apple Podcasts, or on the player below.


To Address the ‘Homework Gap,’ Is It Time to Revamp Federal Connectivity Programs?

One of the lessons of the COVID-19 pandemic was that many families didn’t have reliable internet access at home. As schools closed and classes moved online, educators rushed to improvise solutions for families without robust connections, setting up mobile Wi-Fi access points in school buses, sending portable hot spots home with those who needed them, and more.

And even before the pandemic, educators were working to close the “homework gap,” the divide between students who can easily log on at home to access critical school materials and those who lack reliable home internet.

Now that schools are back open and pandemic relief funds are expiring, there’s a risk this gap will quickly widen unless policymakers take a fresh look at the nation’s connectivity. And the gap disproportionately affects students of color and those in underserved communities.

That’s the argument made by Nicol Turner Lee, director of the Brookings Institution’s Center for Technology Innovation, in her new book, “Digitally Invisible: How the Internet Is Creating the New Underclass.”

“The truth is that most of these programs created during the pandemic relied on philanthropic and private sector support and continue to do so,” she writes of efforts to make sure students have online access for schoolwork. She calls for new federal legislation to “make these programs less vulnerable to political changes.”

The largest federal program offering support for school districts and libraries for internet connections, the E-rate, was created nearly 30 years ago. Back then much of today’s crucial technology for living and learning had not yet been invented — including smartphones, social media and AI chatbots. “It's been too long that we've kept these same policies in place,” Turner Lee told EdSurge. “We need ways we can guarantee support to schools for the type of infrastructure they need.”

EdSurge connected with Turner Lee for this week’s EdSurge Podcast. The sociologist shared her experiences traveling around the country — to stops including Marion, Alabama; West Phoenix, Arizona; and Hartford, Connecticut — asking people to share how they get connected and the challenges to digital access they face.

Check it out on Spotify, Apple Podcasts, or on the player below.


How Rising Higher Ed Costs Change Student Attitudes About College

ST. PAUL, Minn. — At the end of each school year at Central High School, seniors grab a paint pen and write their post-graduation plans on a glass wall outside the counseling office.

For many, that means announcing what college they’ve enrolled in. But the goal is to celebrate whatever path students are choosing, whether at a college or not.

“We have a few people that are going to trade school, we have a few people that are going to the military, a few people who wrote ‘still deciding,’” said Lisa Beckham, a staffer for the counseling center, as she helped hand out markers in May as the school year was winding down. Others, she said, are heading straight to a job.

Talking to the students as they signed, it was clear that one factor played an outsized role in the choice: the high cost of college.

“I’m thinking about going to college in California, and my grandparents all went there for a hundred dollars a semester and went into pretty low-paying jobs, but didn't spend years in debt because it was easy to go to college,” said Maya Shapiro, a junior who was there watching the seniors write up their plans. “So now I think it is only worth going to college if you're going to get a job that's going to pay for your college tuition eventually, so if you’re going to a job in English or history you might not find a job that’s going to pay that off.”

When I told her I was an English major back in my own college years, she quickly said, “I’m sorry.”

Even students going to some of the most well-known colleges are mindful of cost.

Harlow Tong, who was recruited by Harvard University to run track, said he had planned to go to the University of Minnesota and is still processing his decision to join the Ivy League.

“After the decision it really hit me that it's really an investment, and every year it feels like it's getting less and less worth the cost,” he said.

A new book lays out the changing forces shaping what students are choosing after high school, and argues for a change in the popular narrative around higher education.

The book is called “Rethinking College,” by longtime journalist and Los Angeles Times opinion writer Karin Klein. She calls for an end to “degree inflation,” where jobs require a college degree even if someone without a degree could do the job just as well. And she advocates for more high school graduates to take gap years to find out what they want to do before enrolling in college, or to seek out apprenticeships in fields that may not need college.

But she admits the issue is complicated. She said one of her own daughters, who is now 26, would have benefited from a gap year. “The problem was the cost was a major factor,” Klein told me. “She was offered huge financial aid by a very good school, and I said, ‘We don’t know if you take a gap year if that offer is going to be on the table. And I can’t afford this school without that offer.’”

Hear more from Klein, including about programs she sees as models for new post-grad options, as well as from students at Central High School, on this week’s EdSurge Podcast. Check it out on Spotify, Apple Podcasts, or on the player below. It’s the latest episode of our Doubting College podcast series.


How Is Axim Collaborative Spending $800 Million From the Sale of EdX?

One of the country’s richest nonprofits focused on online education has been giving out grants for more than a year. But so far, the group, known as Axim Collaborative, has done so slowly — and pretty quietly.

“There has been little buzz about them in digital learning circles,” says Russ Poulin, executive director of WCET, a nonprofit focused on digital learning in higher education. “They are not absent from the conversation, but their name is not raised very often.”

Late last month, an article on the online course review site Class Central put it more starkly, calling the promise of the nonprofit “hollow.” The op-ed, by longtime online education watcher Dhawal Shah, noted that according to the group’s most recent tax return, Axim is sitting on $735 million and had expenses of just $9 million in tax year 2023, with $15 million in revenue from investment income. “Instead of being an innovator, Axim Collaborative seems to be a non-entity in the edtech space, its promises of innovation and equity advancement largely unfulfilled,” Shah wrote.

The group was formed with the money made when Harvard University and MIT sold their edX online platform to for-profit company 2U in 2021 for about $800 million. At the time many online learning leaders criticized the move, since edX had long touted its nonprofit status as differentiating it from competitors like Coursera. The purchase did not end up working out as planned for 2U, which this summer filed for bankruptcy.

So what is Axim investing in? And what are its future plans?

EdSurge reached out to Axim’s CEO, Stephanie Khurana, to get an update.

Not surprisingly, she pushed back on the idea that the group is not doing much.

“We’ve launched 18 partnerships over the past year,” she says, noting that many grants Axim has awarded were issued since its most recent tax return was filed. “It’s a start, and it’s seeding a lot of innovations. And that to me is very powerful.”

One of the projects she says she is most proud of is Axim’s work with HBCUv, a collaboration by several historically Black colleges to create a shared technology platform and framework to share online courses across their campuses. While money was part of that, Khurana says she is also proud of the work her group did helping set up a course-sharing framework. Axim also plans to help with “incorporating student success metrics in the platform itself,” she says, “so people can see where they might be able to support students with different kinds of advising and different kinds of student supports.”

The example embodies the group’s philosophy of trying to provide expertise and convening power, rather than just cash, to help promising ideas scale to support underserved learners in higher education.

Listening Tour

When EdSurge talked with Khurana last year, she stressed that her first step would be to listen and learn across the online learning community to see where the group could best make a difference.

One thing that struck her as she did that, she says, is “hearing what barriers students are facing, and what's keeping them from persisting through their programs and finding jobs that match with their skills and being able to actually realize better outcomes.”

Grant amounts the group has given out so far range from around $500,000 for what she called “demonstration projects” to as much as $3 million.

Artificial intelligence has emerged as a key focus of Axim’s work, though Khurana says the group is treading gingerly.

“We are looking very carefully at how and where AI is beneficial, and where it might be problematic, especially for these underserved learners,” she says. “And so trying to be clear-eyed about what those possibilities are, and then bring to bear the most promising opportunities for the students and institutions that we're supporting.”

One specific AI project the group has supported is a collaboration between Axim, Campus Evolve, University of Central Florida and Indiana Tech to explore research-based approaches to using AI to improve student advising. “They're developing an AI tool to have a student-facing approach to understanding, ‘What are my academic resources? What are career-based resources?,’” she says. “A lot of times those are hard to discern.”

Another key piece of Axim’s work involves maintaining an existing system rather than starting new ones. The Axim Collaborative manages the Open edX platform, the open-source system that hosts edX courses and can also be used by any institution with the tech know-how and the computer servers to run it. The platform is used by thousands of colleges and organizations around the world, including a growing number of governments, which use it to offer online courses.

Anant Agarwal, who helped found edX and now works at 2U to coordinate its use, is also on a technical committee for Open edX.

He says the structure of supporting Open edX through Axim is modeled on the way the Linux open-source operating system is managed.

While edX continues to rely on the platform, the software is community-run. “There has to be somebody that maintains the repositories, maintains the release schedule and provides funding for certain projects,” Agarwal says. And that group is now Axim.

When the war in Ukraine broke out, Agarwal says, the country “turned on a dime and the universities and schools started offering courses on Open edX.”

Poulin, of WCET, says that it’s too early to say whether Axim’s model is working.

“While their profile and impact may not be great to this point, I am willing to give startups some runway time to determine if they will take off,” he says, noting that “Axim is, essentially, still a startup.”

His advice: “A creative, philanthropic organization should take some risks if they are working in the ‘innovation’ sphere. We learn as much from failures as successes.”

For Khurana, Axim’s CEO, the goal is not to find a magic answer to deep-seated problems facing higher education.

“I know some people want something that will be a silver bullet,” she says. “And I think it's just hard to come by in a space where there's a lot of different ways to solve problems. Starting with people on the ground who are doing the work — [with] humility — is probably one of the best ways to seed innovations and to start.”


When the Teaching Assistant Is an AI ‘Twin’ of the Professor

Two instructors at Vilnius University in Lithuania brought in some unusual teaching assistants earlier this year: AI chatbot versions of themselves.

The instructors — Paul Jurcys and Goda Strikaitė-Latušinskaja — created AI chatbots trained only on academic publications, PowerPoint slides and other teaching materials that they had created over the years. And they called these chatbots “AI Knowledge Twins,” dubbing one Paul AI and the other Goda AI.

They told their students to take any questions they had during class or while doing their homework to the bots first before approaching the human instructors. The idea wasn’t to discourage asking questions, but rather to nudge students to try out the chatbot doubles.


“We introduced them as our assistants — as our research assistants that help people interact with our knowledge in a new and unique way,” says Jurcys.

Experts in artificial intelligence have for years experimented with the idea of creating chatbots that can fill this support role in classrooms. With the rise of ChatGPT and other generative AI tools, there’s a new push to try robot TAs.

“From a faculty perspective, especially someone who is overwhelmed with teaching and needs a teaching assistant, that's very attractive to them — then they can focus on research and not focus on teaching,” says Marc Watkins, a lecturer of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Teachers of Writing.

But just because Watkins thought some faculty would like it doesn’t mean he thinks it’s a good idea.

“That's exactly why it's so dangerous too, because it basically offloads this sort of human relationships that we're trying to develop with our students and between teachers and students to an algorithm,” he says.

On this week’s EdSurge Podcast, we hear from these professors about how the experiment went — how it changed classroom discussion but sometimes caused distraction. A student in the class, Maria Ignacia, also shares her view on what it was like to have chatbot TAs.

And we listen in as Jurcys asks his chatbot questions — and admits the bot puts things a bit differently than he would.

Listen to the episode on Spotify, Apple Podcasts, or on the player on this page.


Not All ‘Free College’ Programs Spark Increased Enrollments or More Degrees

The premise of “free college” programs popping up around the country in recent years is that bringing the price of higher education down to nearly nothing will spur more students to enroll and earn degrees.

But is that what actually happens?

David Monaghan, an associate professor of sociology at Shippensburg University of Pennsylvania, has been digging into that question in a series of recent research studies. And the results indicate that not all of these free college programs have the intended effect — and that how a program is set up can make a big difference.

In a working paper the professor co-authored that was released last month, for instance, Monaghan compared two free college programs in Pennsylvania to dig into their outcomes.

One of the programs is the Morgan Success Scholarship at Lehigh Carbon Community College, which is available to students at Tamaqua Area High School who enroll right after completing their high school degree. Qualifying students are guaranteed fully paid tuition, with the program paying any gap left after the student applies for other financial aid and scholarships (a model known as a “last dollar, tuition-only guarantee”).

The other is the Community College of Philadelphia’s 50th Anniversary Scholars Program, which is available to students who graduate from a high school in Philadelphia and meet other merit criteria. It is also a “last dollar” program that covers any tuition and fees not paid from other sources. The students must enroll immediately after high school graduation, have a low enough income to qualify for a federal Pell scholarship, file their application for federal financial aid by a set date and enroll in at least six credits at the college.

The Morgan Success Scholarship seemed to work largely as its designers hoped. The year after the program started, the rate of college-going at Tamaqua Area High School jumped from 86 percent to 94 percent, and college-going increased another percentage point the following year. And the number of students graduating from Lehigh Carbon Community College with a two-year degree increased after the program was created.

But something else happened that wasn’t by design. The free-college program appears to have led some students who would have enrolled in a four-year college to instead start at the two-year college — where they may or may not end up going on to a four-year institution. There is a chance, then, that the program may end up keeping some students from finishing a four-year degree. “On balance, the program expands access to postsecondary education more than it diverts students away from four-year degrees, though it does appear to do this as well,” the paper asserts.

The free-college program at Community College of Philadelphia, meanwhile, didn’t seem to move the needle much at all.

“I expected to see an enrollment boost, and I didn’t even see that,” says Monaghan.

In other words, it isn’t even clear from the data that the free college effort sparked any increase in enrollment at the college.

The reason, he says, may be that the leaders of the program did not do enough to spread awareness about the option, and about what it takes to apply. Since the program was open to all high schools in the city, doing that communication was more difficult than in the case of the other program they studied.

“Our analyses suggest that a tuition guarantee, by itself, will not necessarily have any impact,” he and his co-author wrote in their paper. “If a program falls in the forest and no one hears it, it will not shift enrollment patterns.”

Monaghan says that the findings show that more attention should be paid to the details of how free college programs work — especially since many of them are full of restrictions and require students to jump through a series of hoops to take advantage of them. That can be a lot to ask a 17- or 18-year-old finishing high school to navigate.

“We really overestimate what people are like at the end of high school,” and how savvy they’ll be about weighing the costs and benefits of higher education, he argues. “There hasn’t been enough research on free college programs in terms of how they are implemented and communicated,” he adds.

It’s worth noting, of course, that some free college programs do significantly increase enrollment. And that can create another unintended side effect: straining resources at two-year colleges.

That was the case in Massachusetts, where the MassReconnect program that launched in 2023 led more than 5,000 new students to enroll the first semester it was available, according to a report from the Massachusetts Department of Higher Education.

As a result, the state’s 15 community colleges have struggled to hire enough staff — including adjunct instructors — to keep up with the new demand.

What did that program do to spark so much interest? Unlike the programs studied in Pennsylvania, MassReconnect is available not just to people freshly graduating from high school, but to anyone over 25 years old — a much larger pool of possible takers.

Another working paper by Monaghan, which reviewed as much available research on free college programs as he could find, found that their impacts vary widely.

And that may be the biggest lesson: For free college programs, the devil really is in the details of how they are set up and communicated.

© Robert Reppert / Shutterstock


Should Educators Put Disclosures on Teaching Materials When They Use AI?

Many teachers and professors are spending time this summer experimenting with AI tools to help them prepare slide presentations, craft tests and homework questions, and more. That’s in part because of a huge batch of new tools and updated features that incorporate ChatGPT, which companies have released in recent weeks.

As more instructors experiment with using generative AI to make teaching materials, an important question bubbles up. Should they disclose that to students?

It’s a fair question given the widespread concern in the field about students using AI to write their essays or bots to do their homework for them. If students are required to make clear when and how they’re using AI tools, should educators be too?

When Marc Watkins heads back into the classroom this fall to teach a digital media studies course, he plans to make clear to students how he’s now using AI behind the scenes in preparing for classes. Watkins is a lecturer of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Teachers of Writing, an optional program for faculty.

“We need to be open and honest and transparent if we’re using AI,” he says. “I think it’s important to show them how to do this, and how to model this behavior going forward.”

While it may seem logical for teachers and professors to clearly disclose when they use AI to develop instructional materials, just as they are asking students to do in assignments, Watkins points out that it’s not as simple as it might seem. At colleges and universities, there's a culture of professors grabbing materials from the web without always citing them. And he says K-12 teachers frequently use materials from a range of sources including curriculum and textbooks from their schools and districts, resources they’ve gotten from colleagues or found on websites, and materials they’ve purchased from marketplaces such as Teachers Pay Teachers. But teachers rarely share with students where these materials come from.

Watkins says that a few months ago, when he saw a demo of a new feature in a popular learning management system that uses AI to help make materials with one click, he asked a company official whether they could add a button that would automatically watermark when AI is used to make that clear to students.

The company wasn’t receptive, though, he says: “The impression I've gotten from the developers — and this is what's so maddening about this whole situation — is they basically are like, well, ‘Who cares about that?’”

Many educators seem to agree: In a recent survey conducted by Education Week, about 80 percent of the K-12 teachers who responded said it isn’t necessary to tell students and parents when they use AI to plan lessons, and most said the same applied to designing assessments and tracking behavior. In open-ended answers, some educators said they see it as a tool akin to a calculator, or like using content from a textbook.

But many experts say it depends on what a teacher is doing with AI. For example, an educator may decide to skip a disclosure when they do something like use a chatbot to improve the draft of a text or slide, but they may want to make it clear if they use AI to do something like help grade assignments.

So as teachers are learning to use generative AI tools themselves, they’re also wrestling with when and how to communicate what they’re trying.

Leading By Example

For Alana Winnick, educational technology director at Pocantico Hills Central School District in Sleepy Hollow, New York, it’s important to make it clear to colleagues when she uses generative AI in a way that is new — and which people may not even realize is possible.

For instance, when she first started using the technology to help her compose email messages to staff members, she included a line at the end stating: “Written in collaboration with artificial intelligence.” That’s because she had turned to an AI chatbot to ask it for ideas to make her message “more creative and engaging,” she explains, and then she “tweaked” the result to make the message her own. She imagines teachers might use AI in the same way to create assignments or lesson plans. “No matter what, the thoughts need to start with the human user and end with the human user,” she stresses.

But Winnick, who wrote a book on AI in education called “The Generative Age: Artificial Intelligence and the Future of Education” and hosts a podcast by the same name, thinks putting in that disclosure note is temporary, not some fundamental ethical requirement, since she thinks this kind of AI use will become routine. “I don’t think [that] 10 years from now you’ll have to do that,” she says. “I did it to raise awareness and normalize [it] and encourage it — and say, ‘It’s ok.’”

To Jane Rosenzweig, director of the Harvard College Writing Center at Harvard University, whether or not to add a disclosure would depend on the way a teacher is using AI.


“If an instructor was to use ChatGPT to generate writing feedback, I would absolutely expect them to tell students they are doing that,” she says. After all, the goal of any writing instruction, she notes, is to help “two human beings communicate with each other.” When she grades a student paper, Rosenzweig says she assumes the text was written by the student unless otherwise noted, and she imagines that her students expect any feedback they get to be from the human instructor, unless they are told otherwise.

When EdSurge asked readers of our higher ed newsletter whether teachers and professors should disclose when they’re using AI to create instructional materials, a few replied that they saw doing so as important, both as a teachable moment for students and for themselves.

“If we're using it simply to help with brainstorming, then it might not be necessary,” said Katie Datko, director of distance learning and instructional technology at Mt. San Antonio College. “But if we're using it as a co-creator of content, then we should apply the developing norms for citing AI-generated content.”

Seeking Policy Guidance

Since the release of ChatGPT, many schools and colleges have rushed to create policies on the appropriate use of AI.

But most of those policies don’t address the question of whether educators should tell students how they’re using new generative AI tools, says Pat Yongpradit, chief academic officer for Code.org and the leader of TeachAI, a consortium of several education groups working to develop and share guidance for educators about AI. (EdSurge is an independent newsroom that shares a parent organization with ISTE, which is involved in the consortium. Learn more about EdSurge ethics and policies here and supporters here.)

A toolkit for schools released by TeachAI recommends: “If a teacher or student uses an AI system, its use must be disclosed and explained.”

But Yongpradit says that his personal view is that “it depends” on what kind of AI use is involved. If AI is just helping to write an email, he explains, or even part of a lesson plan, that might not require disclosure. But there are other activities he says are more core to teaching where disclosure should be made, like when AI grading tools are used.

Even if an educator decides to cite an AI chatbot, though, the mechanics can be tricky, Yongpradit says. While there are major organizations including the Modern Language Association and the American Psychological Association that have issued guidelines on citing generative AI, he says the approaches remain clunky.

“That’s like pouring new wine into old wineskins,” he says, “because it takes a past paradigm for taking and citing source material and puts it toward a tool that doesn’t work the same way. Stuff before involved humans and was static. AI is just weird to fit it in that model because AI is a tool, not a source.”

For instance, the output of an AI chatbot depends greatly on how a prompt is worded. And most chatbots give a slightly different answer every time, even if the same exact prompt is used.
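A tiny script makes that concrete. This is only an illustration of the behavior Yongpradit describes, assuming the OpenAI Python SDK (v1.x), an API key in the environment and an arbitrary model name; none of it comes from the citation guidelines themselves:

```python
# Illustration only: send the exact same prompt twice and print both replies.
# With nonzero sampling temperature, the wording will usually differ from run
# to run, which is part of what makes chatbot output awkward to cite as a
# stable source. The model name and prompt here are assumptions for the demo.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "In one sentence, should teachers disclose classroom AI use?"

for run in (1, 2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,      # sampling on: identical prompts, varying answers
    )
    print(f"Run {run}: {response.choices[0].message.content}")
```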

Yongpradit says that at a recent panel discussion he attended, an educator urged teachers to disclose AI use since they ask their students to do so, drawing cheers from students in attendance. But to Yongpradit, those situations are hardly equivalent.

“These are totally different things,” he says. “As a student, you’re submitting your thing as a grade to be evaluated. The teachers, they know how to do it. They’re just making their work more efficient.”

That said, “if the teacher is publishing it and putting it on Teachers Pay Teachers, then yes, they should disclose it,” he adds.

The important thing, he says, will be for states, districts and other educational institutions to develop policies of their own, so the rules of the road are clear.

“With a lack of guidance, you have a Wild West of expectations.”

© Leonardo Santtos / Shutterstock


An Education Chatbot Company Collapsed. Where Did the Student Data Go?

When Los Angeles Unified School District launched a districtwide AI chatbot nicknamed “Ed” in March, officials boasted that it represented a revolutionary new tool that was only possible thanks to generative AI — a personal assistant that could point each student to tailored resources and assignments and playfully nudge and encourage them to keep going.

But last month, just a few months after the fanfare of the public launch event, the district abruptly shut down its Ed chatbot after AllHere Education, the company it contracted to build the system, suddenly furloughed most of its staff, citing financial difficulties. The company had raised more than $12 million in venture capital, and its five-year contract with the LA district was worth about $6 million, about half of which the company had already been paid.

It’s not yet clear what happened: LAUSD officials declined interview requests from EdSurge, and officials from AllHere did not respond to requests for comment about the company’s future. A statement issued by the school district said “several educational technology companies are interested in acquiring” AllHere to continue its work, though nothing concrete has been announced.

A tech leader for the school district, which is the nation’s second-largest, told the Los Angeles Times that some information in the Ed system is still available to students and families, just not in chatbot form. But it was the chatbot that was touted as the key innovation, and it relied on human moderators at AllHere to monitor some of its output; those moderators are no longer actively working on the project.

Some edtech experts contacted by EdSurge say that the implosion of the cutting-edge AI tool offers lessons for other schools and colleges working to make use of generative AI. Most of those lessons, they say, center on a factor that is more difficult than many people realize: the challenges of corralling and safeguarding data.

An Ambitious Attempt to Link Systems

When AllHere gave EdSurge a demo of the Ed chatbot in March, back when the company seemed to be thriving and had recently been named to a Time magazine list of the “World’s Top Edtech Companies of 2024,” its leaders were most proud of how the chatbot cut across dozens of tech tools that the school system uses.

“The first job of Ed was, how do you create one unified learning space that brings together all the digital tools, and that eliminates the high number of clicks that otherwise the student would need to navigate through them all?” the company’s then-CEO, Joanna Smith-Griffin, said at the time. (The LAUSD statement said she is no longer with the company.)

Such data integration had not previously been a focus of the company, though. The company’s main expertise was making chatbots that were “designed to mimic real conversations, responding with empathy or humor depending on the student's needs in the moment on an individual level,” according to its website.

Michael Feldstein, a longtime edtech consultant, said that from the first time he heard about the Ed chatbot, he saw the project as too ambitious for a small startup to tackle.

“In order to do the kind of work that they were promising, they needed to gather information about students from many IT systems,” he said. “This is the well-known hard part of edtech.”

Feldstein guesses that building a chatbot that could seamlessly pull data from nearly every critical learning resource at a school, as announced at the splashy press conference in March, could cost 10 times what AllHere was being paid.

“There’s no evidence that they had experience as system integrators,” he said of AllHere. “It’s not clear that they had the expertise.”

In fact, according to The 74, the publication that first reported the implosion of AllHere, a former engineer at the company sent emails to leaders in the school district warning that the company was not handling student data according to privacy best practices. The engineer, Chris Whiteley, reportedly told state and district officials that the way the Ed chatbot handled student records put the data at risk of getting hacked. (The school district’s statement defends its privacy practices, saying: “Throughout the development of the Ed platform, Los Angeles Unified has closely reviewed the platform to ensure compliance with applicable privacy laws and regulations, as well as Los Angeles Unified’s own data security and privacy policies, and AllHere is contractually obligated to do the same.”)

LAUSD’s data systems have recently faced breaches that appear unrelated to the Ed chatbot project. Last month hackers claimed to be selling troves of millions of records from LAUSD on the dark web for $1,000. And hackers behind a breach of Snowflake, a data warehouse provider used by LAUSD, claimed to have snatched records of millions of students, including from the district. A more recent breach of Snowflake may have affected LAUSD or other tech companies it works with as well.

“LAUSD maintains an enormous amount of sensitive data. A breach of an integrated data system of LAUSD could affect a staggering number of individuals,” said Doug Levin, co-founder and national director of the K12 Security Information eXchange, in an email interview. He said he is waiting for the district to share more information about what happened. “I am mostly interested in understanding whether any of LAUSD’s edtech vendors were breached and — if so — if other customers of those vendors are at risk,” he said. “This would make it a national issue.”

Meanwhile, what happens to all the student data in the Ed chatbot?

According to the statement released by LAUSD: “Any student data belonging to the District and residing in the Ed platform will continue to be subject to the same privacy and data security protections, regardless of what happens to AllHere as a company.”

A copy of the contract between AllHere and LAUSD, obtained by EdSurge under a public records request, does indicate that all data from the project “will remain the exclusive property of LAUSD.” And the contract contains a provision stating that AllHere “shall delete a student’s covered information upon request of the district.”

Related document: Contract between LAUSD and AllHere Education.

Rob Nelson, executive director for academic technology and planning at the University of Pennsylvania, said the situation does create fresh risks, though.

“Are they taking appropriate technical steps to make sure that data is secure and there won’t be a breach or something intentional by an employee?” Nelson wondered.

Lessons Learned

James Wiley, a vice president at the education market research firm ListEdTech, said he would have advised AllHere to seek a partner with experience wrangling and managing data.

When he saw a copy of the contract between the school district and AllHere, he said, his reaction was, “Why did you sign up for this?” He added that “some of the data you would need to do this chatbot isn’t even called out in the contract.”

Wiley said that school officials may not have understood how hard it was to do the kind of data integration they were asking for. “I think a lot of times schools and colleges don’t understand how complex their data structure is,” he added. “And you’re assuming a vendor is going to come in and say, ‘It’s here and here.’” But he said it is never that simple.

“Building the Holy Grail of a data-informed, personalized achievement tool is a big job,” he added. “It’s a noble cause, but you have to realize what you have to do to get there.”

For him, the biggest lesson for other schools and colleges is to take a hard look at their data systems before launching a big AI project.

“It’s a cautionary tale,” he concluded. “AI is not going to be a silver bullet here. You’re still going to have to get your house in order before you bring AI in.”

To Nelson, of the University of Pennsylvania, the larger lesson in this unfolding saga is that it’s too soon in the development of generative AI tools to scale up one idea to a whole school district or college campus.

Instead of one multimillion-dollar bet, he said, “let’s invest $10,000 in five projects that are teacher-based, and then listen to what the teachers have to say about it and learn what these tools are going to do well.”

© Thomas Bethge / Shutterstock


As More AI Tools Emerge in Education, so Does Concern Among Teachers About Being Replaced

When ChatGPT and other new generative AI tools emerged in late 2022, the major concern for educators was cheating. After all, students quickly spread the word on TikTok and other social media platforms that with a few simple prompts, a chatbot could write an essay or answer a homework assignment in ways that would be hard for teachers to detect.

But these days, when it comes to AI, another concern has come into the spotlight: that the technology could lead to less human interaction in schools and colleges — and that school administrators could one day try to use it to replace teachers.

And it's not just educators who are worried; this is becoming an education policy issue.

Just last week, for instance, a bill sailed through both houses of the California state legislature that aims to make sure that courses at the state’s community colleges are taught by qualified humans, not AI bots.

Sabrina Cervantes, a Democratic member of the California State Assembly, who introduced the legislation, said in a statement that the goal of the bill is to “provide guardrails on the integration of AI in classrooms while ensuring that community college students are taught by human faculty.”

To be clear, no one appears to have actually proposed replacing professors at the state’s community colleges with ChatGPT or other generative AI tools. And even the bill’s leaders say they can imagine positive uses for AI in teaching, and the bill wouldn’t stop colleges from using generative AI to help with tasks like grading or creating educational materials.

But champions of the bill also say they have reason to worry about the possibility of AI replacing professors in the future. Earlier this year, for example, a dean at Boston University sparked concern among graduate workers who were on strike seeking higher wages when he listed AI as one possible strategy for handling course discussions and other classroom activities that were impacted by the strike. Officials at the university later clarified that they had no intention of replacing any graduate workers with AI software, though.


While California is the furthest along, it’s not the only state where such measures are being considered. In Minnesota, Rep. Dan Wolgamott, of the Democratic-Farmer-Labor Party, proposed a bill that would forbid campuses in the Minnesota State Colleges and Universities system from using AI “as the primary instructor for a credit-bearing course.” The measure has stalled for now.

Teachers in K-12 schools are also beginning to push for similar protections against AI replacing educators. The National Education Association, the country’s largest teachers union, recently put out a policy statement on the use of AI in education that stressed that human educators should “remain at the center of education.”

It’s a sign of the mixed but highly charged mood among many educators — who see both promise and potential threat in generative AI tech.

Careful Language

Even the education leaders pushing for measures to keep AI from displacing educators have gone out of their way to note that the technology could have beneficial applications in education. They're being cautious about the language they use to ensure they're not prohibiting the use of AI altogether.

The bill in California, for instance, faced initial pushback even from some supporters of the concept, out of worry about moving too soon to legislate the fast-changing technology of generative AI, says Wendy Brill-Wynkoop, president of the Faculty Association of California Community Colleges, whose group led the effort to draft the bill.

An early version of the bill explicitly stated that AI “may not be used to replace faculty for purposes of providing instruction to, and regular interaction with students in a course of instruction, and may only be used as a peripheral tool.”

Internal debate almost led leaders to spike the effort, she says. Then Brill-Wynkoop suggested a compromise: remove all explicit references to artificial intelligence from the bill’s language.

“We don’t even need the words AI in the bill, we just need to make sure humans are at the center,” she says. So the final language of the very brief proposed legislation reads: “This bill would explicitly require the instructor of record for a course of instruction to be a person who meets the above-described minimum qualifications to serve as a faculty member teaching credit instruction.”

“Our intent was not to put a giant brick wall in front of AI,” Brill-Wynkoop says. “That’s nuts. It’s a fast-moving train. We’re not against tech, but the question is ‘How do we use it thoughtfully?’”

And she admits that she doesn’t think there’s some “evil mastermind in Sacramento saying, ‘I want to get rid of these nasty faculty members.’” But, she adds, in California “education has been grossly underfunded for years, and with limited budgets, there are several tech companies right there that say, ‘How can we help you with your limited budgets by spurring efficiency.’”

Ethan Mollick, a University of Pennsylvania professor who has become a prominent voice on AI in education, wrote in his newsletter last month that he worries that many businesses and organizations are too focused on efficiency and downsizing as they rush to adopt AI technologies. Instead, he argues that leaders should be focused on finding ways to rethink how they do things to take advantage of tasks AI can do well.

He noted in his newsletter that even the companies building these new large language models haven’t yet figured out what real-world tasks they are best suited to do.

“I worry that the lesson of the Industrial Revolution is being lost in AI implementations at companies,” he wrote. “Any efficiency gains must be turned into cost savings, even before anyone in the organization figures out what AI is good for. It is as if, after getting access to the steam engine in the 1700s, every manufacturer decided to keep production and quality the same, and just fire staff in response to new-found efficiency, rather than building world-spanning companies by expanding their outputs.”

The professor wrote that his university’s new Generative AI Lab is trying to model the approach he’d like to see, where researchers work to explore evidence-based uses of AI and work to avoid what he called “downside risks,” meaning the concern that organizations might make ineffective use of AI while pushing out expert employees in the name of cutting costs. And he says the lab is committed to sharing what it learns.

Keeping Humans at the Center

AI Education Project, a nonprofit focused on AI literacy, surveyed more than 1,000 U.S. educators in 2023 about how they feel AI is influencing the world, and education more specifically. Asked to pick from a list of top concerns about AI, respondents most often chose the worry that the technology could lead to “a lack of human interaction.”

That could be in response to recent announcements by major AI developers — including ChatGPT creator OpenAI — about new versions of their tools that can respond to voice commands and see and respond to what students are inputting on their screens. Sal Khan, founder of Khan Academy, recently posted a video demo of him using a prototype of his organization’s chatbot Khanmigo, which has these features, to tutor his teenage son. The technology shown in the demo is not yet available, and is at least six months to a year away, according to Khan. Even so, the video went viral and sparked debate about whether any machine can fill in for a human in something as deeply personal as one-on-one tutoring.

In the meantime, many new features and products released in recent weeks focus on helping educators with administrative tasks or responsibilities like creating lesson plans and other classroom materials. And those are the kinds of behind-the-scenes uses of AI that students may never even know are happening.

That was clear in the exhibit hall of last week’s ISTE Live conference in Denver, which drew more than 15,000 educators and edtech leaders. (EdSurge is an independent newsroom that shares a parent organization with ISTE. Learn more about EdSurge ethics and policies here and supporters here.)

Tiny startups, tech giants and everything in between touted new features that use generative AI to support educators with a range of responsibilities, and some companies had tools to serve as a virtual classroom assistant.

Many teachers at the event weren’t actively worried about being replaced by bots.

“It’s not even on my radar, because what I bring to the classroom is something that AI cannot replicate,” said Lauren Reynolds, a third grade teacher at Riverwood Elementary School in Oklahoma City. “I have that human connection. I’m getting to know my kids on an individual basis. I’m reading more than just what they’re telling me.”

Christina Matasavage, a STEM teacher at Belton Preparatory Academy in South Carolina, said she thinks the COVID shutdowns and emergency pivots to distance learning proved that gadgets can’t step in and replace human instructors. “I think we figured out that teachers are very much needed when COVID happened and we went virtual. People figured out very [quickly] that we cannot be replaced” with tech.

© Bas Nastassia / Shutterstock


What If Banning Smartphones in Schools Is Just the Beginning?

The movement to keep smartphones out of schools is gaining momentum.

Just last week, the nation’s second-largest public school system, Los Angeles Unified School District, voted to ban smartphones starting in January, citing adverse health risks of social media for kids. And the U.S. Surgeon General, Vivek Murthy, published an op-ed in The New York Times calling for warning labels on social media systems, saying “the mental health crisis among young people is an emergency.”

But some longtime teachers say that while such moves are a step in the right direction, educators need to take a more active role in countering the negative effects of excessive social media use by students. Essentially, they should redesign assignments and instruction to teach mental focus, modeling how to read, write and research away from the constant interruptions of social media and app notifications.

That’s the view of Lee Underwood, a 12th grade AP English literature and composition teacher at Millikan High School in Long Beach, California, who was the teacher of the year for his public school system in 2022.

He’s been teaching since 2006, so he remembers a time before the invention of the iPhone, Instagram or TikTok. And he says he is concerned by the change in behavior among his students, which has intensified in recent years.

“There is a lethargy that didn't exist before,” he says. “The responses of students were quicker, sharper. There was more of a willingness to engage in our conversations, and we had dynamic conversations.”

He tried to keep up his teaching style, which he feels had been working, but responses from students were different. “The last three years, four years since COVID, my jokes that I tell in my classroom have not been landing,” he says. “And they're the same jokes.”

Underwood has been avidly reading popular books and articles about the impact of smartphones on today’s young people. For instance, he read the much-talked-about book by Jonathan Haidt, “The Anxious Generation,” that has helped spark many recent efforts by schools to do more to counter the consequences of smartphones and social media.

Some have countered Haidt’s arguments, however, by pointing out that while young people face growing mental health challenges, there is little scientific evidence that social media is causing those issues. And just last month on this podcast, Ellen Galinsky, author of a book on what brain science reveals about how best to teach teens, argued that banning social media might backfire, and that kids need to learn how to regulate smartphone use on their own to prepare them for the world beyond school.

“Evidence shows very, very clearly that the ‘just say no’ approach in adolescence — where there's a need for autonomy — does not work,” she said. “In the studies on smoking, it increased smoking.”

Yet Underwood argues that he has felt the impact of social media on his concentration and focus firsthand. And these days he’s changing what he does in the classroom, bringing in the techniques and strategies that helped him counter those negative effects in his own life.

And he has a strong reaction to Galinsky’s argument.

“We don't let kids smoke in school,” he points out. “Maybe some parts of the ‘just say no campaigns’ broadly didn't work, but then no one's allowing smoking in schools.”

His hope is that the school day can be reserved as a time where students know they can get away from the downsides of smartphone and social media use.

“That's six hours of a school day where you can show a student, bring them to a kind of homeostasis, where they can see what it would be like without having that constant distraction,” he argues.

Hear the full conversation, as well as examples of how he’s redesigned his lessons, on this week’s episode. Listen on Apple Podcasts, Overcast, Spotify, or wherever you listen to podcasts, or use the player on this page.

© autumnn / Shutterstock


Should College Become Part of High School?

Last year, when Jayla Arensberg was a sophomore at Burnsville High School near St. Paul, Minnesota, a teacher showed her a flier saying that a program at the school could save her $25,000 on college.

“I said, ‘I really need that,’” the student remembers.

She was interested in college, but worried that the cost could keep her from pursuing higher education. “College is insanely expensive,” she says.

So she applied and got accepted to the high school’s “Associate of Arts Degree Pathway,” which essentially turns junior and senior year of high school into a two-year college curriculum. All this year, Arensberg walked the halls of the same high school building and ate in the same cafeteria as before, but now most of her classes earned her college credit, and if she stays on track, she’ll get an associate degree at the same time she receives her high school diploma.

Her plan after graduation is to apply to the University of Minnesota’s main campus to major in psychology, entering halfway to her bachelor’s degree and thereby cutting out two years of paying for college.

The high school is one of a growing number around the country offering a so-called “postsecondary enrollment option,” where students can take college courses during the high school day and get college credit. In fact, the share of high school students taking at least one college course has risen to 34 percent, up from just 10 percent in 2010, according to data from the National Alliance of Concurrent Enrollment Partnerships.

But Burnsville’s program is unusual in offering a full two-year program within its building, rather than just isolated courses or transportation to nearby colleges for part of a day.

Students in the program take most of their courses together. “They really are cohorted like they would probably feel in a freshman dorm,” says Rebecca Akerson, who coordinates the Associate of Arts pathway program at the school. “They’ve gotten to know each other well. When you think about college, that’s what you’re thinking about.”

It’s a stark example of how the line between high school and college is blurring for more students. While such programs may help students who might not otherwise have been able to access college, they also raise questions about the purpose of high school, about what social opportunities might be lost, and about whether the trend pushes students to make decisions about their future careers at too young an age.

But college is not the only option that students can get a jump on exploring at this high school. The associate degree program is part of one of four career pathways that students can choose, pointing to careers in specialties like culinary arts, manufacturing and automotive technology.

In fact, officials have gone out of their way to highlight the variety of options, to try to attract greater diversity of students to whatever they might be interested in. For instance, the school’s “fabrication lab” — which once might have been called wood shop — is located adjacent to a high-traffic commons area, and glass walls allow anyone walking by to see what the students are doing.

“This was designed very specifically because engineering and fabrication have traditionally been a very white, male-dominated career field,” says Kathy Funston, director of strategic partnerships and pathways for the Burnsville school district. “We really did want our students of color and our females to be able to look through these glass walls and say, ‘That’s cool. I like that. Nobody’s getting dirty in there. I think I want to try that,’” Funston adds. “So it’s a way to help underrepresented populations see career areas and career fields that they would not have been exposed to either in their sphere of influence at home or at other classes. If you go to a lot of other schools these types of classes have been in a remote part of the school.”

Teachers at the school say that they work to communicate these career pathway programs early and often. That means the pathway options are a big part of the tour when middle school students look at the school, and posters featuring the four main career pathways, each with its signature color, adorn hallways throughout the building.

How is the program going? And how do students feel about these options at a time of growing skepticism about higher education?

This is the fifth episode of a podcast series we’re calling Doubting College, where we’re exploring: What happened to the public belief in college? And how is that shaping the choices young people are making about what to do after high school?

Listen to the episode on Apple Podcasts, Overcast, Spotify, YouTube or wherever you listen to podcasts, or use the player on this page.

© Photo by Jeffrey R. Young for EdSurge


Latest AI Announcements Mean Another Big Adjustment for Educators

Tech giants Google, Microsoft and OpenAI have unintentionally assigned educators around the world major homework for the summer: adjusting their assignments and teaching methods to adapt to a fresh batch of AI features that students will enter classrooms with in the fall.

Educators at both schools and colleges were already struggling to keep up with ChatGPT and other AI tools during this academic year, but a fresh round of announcements last month by major AI companies may require even greater adjustments by educators to preserve academic integrity and to accurately assess student learning, teaching experts say.

Meanwhile, educators also have scores of new edtech products to review that promise to save them time on lesson planning and administrative tasks thanks to AI.

One of the most significant changes was OpenAI’s announcement that it would make its latest generation of chatbot, which it dubbed GPT-4o, free to anyone. Previously, only an older version of the tool, GPT-3.5, was free, and people had to pay at least $20 a month to get access to the state-of-the-art model. The new model can also accept not just text, but spoken voice inputs and visual inputs, so that users can do things like share a still photo or image of their screen with the chatbot to get feedback.

“It’s a game-changing shift,” says Marc Watkins, a lecturer of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Teachers of Writing. Many educators who experimented with the previous free version of ChatGPT came away unimpressed, he says, but the new version will be a “huge wake-up call” for how powerful the technology is.

And now that students and professors can talk to these next-generation chatbots instead of just type, there’s fresh concern that the so-called “homework apocalypse” unleashed by earlier versions of ChatGPT will get worse, as professors may find it even harder to design assignments that students can’t just have these AI bots complete for them.

“I think that’s going to really challenge what it means to be an educator this fall,” Watkins adds, noting that the changes mean that professors and teachers may not only need to change the kind of assignments they give, but they may need to rethink how they deliver material as well now that students can use AI tools to do things like summarize lecture videos for them.

And education appears to be an area identified by tech companies as a “killer application” of AI chatbots, a use case that helps drive adoption of the technology. Several demos last month by OpenAI, Google and other companies homed in on educational uses of their latest chatbots. And just last week OpenAI unveiled a new partnership program aimed at colleges called ChatGPT Edu.

“Both Google and OpenAI are gunning for education,” says José Bowen, a longtime higher ed leader and consultant who co-wrote a new book called “Teaching with AI.” “They see this both as a great use case and also as a tremendous market.”

Changing Classes

Tech giants aren’t the only ones changing the equation for educators.

Many smaller companies have put out tools in recent months targeted at educational uses, and they are marketing them heavily on TikTok, Instagram and other social media platforms to students and teachers.

A company called Turbolearn, for instance, has pushed out a video on TikTok titled “Why I stopped taking notes during class,” which has been viewed more than 100,000 times. In it, a young woman says that she discovered a “trick” when she was a student at Harvard University. She describes opening up the company’s tool on her laptop during class and clicking a record button. “The software will automatically use your recording to make notes, flashcards and quiz questions,” she says in the promotional video.

New AI tools can make audio recordings of lectures and automatically create summaries and flashcards of the material. Some educators worry that it will keep students from paying attention.

While the company markets this as a way to free students so they can focus on listening in class, Watkins worries that skipping notetaking will mean students will tune out and not do the work of processing what they hear in a lecture.

Now that such tools are out there, Watkins suggests that professors look for more ways to do active learning in their classes, and to put more of what he called “intentional friction” in student learning so that students are forced to stop and participate or to reflect on what is being said.

“Try pausing your lecture and start having debates with your students — get into small group discussions,” he says. “Encourage students to do annotations — to read with pen or pencil or highlighter. We want to slow things down and make sure they’re pausing for a little while,” even as the advertisements for AI tools promise a way to make learning speedier and more efficient.

Slowing down is the advice that Bonni Stachowiak has for educators as well. Stachowiak, who is dean of teaching and learning at Vanguard University, points to recent advice by teaching guru James Lang to “slow walk” the use of AI in classrooms, by keeping in mind fundamental principles of teaching as educators experiment with new AI tools.

“I don’t mean resisting — I don’t think we should stick our head in the sand,” says Stachowiak. “But it’s OK to be slowly reflecting and slowly experimenting” with these new tools in classrooms, she adds. That’s especially true because keeping up with all the new AI announcements is not realistic considering all the other demands of teaching jobs.

The tools are coming fast, though.

“The maddening thing about all of this is that these tools are being deployed publicly in a grand experiment nobody asked for,” says Watkins, of the University of Mississippi. “And I know how hard it is for faculty to carve out time for anything outside of their workload.”

For that reason, he says college and school leaders need to be driving efforts to make more systematic changes in teaching and assessment. “We’re going to have to really dig in and start thinking about how we approach teaching and how students approach learning. It’s something that the entire university is going to have to think about.”

The new tools will likely mean new financial investments for schools and colleges as well.

“At some point AI is going to become the next big expense,” Bowen, the education consultant, told EdSurge.

Even though many tools are free at the moment, Bowen predicts these tools will end up costing colleges at a time when budgets are already tight.

Saving Time?

Plenty of the newest AI tools for education are aimed at educators, promising to save them time.

Several new products, for instance, allow teachers to use AI to quickly recraft worksheets, test questions and other teaching materials to change the reading level, so that a teacher could take an article from a newspaper and have it revised so that younger students can better understand it.

“They will literally rewrite your words to that audience or that purpose,” says Watkins.
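For readers curious how such a feature works under the hood, the core of it can be as simple as a single instruction to a chatbot model. The sketch below is not any vendor’s actual implementation; it assumes the OpenAI Python SDK (v1.x) and an illustrative model name:

```python
# A minimal sketch of a reading-level rewriting feature: ask a chat model to
# rewrite a passage for a target grade, keeping the facts intact. This is an
# assumed approach, not code from any of the products mentioned above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite_for_grade(text: str, grade: int) -> str:
    """Return `text` rewritten so a student at the given grade level can read it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": (f"Rewrite the user's text at a grade-{grade} reading "
                         "level. Keep all facts; simplify vocabulary and syntax.")},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

sample = "Photosynthesis converts light energy into chemical energy stored as glucose."
print(rewrite_for_grade(sample, grade=4))
```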

Such features are in several commercial products, as well as in free AI tools — just last month, the nonprofit Khan Academy announced that it would make its AI tools for teachers free to all educators.

“There’s good and bad with these things,” Watkins adds. On a positive note, such tools could greatly assist students with learning disabilities. “But the problem is when we tested this,” he adds, “it helped those students, but it got to the point where other students said, ‘I don’t have to read anything ever again,’ because the tool could also summarize and turn any text into a series of bullet points.”

Another popular feature of new AI services tries to personalize assignments by adapting educational materials to a student’s interests, says Dan Meyer, vice president of user growth at Amplify, a curriculum and assessment company, who writes a newsletter about teaching mathematics.

Meyer worries that such tools are being overhyped, and that they may have limited effectiveness in classrooms.

“You just can't take the same dull word problems that students are doing every day and change them all to be about baseball,” he says. “Kids will wind up hating baseball, not loving math.”

He summed up his view in a recent post he titled, “Generative AI is Best at Something Teachers Need Least.”

Meyer worries that many new products start with what generative AI can do and try to push out products based on that, rather than starting with what educators need and designing tools to address those challenges.

At the college level, Bowen sees potential wins for faculty in the near future, if, say, tools like learning management systems add AI features that can do tasks like build a course website after the instructor feeds it a syllabus. “That’s going to be a real time saver for faculty,” he predicts.

But teaching experts note that the biggest challenges will be finding ways to keep students learning while also preparing them for a workplace that seems to be rapidly adopting AI tools.

Bowen hopes that colleges can find a way to focus on teaching students the skills that make us most human, as AI takes over routine tasks in many white-collar industries.

“Maybe,” he says, “this time we’ll realize that the liberal arts really do matter.”

© Zapp2Photo / Shutterstock
