How Rising Higher Ed Costs Change Student Attitudes About College

ST. PAUL, Minn. — At the end of each school year at Central High School, seniors grab a paint pen and write their post-graduation plans on a glass wall outside the counseling office.

For many, that means announcing what college they’ve enrolled in. But the goal is to celebrate whatever path students choose, whether or not it includes college.

“We have a few people that are going to trade school, we have a few people that are going to the military, a few people who wrote ‘still deciding,’” said Lisa Beckham, a staffer for the counseling center, as she helped hand out markers in May as the school year was winding down. Others, she said, are heading straight to a job.

Talking to the students as they signed, it was clear that one factor played an outsized role in the choice: the high cost of college.

“I’m thinking about going to college in California, and my grandparents all went there for a hundred dollars a semester and went into pretty low-paying jobs, but didn't spend years in debt because it was easy to go to college,” said Maya Shapiro, a junior who was there watching the seniors write up their plans. “So now I think it is only worth going to college if you're going to get a job that's going to pay for your college tuition eventually, so if you’re going to a job in English or history you might not find a job that’s going to pay that off.”

When I told her I was an English major back in my own college years, she quickly said, “I’m sorry.”

Even students going to some of the most well-known colleges are mindful of cost.

Harlow Tong, who was recruited by Harvard University to run track, said he had planned to go to the University of Minnesota and is still processing his decision to join the Ivy League.

“After the decision it really hit me that it's really an investment, and every year it feels like it's getting less and less worth the cost,” he said.

A new book lays out the changing forces shaping what students are choosing after high school, and argues for a change in the popular narrative around higher education.

The book is called “Rethinking College,” by longtime journalist and Los Angeles Times opinion writer Karin Klein. She calls for an end to “degree inflation,” where jobs require a college degree even if someone without a degree could do the job just as well. And she advocates for more high school graduates to take gap years to find out what they want to do before enrolling in college, or to seek out apprenticeships in fields that may not need college.

But she admits the issue is complicated. She said one of her own daughters, who is now 26, would have benefited from a gap year. “The problem was the cost was a major factor,” Klein told me. “She was offered huge financial aid by a very good school, and I said, ‘We don’t know if you take a gap year if that offer is going to be on the table. And I can’t afford this school without that offer.’”

Hear more from Klein, including about programs she sees as models for new post-grad options, as well as from students at Central High School, on this week’s EdSurge Podcast. Check it out on Spotify, Apple Podcasts, or on the player below. It’s the latest episode of our Doubting College podcast series.

© Photo by Jeffrey R. Young for EdSurge

How Is Axim Collaborative Spending $800 Million From the Sale of EdX?

One of the country’s richest nonprofits focused on online education has been giving out grants for more than a year. But so far, the group, known as Axim Collaborative, has done so slowly — and pretty quietly.

“There has been little buzz about them in digital learning circles,” says Russ Poulin, executive director of WCET, a nonprofit focused on digital learning in higher education. “They are not absent from the conversation, but their name is not raised very often.”

Late last month, an article on the online course review site Class Central put it more starkly, calling the promise of the nonprofit “hollow.” The op-ed, by longtime online education watcher Dhawal Shah, noted that according to the group’s most recent tax return, Axim is sitting on $735 million and had expenses of just $9 million in tax year 2023, with $15 million in revenue from investment income. “Instead of being an innovator, Axim Collaborative seems to be a non-entity in the edtech space, its promises of innovation and equity advancement largely unfulfilled,” Shah wrote.

The group was formed with the money made when Harvard University and MIT sold their edX online platform to for-profit company 2U in 2021 for about $800 million. At the time many online learning leaders criticized the move, since edX had long touted its nonprofit status as differentiating it from competitors like Coursera. The purchase did not end up working out as planned for 2U, which this summer filed for bankruptcy.

So what is Axim investing in? And what are its future plans?

EdSurge reached out to Axim’s CEO, Stephanie Khurana, to get an update.

Not surprisingly, she pushed back on the idea that the group is not doing much.

“We’ve launched 18 partnerships over the past year,” she says, noting that many grants Axim has awarded were issued since its most recent tax return was filed. “It’s a start, and it’s seeding a lot of innovations. And that to me is very powerful.”

One of the projects she says she is most proud of is Axim’s work with HBCUv, a collaboration by several historically Black colleges to create a shared technology platform and a framework for exchanging online courses across their campuses. While money was part of that, Khurana says she is also proud of the work her group did helping set up the course-sharing framework. Axim also plans to help with “incorporating student success metrics in the platform itself,” she says, “so people can see where they might be able to support students with different kinds of advising and different kinds of student supports.”

The example embodies the group’s philosophy of trying to provide expertise and convening power, rather than just cash, to help promising ideas scale to support underserved learners in higher education.

Listening Tour

When EdSurge talked with Khurana last year, she stressed that her first step would be to listen and learn across the online learning community to see where the group could best make a difference.

One thing that struck her as she did that, she says, is “hearing what barriers students are facing, and what's keeping them from persisting through their programs and finding jobs that match with their skills and being able to actually realize better outcomes.”

Grant amounts the group has given out so far range from around $500,000 for what she called “demonstration projects” to as much as $3 million.

Artificial intelligence has emerged as a key focus of Axim’s work, though Khurana says the group is treading gingerly.

“We are looking very carefully at how and where AI is beneficial, and where it might be problematic, especially for these underserved learners,” she says. “And so trying to be clear-eyed about what those possibilities are, and then bring to bear the most promising opportunities for the students and institutions that we're supporting.”

One specific AI project the group has supported is a collaboration between Axim, Campus Evolve, University of Central Florida and Indiana Tech to explore research-based approaches to using AI to improve student advising. “They're developing an AI tool to have a student-facing approach to understanding, ‘What are my academic resources? What are career-based resources?,’” she says. “A lot of times those are hard to discern.”

Another key piece of Axim’s work involves maintaining an existing system rather than starting new ones. The Axim Collaborative manages the Open edX platform, the open-source system that hosts edX courses and can also be used by any institution with the tech know-how and the computer servers to run it. The platform is used by thousands of colleges and organizations around the world, including a growing number of governments, to offer online courses.

Anant Agarwal, who helped found edX and now works at 2U to coordinate its use, is also on a technical committee for Open edX.

He says the structure of supporting Open edX through Axim is modeled on the way the Linux open-source operating system is managed.

While edX continues to rely on the platform, the software is community-run. “There has to be somebody that maintains the repositories, maintains the release schedule and provides funding for certain projects,” Agarwal says. And that group is now Axim.

When the war in Ukraine broke out, Agarwal says, the country “turned on a dime and the universities and schools started offering courses on Open edX.”

Poulin, of WCET, says that it’s too early to say whether Axim’s model is working.

“While their profile and impact may not be great to this point, I am willing to give startups some runway time to determine if they will take off,” he says, noting that “Axim is, essentially, still a startup.”

His advice: “A creative, philanthropic organization should take some risks if they are working in the ‘innovation’ sphere. We learn as much from failures as successes.”

For Khurana, Axim’s CEO, the goal is not to find a magic answer to deep-seated problems facing higher education.

“I know some people want something that will be a silver bullet,” she says. “And I think it's just hard to come by in a space where there's a lot of different ways to solve problems. Starting with people on the ground who are doing the work — [with] humility — is probably one of the best ways to seed innovations and to start.”

© Mojahid Mottakin / Shutterstock

When the Teaching Assistant Is an AI ‘Twin’ of the Professor

Two instructors at Vilnius University in Lithuania brought in some unusual teaching assistants earlier this year: AI chatbot versions of themselves.

The instructors — Paul Jurcys and Goda Strikaitė-Latušinskaja — created AI chatbots trained only on academic publications, PowerPoint slides and other teaching materials that they had created over the years. And they called these chatbots “AI Knowledge Twins,” dubbing one Paul AI and the other Goda AI.

They told their students to take any questions they had during class or while doing their homework to the bots first before approaching the human instructors. The idea wasn’t to discourage asking questions, but rather to nudge students to try out the chatbot doubles.

“We introduced them as our assistants — as our research assistants that help people interact with our knowledge in a new and unique way,” says Jurcys.

Experts in artificial intelligence have for years experimented with the idea of creating chatbots that can fill this support role in classrooms. With the rise of ChatGPT and other generative AI tools, there’s a new push to try robot TAs.

“From a faculty perspective, especially someone who is overwhelmed with teaching and needs a teaching assistant, that's very attractive to them — then they can focus on research and not focus on teaching,” says Marc Watkins, a lecturer of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Teachers of Writing.

But just because Watkins thought some faculty would like it doesn’t mean he thinks it’s a good idea.

“That's exactly why it's so dangerous too, because it basically offloads this sort of human relationships that we're trying to develop with our students and between teachers and students to an algorithm,” he says.

On this week’s EdSurge Podcast, we hear from these professors about how the experiment went — how it changed classroom discussion but sometimes caused distraction. A student in the class, Maria Ignacia, also shares her view on what it was like to have chatbot TAs.

And we listen in as Jurcys asks his chatbot questions — and admits the bot puts things a bit differently than he would.

Listen to the episode on Spotify, Apple Podcasts, or on the player on this page.

Not All ‘Free College’ Programs Spark Increased Enrollments or More Degrees

The premise of “free college” programs popping up around the country in recent years is that bringing the price of higher education down to nearly nothing will spur more students to enroll and earn degrees.

But is that what actually happens?

David Monaghan, an associate professor of sociology at Shippensburg University of Pennsylvania, has been digging into that question in a series of recent research studies. And the results indicate that not all of these free college programs have the intended effect — and that how a program is set up can make a big difference.

In a working paper released last month, for instance, Monaghan and a co-author compared two free college programs in Pennsylvania to dig into their outcomes.

One of the programs is the Morgan Success Scholarship at Lehigh Carbon Community College, which is available to students at Tamaqua Area High School who enroll right after finishing high school. Qualifying students are guaranteed fully paid tuition, with the program covering any gap left after the student applies for other financial aid and scholarships (a model known as a “last dollar, tuition-only guarantee”).

The other is the Community College of Philadelphia’s 50th Anniversary Scholars Program, which is available to students who graduate from a high school in Philadelphia and meet other merit criteria. It is also a “last dollar” program that covers any tuition and fees not paid from other sources. The students must enroll immediately after high school graduation, have a low enough income to qualify for a federal Pell Grant, file their application for federal financial aid by a set date and enroll in at least six credits at the college.

The Morgan Success Scholarship seemed to work largely as its designers hoped. The year after the program started, the rate of college-going at Tamaqua Area High School jumped from 86 percent to 94 percent, and college-going increased another percentage point the following year. And the number of students graduating from Lehigh Carbon Community College with a two-year degree increased after the program was created.

But something else happened that wasn’t by design. The free-college program appears to have led some students who would have enrolled in a four-year college to instead start at the two-year college — where they may or may not end up going on to a four-year institution. There is a chance, then, that the program may end up keeping some students from finishing a four-year degree. “On balance, the program expands access to postsecondary education more than it diverts students away from four-year degrees, though it does appear to do this as well,” the paper asserts.

The free-college program at Community College of Philadelphia, meanwhile, didn’t seem to move the needle much at all.

“I expected to see an enrollment boost, and I didn’t even see that,” says Monaghan.

In other words, it isn’t even clear from the data that the free college effort sparked any increase in enrollment at the college.

The reason, he says, may be that the leaders of the program did not do enough to spread awareness about the option, and about what it takes to apply. Since the program was open to all high schools in the city, doing that communication was more difficult than for the other program the researchers studied.

“Our analyses suggest that a tuition guarantee, by itself, will not necessarily have any impact,” he and his co-author wrote in their paper. “If a program falls in the forest and no one hears it, it will not shift enrollment patterns.”

Monaghan says that the findings show that more attention should be paid to the details of how free college programs work — especially since many of them are full of restrictions and require students to jump through a series of hoops to take advantage of them. That can be a lot to ask a 17- or 18-year-old finishing high school to navigate.

“We really overestimate what people are like at the end of high school,” and how savvy they’ll be about weighing the costs and benefits of higher education, he argues. “There hasn’t been enough research on free college programs in terms of how they are implemented and communicated,” he adds.

It’s worth noting, of course, that some free college programs do significantly increase enrollment. And that can create another unintended side effect: straining resources at two-year colleges.

That was the case in Massachusetts, where the MassReconnect program that launched in 2023 led more than 5,000 new students to enroll the first semester it was available, according to a report from the Massachusetts Department of Higher Education.

As a result, the state’s 15 community colleges have struggled to hire enough staff — including adjunct instructors — to keep up with the new demand.

What did that program do to spark so much interest? Unlike the programs studied in Pennsylvania, MassReconnect is not limited to people freshly graduating from high school; it is open to anyone over 25 years old, a much larger pool of possible takers.

Another working paper by Monaghan, which reviewed as much research on free college programs as he could find, found that their impacts vary widely.

And that may be the biggest lesson: For free college programs, the devil really is in the details of how they are set up and communicated.

© Robert Reppert / Shutterstock

Should Educators Put Disclosures on Teaching Materials When They Use AI?

Many teachers and professors are spending time this summer experimenting with AI tools to help them prepare slide presentations, craft tests and homework questions, and more. That’s in part because of a huge batch of new tools and updated features incorporating ChatGPT that companies have released in recent weeks.

As more instructors experiment with using generative AI to make teaching materials, an important question bubbles up: Should they disclose that to students?

It’s a fair question given the widespread concern in the field about students using AI to write their essays or bots to do their homework for them. If students are required to make clear when and how they’re using AI tools, should educators be required to do the same?

When Marc Watkins heads back into the classroom this fall to teach a digital media studies course, he plans to make clear to students how he’s now using AI behind the scenes in preparing for classes. Watkins is a lecturer of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Teachers of Writing, an optional program for faculty.

“We need to be open and honest and transparent if we’re using AI,” he says. “I think it’s important to show them how to do this, and how to model this behavior going forward.”

While it may seem logical for teachers and professors to clearly disclose when they use AI to develop instructional materials, just as they are asking students to do in assignments, Watkins points out that it’s not as simple as it might seem. At colleges and universities, there's a culture of professors grabbing materials from the web without always citing them. And he says K-12 teachers frequently use materials from a range of sources including curriculum and textbooks from their schools and districts, resources they’ve gotten from colleagues or found on websites, and materials they’ve purchased from marketplaces such as Teachers Pay Teachers. But teachers rarely share with students where these materials come from.

Watkins says that a few months ago, when he saw a demo of a new feature in a popular learning management system that uses AI to help make materials with one click, he asked a company official whether they could add a button that would automatically watermark AI-made materials, to make their origin clear to students.

The company wasn’t receptive, though, he says: “The impression I've gotten from the developers — and this is what's so maddening about this whole situation — is they basically are like, well, ‘Who cares about that?’”

Many educators seem to agree: In a recent survey conducted by Education Week, about 80 percent of the K-12 teachers who responded said it isn’t necessary to tell students and parents when they use AI to plan lessons, and most said the same applied to designing assessments and tracking behavior. In open-ended answers, some educators said they see it as a tool akin to a calculator, or like using content from a textbook.

But many experts say it depends on what a teacher is doing with AI. For example, an educator may decide to skip a disclosure when they do something like use a chatbot to improve the draft of a text or slide, but they may want to make it clear if they use AI to do something like help grade assignments.

So as teachers are learning to use generative AI tools themselves, they’re also wrestling with when and how to communicate what they’re trying.

Leading By Example

For Alana Winnick, educational technology director at Pocantico Hills Central School District in Sleepy Hollow, New York, it’s important to make it clear to colleagues when she uses generative AI in a way that is new — and which people may not even realize is possible.

For instance, when she first started using the technology to help her compose email messages to staff members, she included a line at the end stating: “Written in collaboration with artificial intelligence.” That’s because she had turned to an AI chatbot to ask it for ideas to make her message “more creative and engaging,” she explains, and then she “tweaked” the result to make the message her own. She imagines teachers might use AI in the same way to create assignments or lesson plans. “No matter what, the thoughts need to start with the human user and end with the human user,” she stresses.

But Winnick, who wrote a book on AI in education called “The Generative Age: Artificial Intelligence and the Future of Education” and hosts a podcast by the same name, sees that disclosure note as temporary rather than a fundamental ethical requirement, since she expects this kind of AI use to become routine. “I don’t think [that] 10 years from now you’ll have to do that,” she says. “I did it to raise awareness and normalize [it] and encourage it — and say, ‘It’s ok.’”

To Jane Rosenzweig, director of the Harvard College Writing Center, whether or not to add a disclosure depends on the way a teacher is using AI.

“If an instructor was to use ChatGPT to generate writing feedback, I would absolutely expect them to tell students they are doing that,” she says. After all, the goal of any writing instruction, she notes, is to help “two human beings communicate with each other.” When she grades a student paper, Rosenzweig says she assumes the text was written by the student unless otherwise noted, and she imagines that her students expect any feedback they get to be from the human instructor, unless they are told otherwise.

When EdSurge asked readers of our higher ed newsletter whether teachers and professors should disclose when they’re using AI to create instructional materials, a few replied that they saw doing so as important — as a teachable moment for students, and for themselves.

“If we're using it simply to help with brainstorming, then it might not be necessary,” said Katie Datko, director of distance learning and instructional technology at Mt. San Antonio College. “But if we're using it as a co-creator of content, then we should apply the developing norms for citing AI-generated content.”

Seeking Policy Guidance

Since the release of ChatGPT, many schools and colleges have rushed to create policies on the appropriate use of AI.

But most of those policies don’t address the question of whether educators should tell students how they’re using new generative AI tools, says Pat Yongpradit, chief academic officer for Code.org and the leader of TeachAI, a consortium of several education groups working to develop and share guidance for educators about AI. (EdSurge is an independent newsroom that shares a parent organization with ISTE, which is involved in the consortium. Learn more about EdSurge ethics and policies here and supporters here.)

A toolkit for schools released by TeachAI recommends: “If a teacher or student uses an AI system, its use must be disclosed and explained.”

But Yongpradit says that his personal view is that “it depends” on what kind of AI use is involved. If AI is just helping to write an email, he explains, or even part of a lesson plan, that might not require disclosure. But there are other activities he says are more core to teaching where disclosure should be made, like when AI grading tools are used.

Even if an educator decides to cite an AI chatbot, though, the mechanics can be tricky, Yongpradit says. While there are major organizations including the Modern Language Association and the American Psychological Association that have issued guidelines on citing generative AI, he says the approaches remain clunky.

“That’s like pouring new wine into old wineskins,” he says, “because it takes a past paradigm for taking and citing source material and puts it toward a tool that doesn’t work the same way. Stuff before involved humans and was static. AI is just weird to fit it in that model because AI is a tool, not a source.”

For instance, the output of an AI chatbot depends greatly on how a prompt is worded. And most chatbots give a slightly different answer every time, even if the same exact prompt is used.
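
That variability is easy to demonstrate in code. Here is a minimal sketch, assuming the OpenAI Python client (the model name and prompt are illustrative), that sends the identical prompt three times. Because chatbots sample their responses rather than look them up, each run typically comes back worded differently.

```python
# Minimal sketch: one prompt, three runs, usually three different answers.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "In one sentence, explain photosynthesis."  # illustrative prompt
for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling temperature; lowering it reduces,
    )                     # but rarely eliminates, run-to-run variation
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```

A citation that records only the prompt, then, still cannot pin down what the tool actually returned, which is part of why existing citation approaches feel clunky.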

Yongpradit says he was recently attending a panel discussion where an educator urged teachers to disclose AI use since they are asking their students to do so, garnering cheers from students in attendance. But to Yongpradit, those situations are hardly equivalent.

“These are totally different things,” he says. “As a student, you’re submitting your thing as a grade to be evaluated. The teachers, they know how to do it. They’re just making their work more efficient.”

That said, “if the teacher is publishing it and putting it on Teachers Pay Teachers, then yes, they should disclose it,” he adds.

The important thing, he says, will be for states, districts and other educational institutions to develop policies of their own, so the rules of the road are clear.

“With a lack of guidance, you have a Wild West of expectations.”

© Leonardo Santtos / Shutterstock

An Education Chatbot Company Collapsed. Where Did the Student Data Go?

When Los Angeles Unified School District launched a districtwide AI chatbot nicknamed “Ed” in March, officials boasted that it represented a revolutionary new tool that was only possible thanks to generative AI — a personal assistant that could point each student to tailored resources and assignments and playfully nudge and encourage them to keep going.

But last month, just a few months after the fanfare of the public launch event, the district abruptly shut down its Ed chatbot, after the company it contracted to build the system, AllHere Education, suddenly furloughed most of its staff citing financial difficulties. The company had raised more than $12 million in venture capital, and its five-year contract with the LA district was worth about $6 million, about half of which the company had already been paid.

It’s not yet clear what happened: LAUSD officials declined interview requests from EdSurge, and officials from AllHere did not respond to requests for comment about the company’s future. A statement issued by the school district said “several educational technology companies are interested in acquiring” AllHere to continue its work, though nothing concrete has been announced.

A tech leader for the school district, which is the nation’s second-largest, told the Los Angeles Times that some information in the Ed system is still available to students and families, just not in chatbot form. But it was the chatbot that was touted as the key innovation — and the chatbot relied on human moderators at AllHere to monitor some of its output, moderators who are no longer actively working on the project.

Some edtech experts contacted by EdSurge say that the implosion of the cutting-edge AI tool offers lessons for other schools and colleges working to make use of generative AI. Most of those lessons, they say, center on a factor that is more difficult than many people realize: the challenges of corralling and safeguarding data.

An Ambitious Attempt to Link Systems

When leaders from AllHere gave EdSurge a demo of the Ed chatbot in March, back when the company seemed to be thriving and had recently been named to a Time magazine list of the “World’s Top Edtech Companies of 2024,” company leaders were most proud of how the chatbot cut across dozens of tech tools that the school system uses.

“The first job of Ed was, how do you create one unified learning space that brings together all the digital tools, and that eliminates the high number of clicks that otherwise the student would need to navigate through them all?” the company’s then-CEO, Joanna Smith-Griffin, said at the time. (The LAUSD statement said she is no longer with the company.)

Such data integration had not previously been a focus of the company, though. The company’s main expertise was making chatbots that were “designed to mimic real conversations, responding with empathy or humor depending on the student's needs in the moment on an individual level,” according to its website.

Michael Feldstein, a longtime edtech consultant, said that from the first time he heard about the Ed chatbot, he saw the project as too ambitious for a small startup to tackle.

“In order to do the kind of work that they were promising, they needed to gather information about students from many IT systems,” he said. “This is the well-known hard part of edtech.”

Feldstein guesses that to make a chatbot that could seamlessly take data from nearly every critical learning resource at a school, as announced at the splashy press conference in March, it could take 10 times the amount AllHere was being paid.

“There’s no evidence that they had experience as system integrators,” he said of AllHere. “It’s not clear that they had the expertise.”

In fact, a former engineer from AllHere reportedly sent emails to leaders in the school district warning that the company was not handling student data according to best practices of privacy protection, according to an article in The 74, the publication that first reported the implosion of AllHere. The engineer, Chris Whiteley, reportedly told state and district officials that the way the Ed chatbot handled student records put the data at risk of getting hacked. (The school district’s statement defends its privacy practices, saying: “Throughout the development of the Ed platform, Los Angeles Unified has closely reviewed the platform to ensure compliance with applicable privacy laws and regulations, as well as Los Angeles Unified’s own data security and privacy policies, and AllHere is contractually obligated to do the same.”)

LAUSD’s data systems have recently faced breaches that appear unrelated to the Ed chatbot project. Last month hackers claimed to be selling troves of millions of records from LAUSD on the dark web for $1,000. And hackers behind a breach of Snowflake, a data warehouse provider used by LAUSD, claimed to have snatched records of millions of students, including from the district. A more recent breach of Snowflake may have affected LAUSD or other tech companies it works with as well.

“LAUSD maintains an enormous amount of sensitive data. A breach of an integrated data system of LAUSD could affect a staggering number of individuals,” said Doug Levin, co-founder and national director of the K12 Security Information eXchange, in an email interview. He said he is waiting for the district to share more information about what happened. “I am mostly interested in understanding whether any of LAUSD’s edtech vendors were breached and — if so — if other customers of those vendors are at risk,” he said. “This would make it a national issue.”

Meanwhile, what happens to all the student data in the Ed chatbot?

According to the statement released by LAUSD: “Any student data belonging to the District and residing in the Ed platform will continue to be subject to the same privacy and data security protections, regardless of what happens to AllHere as a company.”

A copy of the contract between AllHere and LAUSD, obtained by EdSurge under a public records request, does indicate that all data from the project “will remain the exclusive property of LAUSD.” And the contract contains a provision stating that AllHere “shall delete a student’s covered information upon request of the district.”

Related document: Contract between LAUSD and AllHere Education.

Rob Nelson, executive director for academic technology and planning at the University of Pennsylvania, said the situation does create fresh risks, though.

“Are they taking appropriate technical steps to make sure that data is secure and there won’t be a breach or something intentional by an employee?” Nelson wondered.

Lessons Learned

James Wiley, a vice president at the education market research firm ListEdTech, said he would have advised AllHere to seek a partner with experience wrangling and managing data.

When he saw a copy of the contract between the school district and AllHere, he said his reaction was: “Why did you sign up for this?” He added that “some of the data you would need to do this chatbot isn’t even called out in the contract.”

Wiley said that school officials may not have understood how hard it was to do the kind of data integration they were asking for. “I think a lot of times schools and colleges don’t understand how complex their data structure is,” he added. “And you’re assuming a vendor is going to come in and say, ‘It’s here and here.’” But he said it is never that simple.

“Building the Holy Grail of a data-informed, personalized achievement tool is a big job,” he added. “It’s a noble cause, but you have to realize what you have to do to get there.”

For him, the biggest lesson for other schools and colleges is to take a hard look at their data systems before launching a big AI project.

“It’s a cautionary tale,” he concluded. “AI is not going to be a silver bullet here. You’re still going to have to get your house in order before you bring AI in.”

To Nelson, of the University of Pennsylvania, the larger lesson in this unfolding saga is that it’s too soon in the development of generative AI tools to scale up one idea to a whole school district or college campus.

Instead of one multimillion-dollar bet, he said, “let’s invest $10,000 in five projects that are teacher-based, and then listen to what the teachers have to say about it and learn what these tools are going to do well.”

© Thomas Bethge / Shutterstock

As More AI Tools Emerge in Education, so Does Concern Among Teachers About Being Replaced

When ChatGPT and other new generative AI tools emerged in late 2022, the major concern for educators was cheating. After all, students quickly spread the word on TikTok and other social media platforms that with a few simple prompts, a chatbot could write an essay or answer a homework assignment in ways that would be hard for teachers to detect.

But these days, when it comes to AI, another concern has come into the spotlight: that the technology could lead to less human interaction in schools and colleges — and that school administrators could one day try to use it to replace teachers.

And it’s not just educators who are worried; this is becoming an education policy issue.

Just last week, for instance, a bill sailed through both houses of the California state legislature that aims to make sure that courses at the state’s community colleges are taught by qualified humans, not AI bots.

Sabrina Cervantes, a Democratic member of the California State Assembly, who introduced the legislation, said in a statement that the goal of the bill is to “provide guardrails on the integration of AI in classrooms while ensuring that community college students are taught by human faculty.”

To be clear, no one appears to have actually proposed replacing professors at the state’s community colleges with ChatGPT or other generative AI tools. And even the bill’s leaders say they can imagine positive uses for AI in teaching, and the bill wouldn’t stop colleges from using generative AI to help with tasks like grading or creating educational materials.

But champions of the bill also say they have reason to worry about the possibility of AI replacing professors in the future. Earlier this year, for example, a dean at Boston University sparked concern among graduate workers who were on strike seeking higher wages when he listed AI as one possible strategy for handling course discussions and other classroom activities that were impacted by the strike. Officials at the university later clarified that they had no intention of replacing any graduate workers with AI software, though.

While California is the furthest along, it’s not the only state where such measures are being considered. In Minnesota, Rep. Dan Wolgamott, of the Democratic-Farmer-Labor Party, proposed a bill that would forbid campuses in the Minnesota State Colleges and Universities system from using AI “as the primary instructor for a credit-bearing course.” The measure has stalled for now.

Teachers in K-12 schools are also beginning to push for similar protections against AI replacing educators. The National Education Association, the country’s largest teachers union, recently put out a policy statement on the use of AI in education that stressed that human educators should “remain at the center of education.”

It’s a sign of the mixed but highly charged mood among many educators — who see both promise and potential threat in generative AI tech.

Careful Language

Even the education leaders pushing for measures to keep AI from displacing educators have gone out of their way to note that the technology could have beneficial applications in education. They're being cautious about the language they use to ensure they're not prohibiting the use of AI altogether.

The bill in California, for instance, faced initial pushback even from some supporters of the concept, out of worry about moving too soon to legislate the fast-changing technology of generative AI, says Wendy Brill-Wynkoop, president of the Faculty Association of California Community Colleges, whose group led the effort to draft the bill.

An early version of the bill explicitly stated that AI “may not be used to replace faculty for purposes of providing instruction to, and regular interaction with students in a course of instruction, and may only be used as a peripheral tool.”

Internal debate almost led leaders to spike the effort, she says. Then Brill-Wynkoop suggested a compromise: remove all explicit references to artificial intelligence from the bill’s language.

“We don’t even need the words AI in the bill, we just need to make sure humans are at the center,” she says. So the final language of the very brief proposed legislation reads: “This bill would explicitly require the instructor of record for a course of instruction to be a person who meets the above-described minimum qualifications to serve as a faculty member teaching credit instruction.”

“Our intent was not to put a giant brick wall in front of AI,” Brill-Wynkoop says. “That’s nuts. It’s a fast-moving train. We’re not against tech, but the question is ‘How do we use it thoughtfully?’”

And she admits that she doesn’t think there’s some “evil mastermind in Sacramento saying, ‘I want to get rid of these nasty faculty members.’” But, she adds, in California “education has been grossly underfunded for years, and with limited budgets, there are several tech companies right there that say, ‘How can we help you with your limited budgets by spurring efficiency.’”

Ethan Mollick, a University of Pennsylvania professor who has become a prominent voice on AI in education, wrote in his newsletter last month that he worries that many businesses and organizations are too focused on efficiency and downsizing as they rush to adopt AI technologies. Instead, he argues that leaders should be focused on finding ways to rethink how they do things to take advantage of tasks AI can do well.

He noted in his newsletter that even the companies building these new large language models haven’t yet figured out what real-world tasks they are best suited to do.

“I worry that the lesson of the Industrial Revolution is being lost in AI implementations at companies,” he wrote. “Any efficiency gains must be turned into cost savings, even before anyone in the organization figures out what AI is good for. It is as if, after getting access to the steam engine in the 1700s, every manufacturer decided to keep production and quality the same, and just fire staff in response to new-found efficiency, rather than building world-spanning companies by expanding their outputs.”

The professor wrote that his university’s new Generative AI Lab is trying to model the approach he’d like to see, where researchers work to explore evidence-based uses of AI and work to avoid what he called “downside risks,” meaning the concern that organizations might make ineffective use of AI while pushing out expert employees in the name of cutting costs. And he says the lab is committed to sharing what it learns.

Keeping Humans at the Center

The AI Education Project, a nonprofit focused on AI literacy, surveyed more than 1,000 U.S. educators in 2023 about how they feel AI is influencing the world, and education more specifically. Participants were asked to pick from a list of top concerns about AI, and the one that bubbled to the top was that AI could lead to “a lack of human interaction.”

That could be in response to recent announcements by major AI developers — including ChatGPT creator OpenAI — about new versions of their tools that can respond to voice commands and see and respond to what students are inputting on their screens. Sal Khan, founder of Khan Academy, recently posted a video demo of him using a prototype of his organization’s chatbot Khanmigo, which has these features, to tutor his teenage son. The technology shown in the demo is not yet available, and is at least six months to a year away, according to Khan. Even so, the video went viral and sparked debate about whether any machine can fill in for a human in something as deeply personal as one-on-one tutoring.

In the meantime, many new features and products released in recent weeks focus on helping educators with administrative tasks or responsibilities like creating lesson plans and other classroom materials. And those are the kinds of behind-the-scenes uses of AI that students may never even know are happening.

That was clear in the exhibit hall of last week’s ISTE Live conference in Denver, which drew more than 15,000 educators and edtech leaders. (EdSurge is an independent newsroom that shares a parent organization with ISTE. Learn more about EdSurge ethics and policies here and supporters here.)

Tiny startups, tech giants and everything in between touted new features that use generative AI to support educators with a range of responsibilities, and some companies had tools to serve as a virtual classroom assistant.

Many teachers at the event weren’t actively worried about being replaced by bots.

“It’s not even on my radar, because what I bring to the classroom is something that AI cannot replicate,” said Lauren Reynolds, a third grade teacher at Riverwood Elementary School in Oklahoma City. “I have that human connection. I’m getting to know my kids on an individual basis. I’m reading more than just what they’re telling me.”

Christina Matasavage, a STEM teacher at Belton Preparatory Academy in South Carolina, said she thinks the COVID shutdowns and emergency pivots to distance learning proved that gadgets can’t step in and replace human instructors. “I think we figured out that teachers are very much needed when COVID happened and we went virtual. People figured out very [quickly] that we cannot be replaced” with tech.

© Bas Nastassia / Shutterstock

What If Banning Smartphones in Schools Is Just the Beginning?

The movement to keep smartphones out of schools is gaining momentum.

Just last week, the nation’s second-largest public school system, Los Angeles Unified School District, voted to ban smartphones starting in January, citing adverse health risks of social media for kids. And the U.S. Surgeon General, Vivek Murthy, published an op-ed in The New York Times calling for warning labels on social media systems, saying “the mental health crisis among young people is an emergency.”

But some longtime teachers say that while such moves are a step in the right direction, educators need to take a more active role in countering some negative effects of excessive social media use by students. Essentially, they should redesign assignments and how they instruct to help teach mental focus, modeling how to read, write and research away from the constant interruptions of social media and app notifications.

That’s the view of Lee Underwood, a 12th grade AP English literature and composition teacher at Millikan High School in Long Beach, California, who was the teacher of the year for his public school system in 2022.

He’s been teaching since 2006, so he remembers a time before the invention of the iPhone, Instagram or TikTok. And he says he is concerned by the change in behavior among his students, which has intensified in recent years.

“There is a lethargy that didn't exist before,” he says. “The responses of students were quicker, sharper. There was more of a willingness to engage in our conversations, and we had dynamic conversations.”

He tried to keep up his teaching style, which he feels had been working, but responses from students were different. “The last three years, four years since COVID, my jokes that I tell in my classroom have not been landing,” he says. “And they're the same jokes.”

Underwood has been avidly reading popular books and articles about the impact of smartphones on today’s young people. For instance, he read the much-talked-about book by Jonathan Haidt, “The Anxious Generation,” that has helped spark many recent efforts by schools to do more to counter the consequences of smartphones and social media.

Some have countered Haidt’s arguments, however, by pointing out that while young people face growing mental health challenges, there is little scientific evidence that social media is causing those issues. And just last month on this podcast, Ellen Galinsky, author of a book on what brain science reveals about how best to teach teens, argued that banning social media might backfire, and that kids need to learn how to regulate smartphone use on their own to prepare them for the world beyond school.

“Evidence shows very, very clearly that the ‘just say no’ approach in adolescence — where there's a need for autonomy — does not work,” she said. “In the studies on smoking, it increased smoking.”

Yet Underwood argues that he has felt the impact of social media on his concentration and focus firsthand. And these days he’s changing what he does in the classroom to bring in techniques and strategies that helped him counter the negative effects of smartphones in his own life.

And he has a strong reaction to Galinsky’s argument.

“We don't let kids smoke in school,” he points out. “Maybe some parts of the ‘just say no campaigns’ broadly didn't work, but then no one's allowing smoking in schools.”

His hope is that the school day can be reserved as a time when students know they can get away from the downsides of smartphone and social media use.

“That's six hours of a school day where you can show a student, bring them to a kind of homeostasis, where they can see what it would be like without having that constant distraction,” he argues.

Hear the full conversation, as well as examples of how he’s redesigned his lessons, on this week’s episode. Listen on Apple Podcasts, Overcast, Spotify, or wherever you listen to podcasts, or use the player on this page.

© autumnn / Shutterstock

Should College Become Part of High School?

Last year, when Jayla Arensberg was a sophomore at Burnsville High School near St. Paul, Minnesota, a teacher showed her a flier saying that a program at the school could save her $25,000 on college.

“I said, ‘I really need that,’” the student remembers.

She was interested in college, but worried that the cost could keep her from pursuing higher education. “College is insanely expensive,” she says.

So she applied and got accepted to the high school’s “Associate of Arts Degree Pathway,” which essentially turns junior and senior year of high school into a two-year college curriculum. All this year, Arensberg walked the halls of the same high school building and ate in the same cafeteria as before, but now most of her classes earned her college credit, and if she stays on track, she’ll get an associate degree at the same time she receives her high school diploma.

Her plan after graduation is to apply to the University of Minnesota’s main campus to major in psychology, entering halfway to her bachelor’s degree and thereby cutting out two years of paying for college.

The high school is one of a growing number around the country offering a so-called “postsecondary enrollment option,” where students can take college courses during the high school day and get college credit. In fact, the number of high school students taking at least one college course has risen to 34 percent, up from just 10 percent in 2010, according to data from the National Alliance of Concurrent Enrollment Partnerships.

But Burnsville’s program is unusual in offering a full two-year program within its building, rather than just isolated courses or transportation to nearby colleges for part of a day.

“They really are cohorted like they would probably feel in a freshman dorm,” says Rebecca Akerson, who coordinates the Associate of Arts pathway program at the school, speaking of the students in the program, who take most of their courses together. “They’ve gotten to know each other well. When you think about college, that’s what you’re thinking about.”

It’s a stark example of how the line between high school and college is blurring for more students. While such programs may help students access college who may not have been able to before, they also raise questions about the purpose of high school, about what social opportunities might be lost, and about whether the trend pushes students to make decisions about their future careers at too young of an age.

But college is not the only option that students can get a jump on exploring at this high school. The associate degree program is part of one of four career pathways that students can choose, pointing to careers in specialties like culinary arts, manufacturing and automotive technology.

In fact, officials have gone out of their way to highlight the variety of options, to try to attract greater diversity of students to whatever they might be interested in. For instance, the school’s “fabrication lab” — which once might have been called wood shop — is located adjacent to a high-traffic commons area, and glass walls allow anyone walking by to see what the students are doing.

“This was designed very specifically because engineering and fabrication have traditionally been a very white, male-dominated career field,” says Kathy Funston, director of strategic partnerships and pathways for the Burnsville school district. “We really did want our students of color and our females to be able to look through these glass walls and say, ‘That’s cool. I like that. Nobody’s getting dirty in there. I think I want to try that,’” Funston adds. “So it’s a way to help underrepresented populations see career areas and career fields that they would not have been exposed to either in their sphere of influence at home or at other classes. If you go to a lot of other schools these types of classes have been in a remote part of the school.”

Teachers at the school say that they work to communicate these career pathway programs early and often. That means the pathway options are a big part of the tour when middle school students look at the school, and posters featuring the four main career pathways, each with its signature color, adorn hallways throughout the building.

How is the program going? And how do students feel about these options at a time of growing skepticism about higher education?

This is the fifth episode of a podcast series we’re calling Doubting College, where we’re exploring: What happened to the public belief in college? And how is that shaping the choices young people are making about what to do after high school?

Listen to the episode on Apple Podcasts, Overcast, Spotify, YouTube or wherever you listen to podcasts, or use the player on this page.

© Photo by Jeffrey R. Young for EdSurge

Latest AI Announcements Mean Another Big Adjustment for Educators

Tech giants Google, Microsoft and OpenAI have unintentionally assigned educators around the world major homework for the summer: adjusting their assignments and teaching methods to adapt to a fresh batch of AI features that students will enter classrooms with in the fall.

Educators at both schools and colleges were already struggling to keep up with ChatGPT and other AI tools during this academic year, but a fresh round of announcements last month by major AI companies may require even greater adjustments by educators to preserve academic integrity and to accurately assess student learning, teaching experts say.

Meanwhile, educators also have scores of new edtech products to review that promise to save them time on lesson planning and administrative tasks thanks to AI.

One of the most significant changes was OpenAI’s announcement that it would make its latest generation of chatbot, which it dubbed GPT-4o, free to anyone. Previously, only an older version of the tool, GPT-3.5, was free, and people had to pay at least $20 a month to get access to the state-of-the-art model. The new model can also accept not just text, but spoken voice inputs and visual inputs, so that users can do things like share a still photo or image of their screen with the chatbot to get feedback.
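
For a sense of what that multimodal input looks like in practice, here is a hedged sketch using the OpenAI Python client: a local screenshot is base64-encoded and sent alongside a text question in a single chat request. The file name, prompt and model choice are illustrative.

```python
# Hedged sketch: asking a multimodal chat model for feedback on a screenshot.
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Encode a local image as a data URL (the file name is illustrative).
with open("essay_draft.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What feedback would you give on this draft?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```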

“It’s a game-changing shift,” says Marc Watkins, a lecturer of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Teachers of Writing. He says that when educators experimented with the previous free version of ChatGPT, many came away unimpressed, but the new version will be a “huge wake-up call” for how powerful the technology is, he adds.

And now that students and professors can talk to these next-generation chatbots instead of just type, there’s fresh concern that the so-called “homework apocalypse” unleashed by earlier versions of ChatGPT will get worse, as professors may find it even harder to design assignments that students can’t just have these AI bots complete for them.

“I think that’s going to really challenge what it means to be an educator this fall,” Watkins adds, noting that the changes mean that professors and teachers may not only need to change the kind of assignments they give, but they may need to rethink how they deliver material as well now that students can use AI tools to do things like summarize lecture videos for them.

And education appears to be an area identified by tech companies as a “killer application” of AI chatbots, a use case that helps drive adoption of the technology. Several demos last month by OpenAI, Google, and other companies homed in on educational uses of their latest chatbots. And just last week OpenAI unveiled a new partnership program aimed at colleges called ChatGPT Edu.

“Both Google and OpenAI are gunning for education,” says José Bowen, a longtime higher ed leader and consultant who co-wrote a new book called “Teaching with AI.” “They see this both as a great use case and also as a tremendous market.”

Changing Classes

Tech giants aren’t the only ones changing the equation for educators.

Many smaller companies have put out tools in recent months targeted at educational uses, and they are marketing them heavily to students and teachers on TikTok, Instagram and other social media platforms.

A company called Turbolearn, for instance, has pushed out a video on TikTok titled “Why I stopped taking notes during class,” which has been viewed more than 100,000 times. In it, a young woman says that she discovered a “trick” when she was a student at Harvard University. She describes opening up the company’s tool on her laptop during class and clicking a record button. “The software will automatically use your recording to make notes, flashcards and quiz questions,” she says in the promotional video.

New AI tools can make audio recordings of lectures and automatically create summaries and flashcards of the material. Some educators worry that it will keep students from paying attention.
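Turbolearn has not published how its tool works, but the generic pipeline behind this class of apps is easy to sketch: transcribe the recording, then ask a language model to condense the transcript. Below is a rough, hypothetical version using OpenAI’s speech-to-text and chat APIs; the file name and prompt are illustrative, not the company’s actual code.

```python
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the lecture recording (illustrative file name).
with open("lecture.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    ).text

# Step 2: condense the transcript into study materials.
summary = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Turn this lecture transcript into bullet-point notes "
                   "and five question-and-answer flashcards:\n\n" + transcript,
    }],
)
print(summary.choices[0].message.content)
```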

While the company markets this as a way to free students so they can focus on listening in class, Watkins worries that skipping notetaking will mean students will tune out and not do the work of processing what they hear in a lecture.

Now that such tools are out there, Watkins suggests that professors look for more ways to do active learning in their classes and put more of what he calls “intentional friction” into student learning, so that students are forced to stop and participate or to reflect on what is being said.

“Try pausing your lecture and start having debates with your students — get into small group discussions,” he says. “Encourage students to do annotations — to read with pen or pencil or highlighter. We want to slow things down and make sure they’re pausing for a little while,” even as the advertisements for AI tools promise a way to make learning speedier and more efficient.

Slowing down is the advice that Bonni Stachowiak has for educators as well. Stachowiak, who is dean of teaching and learning at Vanguard University, points to recent advice by teaching guru James Lang to “slow walk” the use of AI in classrooms, by keeping in mind fundamental principles of teaching as educators experiment with new AI tools.

“I don’t mean resisting — I don’t think we should stick our head in the sand,” says Stachowiak. “But it’s OK to be slowly reflecting and slowly experimenting” with these new tools in classrooms, she adds. That’s especially true because keeping up with all the new AI announcements is not realistic considering all the other demands of teaching jobs.

The tools are coming fast, though.

“The maddening thing about all of this is that these tools are being deployed publicly in a grand experiment nobody asked for,” says Watkins, of the University of Mississippi. “And I know how hard it is for faculty to carve out time for anything outside of their workload.”

For that reason, he says college and school leaders need to be driving efforts to make more systematic changes in teaching and assessment. “We’re going to have to really dig in and start thinking about how we approach teaching and how students approach learning. It’s something that the entire university is going to have to think about.”

The new tools will likely mean new financial investments for schools and colleges as well.

“At some point AI is going to become the next big expense,” Bowen, the education consultant, told EdSurge.

Even though many tools are free at the moment, Bowen predicts they will eventually carry real costs for colleges, at a time when budgets are already tight.

Saving Time?

Plenty of the newest AI tools for education are aimed at educators, promising to save them time.

Several new products, for instance, allow teachers to use AI to quickly recraft worksheets, test questions and other teaching materials to change the reading level, so that a teacher could take an article from a newspaper and have it revised at a level younger students can better understand.

“They will literally rewrite your words to that audience or that purpose,” says Watkins.
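Vendors do not publish their prompts, but the core of a reading-level feature can be approximated in a few lines. The sketch below shows the general pattern, assuming the products wrap a similar instruction around an LLM call; it is not any vendor’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()

def rewrite_for_grade(text: str, grade: int) -> str:
    """Ask a model to rewrite text at roughly a given U.S. grade level."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": (f"Rewrite the user's text at a grade-{grade} reading "
                         "level. Keep every fact and idea; do not add or drop "
                         "content.")},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# For example, simplify a news article for a fourth-grade class:
# simplified = rewrite_for_grade(article_text, grade=4)
```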

Such features are in several commercial products, as well as in free AI tools — just last month, the nonprofit Khan Academy announced that it would make its AI tools for teachers free to all educators.

“There’s good and bad with these things,” Watkins adds. On a positive note, such tools could greatly assist students with learning disabilities. “But the problem is when we tested this,” he adds, “it helped those students, but it got to the point where other students said, ‘I don’t have to read anything ever again,’ because the tool could also summarize and turn any text into a series of bullet points.”

Another popular feature of new AI services is personalizing assignments by adapting educational materials to a student’s interests, says Dan Meyer, vice president of user growth at Amplify, a curriculum and assessment company, who writes a newsletter about teaching mathematics.

Meyer worries that such tools are being overhyped, and that they may have limited effectiveness in classrooms.

“You just can't take the same dull word problems that students are doing every day and change them all to be about baseball,” he says. “Kids will wind up hating baseball, not loving math.”

He summed up his view in a recent post he titled “Generative AI is Best at Something Teachers Need Least.”

Meyer worries that many new products start with what generative AI can do and try to push out products based on that, rather than starting with what educators need and designing tools to address those challenges.

At the college level, Bowen sees potential wins for faculty in the near future, if, say, tools like learning management systems add AI features that can do tasks like build a course website after the instructor feeds it a syllabus. “That’s going to be a real time saver for faculty,” he predicts.

But teaching experts note that the biggest challenges will be finding ways to keep students learning while also preparing them for a workplace that seems to be rapidly adopting AI tools.

Bowen hopes that colleges can find a way to focus on teaching students the skills that make us most human, as AI takes over routine tasks in many white-collar industries.

“Maybe,” he says, “this time we’ll realize that the liberal arts really do matter.”

© Zapp2Photo / Shutterstock

Latest AI Announcements Mean Another Big Adjustment for Educators

Should Chatbots Tutor? Dissecting That Viral AI Demo With Sal Khan and His Son

Should AI chatbots be used as tutors?

That question has been in the air since ChatGPT was released in late 2022, and since then many developers have experimented with using the latest generative AI technology as a tutor. But not everyone thinks this is a good idea, since the tech is prone to “hallucinations,” where chatbots make up facts, and there’s the bigger issue of whether any machine can fill in for a human in something as deeply personal as one-on-one tutoring.

A video demo of the latest version of ChatGPT tutoring a student that went viral on YouTube has brought fresh attention to this question. In it, Sal Khan, founder of Khan Academy, which has been building a tutoring tool with ChatGPT, sits watching his 15-year-old son Imran learn a math concept from a talking version of the chatbot running on an iPad, which can also see what the student is writing on the tablet. As Sal Khan looks on, nodding, the chatbot asks his son a question about triangles in a friendly female voice, and Imran answers while using a stylus to tap on the iPad screen and indicate which side of the triangle he means. It’s an interaction that might have seemed like science fiction a couple of years ago. (And that level of functionality isn’t yet available to users.)

Khan has become one of the most well-known boosters of using generative AI for tutoring, and he has a new book that makes an enthusiastic case for it. The book is called “Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing).”

But his book, and that demo, are also attracting some pushback from teaching experts who think AI may have lots of uses in education, but that tutoring should be reserved for humans who can motivate and understand the students they work with.

For this week’s EdSurge Podcast, we talked with Khan to hear more about his vision of AI tutors and the arguments from his recent book. And we also heard from Dan Meyer, vice president of user growth at Amplify, a curriculum and assessment company, who writes a newsletter about teaching mathematics where he has raised objections to the idea of using AI chatbots as tutors.

“The kind of math that we saw on there,” Meyer said, referring to the demo, “was an operational problem well summarized in a single diagram that results in a single number. And those have always been the kind of problems that computers have supported students fairly nimbly in solving.” The bigger question, he argues, will be how such chatbots will handle more conceptual problems. And he asks how well such bots will work “for the average student who's dealing with distraction and feeling socially isolated and not interested in talking to Scarlett-Johansson-esque voices as a tutor bot?”

Hear the full conversation on this week’s episode. Listen on Apple Podcasts, Overcast, Spotify, or wherever you listen to podcasts, or use the player on this page. Or read a partial transcript below, lightly edited for clarity.

EdSurge: When we last talked with you for the podcast, Khan Academy had just released your group’s chatbot tutor, Khanmigo. At the time you were rolling it out slowly because there were many questions about using AI chatbots in education. What was your biggest worry then, and how did the testing go?

Sal Khan: When we launched back in March of 2023, I think the biggest worry was how we would be received by the education community. This was only three or four months after ChatGPT had been released. And obviously the reception to ChatGPT was not a positive one, for good reason. It could be used to cheat. It had no guardrails on it. It was making math errors. It was hallucinating. And so here we are, an education nonprofit that hopefully a lot of folks trust to have high-quality work. And then people might say, ‘Hey, wow, Khan Academy is going with both feet into this AI thing.’

The good news is that the reception was actually more positive than we expected. So four or five months after the release of ChatGPT, most school systems, most educators were saying, ‘You know what, ChatGPT still is a little bit shady for education purposes, but the underlying technology of it is really potentially powerful for helping kids learn the things that we've always tried to teach them, and this type of technology is going to be part of their future. So we should think about how we can expose kids to it, but in a way that it doesn't cheat, in a way that there's guardrails, in the way that we can make sure that everything's on the up-and-up on data security and privacy and that it's pedagogically designed.’

And so when we were able to come out with Khanmigo at around that time, the reception has been very positive.

I'll say it's also been a bit of a transition internally at Khan Academy because it is a new muscle that we've been building. … We've always worked on software that personalizes things, videos — I still make videos — and exercises, teacher tools, in a more traditional sense, and now we're moving toward this artificial intelligence world. That is exciting, but it also has a lot of things to keep in consideration. I think it's also been a bit of a transition for our team to feel good and confident and comfortable with where we're going.

It sounds like Khan Academy will continue to make videos?

The rate of change of artificial intelligence is so fast that it feels like it's irresponsible if we don't have these conversations like, ‘How long will Khan Academy videos be relevant?’ A lot of folks probably saw the recent OpenAI demo of me and my son. Will a student find value in a Khan Academy video in that world, or as much value?

A lot of our resources historically have been creating these really high-quality exercises. We've created over 100,000 exercise items on Khan Academy, and that takes a lot of resources. Today, the AI is not good enough to create exercises that are high quality, aligned to standards and are error-free.

So AI is not replacing your job of making educational videos?

My vanity wants to say no, but I don't know. I don't know.

I do want to be clear. I think the safest job in all of this is that of the teacher. I make that very clear in my book, and I'm not just saying it because people want to hear it, but it's that human element of it all being in the room helping guide students, keeping them on task, and you need to be physically there to really, truly keep them on task, to forge those human connections. …

But I think a lot of the other pieces that edtech has traditionally worked on or even other parts of the education system, maybe some of the more administrative tasks, I think it is important for everyone to be wondering how AI might change that.

You note in your book that back when you were an undergraduate at MIT, you originally wanted to be an AI researcher. Why were you drawn to that area?

I've always been fascinated by, ‘What could we learn potentially from technology?’ And I've always read a lot of science fiction books about how technology could start pushing the frontiers of, and even help us understand, what intelligence is and what consciousness is. But I've also been fascinated by the potential of human intelligence. And I've always been fascinated by the intersection of the two.

And yes, when I was a freshman at MIT, I sought out Patrick Henry Winston, who was head of the artificial intelligence laboratory, and he ended up being my freshman adviser. I got in line to take a course with Marvin Minsky and got in. And so if you asked me in 1994 or 1995 what I wanted to do, I would say, ‘Yeah, I might want to be an AI researcher.’

Back then it sounds like you were discouraged by the level of technology at the time, but clearly we’re in a new phase of AI development. Do you think AI is now ready to serve as a viable tutor?

I think it can already do parts of it. I don't think it's able to do the full job, but I think that the technology is improving so fast that you definitely will never say never. And in fact, a lot of things that seem like science fiction are going to be reality in about two years.

[At Khan Academy] we've always been trying to use technology to approximate what a great tutor would do in terms of personalized learning and then also leverage technology to scale that to as many people as possible. And we've never viewed this as somehow a substitute for a teacher. In fact, we said, ‘Hey, this could be really valuable in a teaching setting.’ In fact, it's most valuable in a teaching setting because a teacher's in a class of 30, these kids are at all different levels. Every teacher knows that. How do you address their individual needs? Well, if you had support from a teaching assistant who's also their tutor, that's kind of what Khan Academy has always aspired to be.

I talked with a technologist who worked at IBM and had worked on IBM's Watson many years ago and was asked to use it to build an AI tutor. But after years of work he concluded that it can’t be done, and that it’s not the best way to use AI in education. What would you say to that argument?

Actually when you talk to a lot of the AI researchers, and we've probably helped skew this conversation, the thing that they're most excited about for the next generation models is the tutoring use case because people understand it's a socially positive use case. Obviously there's a bunch of negative use cases of AI — deepfakes, fraud, etc.

I think you've had many people work on this problem for decades using more basic forms of artificial intelligence. I encourage that researcher to watch that video of the GPT-4o tutoring demo with myself and my son.

Dan Meyer recently wrote that while these AI tutors might work for a small percentage of students, most need the kind of human relationship that just can’t be replicated with AI right now. Will a broad range of students want the kind of Khanmigo tutor you show in your demo?

I mean, I think most kids would rather chat or talk to their friends than go to school altogether, than sit through a lecture, than do their homework, etc. And this is why one of the many important things that a teacher does is make sure that students are focused and engaged on the thing that matters most.

There's a broad group of students for whom, in the moment when they need to understand a concept, this can be very useful. I agree that it's a subset of students, let's call it 10 or 15 percent, who have maintained their curiosity and might automatically keep going to the AI. And for those students, this is a field day, this is a playground, this is awesome for them. I think there's a broader set of students who are disengaged from what they're doing, and you need to figure out ways to engage them more. And this is one of the many reasons why we view involving teachers in this journey as so important: letting them know what's going on with the AI. We're working on them being able to assign AI-based activities.

There’s a passage in your book where you describe Khanmigo having a session with a student and then reporting back to their teacher, and you write it might say something like, “We worked on the paper for about four hours. Sal initially had trouble coming up with a thesis, but I was able to help him by asking some leading questions. The outlining went pretty smoothly. I just had to help him ensure that the conclusion really brought everything together. … based on the rubric for the assignment, I'd recommend Sal get a B plus on the assignment. Here is a detailed breakdown of how I rated this paper in the dimensions of the rubric.” In some ways, this doesn’t leave much for the teacher to do. What would you say to teachers who worry AI could replace them?

I think every K-12 teacher will look at tenured professors at the local university with envy because those professors have a lot of support. They have these grad students who essentially do exactly what the AI was doing in that example. So if you told every teacher in America, ‘Hey, we just found some money and we're going to use it to hire some amazing teaching assistants that can help you write lesson plans, create rubrics, tutor your students, report back to you what's going on and do preliminary grading. You're still the teacher, you're in charge, but it'll save you 10, 15 hours of your week. Do you want that?’ And I think the great majority of teachers will say, ‘Hallelujah. Yes, I definitely want that.’ I'm serious that I don't think it in any way undermines the teacher. I think it elevates the teacher.

Back to that recent demo of the next-generation AI tutor. I’ve heard that your son already knew the material being asked and was sort of role-playing there.

Yeah, OpenAI said, ‘Hey, can you bring with you a student who can sign a media release and who doesn't work for one of our competitors?’ And I was like, I guess I'm going to bring my son. But yeah, my son, to his credit, he's more low-ego than I am. I mean, he took calculus in seventh grade. He knows what a hypotenuse is. But it made a better demo for him to pretend that he did not know what a hypotenuse is, because it corrected him, etc.

But yeah, it is powerful to see it in action with a student where it can see what they're drawing and what they're saying, and it's interacting verbally in a very natural way.

How long until the technology in that demo is actually fully functional in your tutoring chatbot?

I think we're a year or a year and a half away from that. But even then to the earlier part of our conversation, even when it's that awesome, I don't know if every student in the world is just going to run to it.

We have a nonprofit called Schoolhouse.world, which gives free live tutoring over Zoom. But still, not every student who finds out about it runs to it. So the AIs are going to get better. There's going to be other things like Schoolhouse World. But we're still going to need engaged parents and teachers that can help motivate and drive kids to get the help that they need.

Should Chatbots Tutor? Dissecting That Viral AI Demo With Sal Khan and His Son

Professors Try ‘Restrained AI’ Approach to Help Teach Writing

When ChatGPT emerged a year and a half ago, many professors immediately worried that their students would use it as a substitute for doing their own written assignments — that they’d click a button on a chatbot instead of doing the thinking involved in responding to an essay prompt themselves.

This story also appeared in Fast Company.

But two English professors at Carnegie Mellon University had a different first reaction: They saw in this new technology a way to show students how to improve their writing skills.

To be clear, these professors — Suguru Ishizaki and David Kaufer — did also worry that generative AI tools could easily be abused by students. And it’s still a concern.

They had an idea, though: with a unique set of guardrails, the technology could become a new kind of teaching tool, one that helps students get more of their ideas into their assignments and spend less time laboring over how to form sentences.

“When everyone else was afraid that AI was going to hijack writing from students,” remembers Kaufer, “we said, ‘Well, if we can restrain AI, then AI can reduce many of the remedial tasks of writing that keep students from really [looking] to see what’s going on with their writing.’”

The professors call their approach “restrained generative AI,” and they’ve already built a prototype software tool to try it in classrooms — called myScribe — that is being piloted in 10 courses at the university this semester.

Kaufer and Ishizaki were uniquely positioned. They have been building tools together to help teach writing for decades. A previous system they built, DocuScope, uses algorithms to spot patterns in student writing and visually show those patterns to students.

A key feature of their new tool is called “Notes to Prose,” which can take loose bullet points or stray thoughts typed by a student and turn them into sentences or draft paragraphs, thanks to an interface to ChatGPT.

“A bottleneck of writing is sentence generation — getting ideas into sentences,” Ishizaki says. “That is a big task. That part is really costly in terms of cognitive load.”

"A bottleneck of writing is sentence generation — getting ideas into sentences,” Ishizaki says. “That is a big task. That part is really costly in terms of cognitive load.”
— Suguru Ishizaki

In other words, especially for beginning writers, it’s difficult to both think of new ideas and keep in mind all the rules of crafting a sentence at the same time, just as it’s difficult for a beginning driver to keep track of both the road surroundings and the mechanics of driving.

“We thought, ‘Can we really lighten that load with generative AI?’” he says.

Kaufer adds that novice writers often shift too early in the writing process from putting down fragments of ideas to carefully crafting sentences, when they might just end up deleting those sentences later because the ideas may not fit into their final argument or essay.

“They start really polishing way too early,” Kaufer says. “And so what we’re trying to do is with AI, now you have a tool to rapidly prototype your language when you are prototyping the quality of your thinking.”

He says the concept is based on writing research from the 1980s that shows that experienced writers spend about 80 percent of their early writing time thinking about whole-text plans and organization and not about sentences.

Taming the Chatbot

Building their “notes to prose” feature took some doing, the professors say.

In their early experiments with ChatGPT, when they put in a few fragments and asked it to make sentences, “what we found is it starts to add a lot of new ideas into the text,” says Ishizaki. In other words, the tool tended to go even further in completing an essay by adding in other information from its vast stores of training data.

“So we just came up with a really lengthy set of prompts to make sure that there are no new ideas or new concepts,” Ishizaki adds.

The technique is different from other attempts to focus the use of AI for education in that the only source the myScribe bot draws from is the student’s notes, rather than a wider dataset.
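The professors’ actual prompts are lengthy and unpublished, but the shape of the restraint they describe can be sketched. Everything below is an approximation of the idea, not myScribe’s code; the prompt wording and parameters are assumptions.

```python
from openai import OpenAI

client = OpenAI()

# A rough stand-in for the "restrained" instructions the professors describe;
# their real prompt set is much longer and more detailed.
SYSTEM_PROMPT = (
    "Turn the student's bullet-point notes into one draft paragraph. "
    "Use ONLY ideas that appear in the notes. Do not introduce new facts, "
    "examples or claims. If the notes cannot support a full sentence, "
    "leave a bracketed gap such as [expand here] instead of inventing content."
)

def notes_to_prose(notes: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": notes},
        ],
        temperature=0.2,  # a low temperature discourages embellishment
    )
    return response.choices[0].message.content
```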

Stacie Rohrbach, an associate professor and director of graduate studies in the School of Design at Carnegie Mellon, sees potential in tools like those her colleagues created.

“We’ve long encouraged students to always do a robust outline and say, ‘What are you trying to say in each sentence?’” she says, and she hopes that “restrained AI” approaches could help that effort.

And she says she already sees student writers misuse ChatGPT and therefore believes some restraint is needed.

“This is the first year that I saw lots of AI-generated text,” she says. “And the ideas get lost. The sentences are framed correctly, but it ends up being gibberish.”

John Warner, an author and education consultant who is writing a book about AI and writing, says he wondered whether the myScribe tool would be able to fully prevent “hallucinations” by the AI chatbot, or instances where tools insert erroneous information.

“The folks that I talk to think that that’s probably not possible,” he says. “Hallucination is a feature of how large language models work. The large language model is absent judgment. You may not be able to get away from it making something up. Because what does it know?”

"A lot of these tools want to make a process efficient that has no need to be efficient.”
— John Warner

Kaufer says that their tests so far have been working. In an email follow-up interview he wrote: “It's important to note that ‘notes to prose’ operates within the confines of a paragraph unit. This means that if it were to exceed the boundaries of the notes (or 'hallucinate', as you put it), it would be readily apparent and easy to identify. The worry about AI hallucinating would expand if we were talking about larger discourse units.”

Ishizaki, though, acknowledged that it may not be possible to completely eliminate AI hallucinations in their tool. “But we are hoping that we can restrain or guide AI enough to minimize ‘hallucinations’ or inaccurate or unintended information so that writers can correct them during the review/revision process.”

He described their tool as a “vision” for how they hope the technology will develop, not just a one-off system. “We are setting the goal toward where writing technology should progress,” he says. “In other words, the concept of notes to prose is integral to our vision of the future of writing.”

Even as a vision, though, Warner says he has different dreams for the future of writing.

One tech writer, he says, recently noted that ChatGPT is like having 1,000 interns.

“On one hand, ‘Awesome,’” Warner says. “On the other hand, 1,000 interns are going to make a lot of mistakes. Interns early on cost you more time than they save, but the goal is that over time that person needs less and less supervision; they learn.” But with AI, he says, “the oversight doesn’t necessarily improve the underlying product.”

In that way, he argues, AI chatbots end up being “a very powerful tool that requires enormous human oversight.”

And he argues that turning notes into text is in fact the important human process of writing that should be preserved.

“A lot of these tools want to make a process efficient that has no need to be efficient,” he says. “A huge thing happens when I go from my notes to a draft. It’s not just a translation — that these are my ideas and I want them on a page. It’s more like — these are my ideas, and my ideas take shape while I’m writing.”

Kaufer is sympathetic to that argument. “The point is, AI is here to stay and it’s not going to disappear,” he says. “There’s going to be a battle over how it’s going to be used. We’re fighting for responsible uses.”

© Phonlamai Photo / Shutterstock

Professors Try ‘Restrained AI’ Approach to Help Teach Writing

What Brain Science Says About How to Better Teach Teenagers

Ellen Galinsky has been on a seven-year quest to understand what brain science says about how to better teach and parent adolescent children. The past few years have seen advancements in our understanding of this time — where the brain is going through almost as much change as during the earliest years of a child’s life.

In the past, Galinsky says, researchers and educators have focused too much on portraying the emotional turmoil and risky decision-making that is typical in adolescence as negative. “The biggest breakthrough,” she argues, “is that we now understand that what we saw as problematic, what we saw as deviant, what we saw as immature, was in fact a developmental necessity.”

For her research, Galinsky, who is co-founder of the nonprofit and nonpartisan Families and Work Institute, also surveyed nearly 2,000 parents and students, and found that a large percentage of parents looked at the teenage years as a negative time that would be fraught, while students felt they were unfairly stereotyped and misunderstood. She’s gathered her results in a new book, “The Breakthrough Years: A New Scientific Framework for Raising Thriving Teens.”

What her findings mean for educators, she argues, is that lessons for adolescents should be designed to lean into this period of human development.

“Adolescence is a time when young people are moving out into the world — think of the baby bird as leaving the nest,” she says. “And it's important for them to be exploratory. They react very strongly to experiences because they need to understand what's safe, what's not safe, whom they can trust, whom they can't trust, where they belong, where they don't belong, and who they want to be and who they are in a world that is much extended from their families.”

She hopes to reframe this period of development as what she calls “a time of possibility.”

And the work has led her to strong views on the question of whether or not to ban smartphones in schools.

Hear the full conversation on this week’s episode. Listen on Apple Podcasts, Overcast, Spotify, or wherever you listen to podcasts, or use the player on this page. Or read a partial transcript below, lightly edited for clarity.

EdSurge: What's happening in the brain in this phase of human development?

Ellen Galinsky: I love the analogy that Jennifer Silvers from UCLA used. She talked about it as a time when you're laying new roads. And what that means is that the connections among different parts of the brain are being formed and strengthened during adolescence, and she says if it's a stormy day, sometimes the concrete can get wet and mucky and messy, and that's the emotionality of adolescence.

But it is a time when these new connections are being made that help develop particularly what we call executive function skills. And that is a name that I find is pretty misunderstood. If people know it at all, it sounds like, ‘Shut up, sit still, listen to the teacher, be compliant, obey, organize your notebook, remember to bring your homework’ — those kinds of managerial skills. And in part that’s true: these are the brain-based skills that underlie our ability to set goals.

But executive function skills are always driven by goals. It's a time when we can then understand the landscape, the social landscape that we're in. We can understand our own perspective, the perspectives of others and how those differ from our own perspective. It's a time when we learn to communicate. I don't mean just talk, talk, talk. I mean thinking about what we say and better understanding how it's going to be heard by others. It's a time when we can learn to collaborate, which means dealing with the conflict that relationships with people and collaborating can bring.

This country could use a little executive function right now, and learning how to collaborate. It's a time when we learn how to problem-solve, and that has different components — including making meaning of the situation, thinking creatively in terms of solutions, not just what you've always done, but how might I solve this in a different way? And then understanding what works or what wouldn't work about that solution.

In other words, evaluating solutions, or relational reasoning as it's called in the literature. And then critical thinking, like making a decision on the basis of what you think is valid and accurate information and going forth in implementing that decision. It's also a time when we learn how to take on challenges. Now, there are some core skills, brain-based skills that underlie this, and in addition to people thinking that executive function skills are ‘shut up, sit still, listen to the teacher, listen to the parents,’ people also sometimes think of them as soft skills. These are the most neurocognitive skills we have. They're the part of the brain that coordinates our social, emotional and behavioral capacities in order to achieve goals.

There is this idea that school is mainly for academic content and that's what is usually measured on statewide tests of performance. But it sounds like you're arguing that soft skills are even more important in the teen years than academic skills.

I think they're called soft skills to differentiate them from academic skills, but they're not soft. They're really hard skills. They are pulling together all of our capacities so that we can achieve what we want to achieve and live intentionally. So these are very strongly neurocognitive skills and not something soft and squishy that is beside the point.

We tend to think of learning in the early years as about numbers and letters and math and learning to read. And those sorts of things are critical, but these soft skills are the skills that help us learn those numbers and letters and learning how to do math and learning how to read.

So we have 20 years of research that shows that these soft skills are more predictive of success in school and in life. These skills are more predictive than, or as predictive as, IQ or socioeconomic status, which are the big things in predicting how well we do in life.

You talk about something I haven’t heard much, which is that schools are often too future-focused, and you quote a 16-year-old who says: “I feel like everything is for the future. In middle school, everyone's pressuring you to be ready for high school. In high school, everyone's pressuring you to be ready for college. In college, everyone's pressuring you to be ready for life.” Can you say more about this?

I can go back historically to 1992 when the first President Bush created educational goals, and the first educational goal was that young children will be ready for school. And that, I think at least in my many years in education, ushered in the period of ‘readiness.’ And we became ready for school and then ready for college and then ready for life. And those goals worked, in the sense that people got it: they were a way of understanding the importance of education.

But it has had its downside, I think. Adults have to learn to live in the now. Think about how many books are written to help us as grown-ups be in the present, pay attention to whom we're with. Not always be focusing on our to-do list and what's in the future.

Readiness is important. I'm not throwing the baby out with the bath water. But we need to be in the nowness, too. We need to be able to help children live these years. In that particular group where you just quoted a 16-year-old, another 16-year-old said, ‘My parents are always saying, these are the best years of my life. But why can't I live them? They want to go back to them, but they're not letting me live them now.’

I have to ask you about a big topic in the news these days, about whether to keep smartphones out of schools and keep people younger than 16 off social media. The biggest proponent of this right now is Jonathan Haidt, who has a new book called “The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.” Do you agree with Haidt’s argument there, that teens would be far better off without access to social media and smartphones during this developmental time?

I don't have a Yes or No reaction. I think Haidt has raised a very important issue, which is ‘What are cellphones doing in our society?’ I wish that he hadn't called it an anxious generation, though. That's just stereotyping kids. And I wish that he hadn't freaked out parents so that they overreact. Parents are waiting for bad news about their kids. We want to protect our kids. We want them to be safe. We want them to have a good life. Being freaked out about something doesn't always help us do that.

The science is correlational. He does eventually say that, so there isn’t proof that phones and social media are causing anxiety. The National Academies of Sciences put out a report in December of last year that said that the science is correlational. We don't know, particularly for all kids. For some kids there's evidence of harm, but there also is evidence of benefits.

But here's my biggest issue with Haidt. I think he wonderfully understands the importance of play, and he understands the importance of autonomy, but then [he argues for] jumping in and reacting to this without teaching kids the skills to manage it themselves. If we are banning cellphones, first of all, kids will get around it, won't they? It's the kid currency. If we're doing that in a way that doesn't involve them, we're going to repeat the mistakes that we've made with ‘stop smoking.’ Evidence shows very, very clearly that the ‘just say no’ approach in adolescence — where there's a need for autonomy — does not work. In the studies on smoking, it increased smoking.

I wish we would carry out Jon Haidt’s emphasis on autonomy, and if schools would say, look, kids agree, there are bad things about cellphones. They're distracting, they're addictive. You see people who are ‘perfect.’ You see that you weren't invited to the mall with all the girls like Taylor Swift. We can't let the use of it, though, just become negative. So there have to be some rules about it, and the kids could help the adults even come up with the rules. We don't want cellphones in the school, but how would that best work if the kids aren't part of the solution?

One of the most frequent things that young people are asking me is, ‘How am I going to have the skills to fare in the adult world if we fix problems for kids?’

If we fix problems for kids, then they're going to go to college and always be connected with us anytime they have a problem. So we'll continue to fix things for them. They're going to be taking anti-anxiety medication. I mean, I'm exaggerating, but this is the time for them to learn these skills, to begin to deal in constructive ways with society. Young people can be part of the solution, and we'll be developing skills in them. And that's my main beef with the discussion that's going on.

What advice do you have for educators to best embrace this developmental period for teens?

Risk-taking is seen as negative. We have defined it as negative risk-taking: drinking, drugs, bad driving, texting. We say, ‘Why do they make such stupid decisions, why such risky behavior?’ And we need to understand that this is a period of their lives when they're learning to be brave.

I love the way Ron Dahl at the University of California at Berkeley says it. They have more of a fear reaction and they are sensation seekers. The highs are higher, the lows are lower. So we need to give them opportunities to take positive risks — positive risks to help other people who are less fortunate, positive risks to try something that might be hard for them, positive risks to stand up for something that they believe in.

We need to give them opportunities to figure out who they are, to play into their development, which is a time when they are feeling things so strongly, and give them experiences for the benefit of themselves and for the benefit of society.

For example, I think of learning to clean up a pond that is polluted, or giving toys to kids who don't have them near their playground. There are just so many things. That's a positive risk. That is so cool. Doing something for the world. Things that young people care about, where they're learning the skills that go along with that. They're learning that they can be contributors to society.

Listen to the full conversation on the EdSurge Podcast.

© cosmaa / Shutterstock

What Brain Science Says About How to Better Teach Teenagers

‘College for What?' High School Students Want Answers Before Heading to Campus

ST. PAUL, Minn. — What do you want to be when you grow up? That’s a question long faced by high school students. But these days, students have access to far more information than in the past about what, specifically, they could do as a job after they graduate.

And that is changing the way students are thinking about whether or not they want to go to college — or when they want to go.

These shifting attitudes were evident in March at Central High School here, at a daylong event dubbed the “Opportunity Fair.” More than 100 local businesses set up tables with company banners and flyers about what it means to work for them, with representatives on hand to answer questions.

Some of the jobs represented require college degrees. Others don’t. Some of the employers here said they have career paths for both, such as a medical-device company that looks for folks out of high school to work on their factory floor as well as college grads to join their design teams. And other companies look for talented students for entry-level jobs, with the promise to help them pay for college or more training later if needed.

“I don't know if I'm going to go to college right after I get out of high school,” said one junior. “But I think that at some point in my future when I want to get a professional job, I probably will go to college before I do that. I don't think I need to rush into it. I don't want to end up failing college or anything like that.”

That’s something that people who work with high school students on their choices are hearing more these days, says Liz Williams, a senior program officer for the Greater Twin Cities United Way. Part of her job is helping high schools set up programs that show students their career options.

“When I think about my own journey,” Williams said, “I have an undergraduate degree in Spanish and Portuguese, so it was a really cool thing to study. I got to travel, I got to learn languages. But it also gave me zero direction as to what careers were possible. And so I had to sort of find that on my own.”

Today students are “asking better questions,” she said. “So I actually think there's a lot of wisdom in that skepticism of, ‘I'm not sure college is right for me. I know I'm going to have to take on debt. I have a cousin, a parent who has taken on that type of debt and I see what that is like.’ They also see adults who maybe don't have debt but hate the work that they do. … And so I think that there's this trend toward taking a step back and really thinking about what they want to do, and if it is college, thinking more critically about ‘Why college?’, and ‘College for what?’”

This is the fourth episode of our podcast series Doubting College, where we’re exploring: What happened to the public belief in college? And how is that shaping the choices young people are making about what to do after high school?

For this installment we’re focusing on the opportunities young people have these days, the changing ways that high school counselors and education leaders are presenting those choices, and what these students think about their options.

Listen to the episode on Apple Podcasts, Overcast, Spotify, YouTube or wherever you listen to podcasts, or use the player on this page.

© Photo by Jeffrey R. Young / EdSurge

‘College for What?' High School Students Want Answers Before Heading to Campus

Can ‘Linguistic Fingerprinting’ Guard Against AI Cheating?

Since the sudden rise of ChatGPT and other AI chatbots, many teachers and professors have started using AI detectors to check their students’ work. The idea is that the detectors will catch if a student has had a robot do their work for them.

The approach is controversial, though, since these AI detectors have been shown to return false positives — asserting in some cases that text is AI-generated even when the student did all the work themselves without any chatbot assistance. The false positives seem to happen more frequently with students who don’t speak English as their first language.

So some instructors are trying a different approach to guard against AI cheating — one that borrows a page out of criminal investigations.

It’s called “linguistic fingerprinting,” where linguistic techniques are used to determine whether a text has been written by a specific person based on analysis of their previous writings. The technique, which is sometimes called “authorship identification,” helped catch Ted Kaczynski, the terrorist known as the Unabomber for his deadly series of mail bombs, when his 35,000-word anti-technology manifesto was matched to his previous writings.

Mike Kentz is an early adopter of the idea of bringing this fingerprinting technique to the classroom, and he argues that the approach “flips the script” on the usual way to check for plagiarism or AI. He’s an English teacher at Benedictine Military School in Savannah, Georgia, and he also writes a newsletter about the issues AI raises in education.

Kentz shares his experience with the approach — and talks about the pros and cons — in this week’s EdSurge Podcast.

Hear the full story on this week’s episode. Listen on Apple Podcasts, Overcast, Spotify, or wherever you listen to podcasts, or use the player on this page. Or read a partial transcript below, lightly edited for clarity.

EdSurge: What is linguistic fingerprinting?

Mike Kentz: It's a lot like a regular fingerprint, except it has to do with the way that we write. And it's the idea that we each have a unique way of communicating that can be patterned, it can be tracked, it can be identified. If you have a known document written by somebody, you can kind of pattern their written fingerprint.

How is it being used in education?

If you have a document known to be written by a student, you can run a newer essay they turn in against the original fingerprint and see whether or not the linguistic style matches: the syntax, the word choice and the lexical density. …

And there are tools that produce a report. And it's not saying, ‘Yes, this kid wrote this,’ or ‘No, the student did not write it.’ It's on a spectrum, and there's tons of vectors inside the system that are on a sort of pendulum. It's going to give you a percentage likelihood that the author of the first paper also wrote the second paper.
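Commercial fingerprinting tools are proprietary and weigh many signals at once, but the basic idea can be illustrated with a toy comparison of function-word frequencies, one classic stylometric feature alongside syntax and lexical density. This is a deliberately simplified sketch, not any vendor’s method.

```python
import math
import re
from collections import Counter

# A short list of function words; real stylometric systems track hundreds
# of features. This toy list is only for illustration.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "with", "as", "but", "not"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(known: str, disputed: str) -> float:
    """Cosine similarity between two style profiles (1.0 = identical)."""
    a, b = profile(known), profile(disputed)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0
```

A real tool would combine many such measurements and calibrate them into the percentage likelihood Kentz describes, rather than reporting one raw similarity score.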

I understand that there was recently a time at your school when this approach came in handy. Can you share that?

The freshman science teacher came to me and said, ‘Hey, we got a student who produced a piece of writing that really doesn't sound like him. Do you have any other pieces of writing, so that I can compare and make sure that I'm not accusing him of something when he doesn't deserve it?’ And I said, ‘Yeah, sure.’

And we ran it through a [linguistic fingerprint tool] and it produced a report. The report confirmed what we thought: that it was unlikely to have been written by that student.

The biology teacher went to the mother — and she didn’t even have to use the report — and said that it doesn’t seem like the student wrote it. And it turned out his mom wrote it for him, more or less. And so in this case it wasn’t AI, but the truth was just that he didn't write it.

Some critics of the idea have noted that a student’s writing should change as they learn, and therefore the fingerprint based on an earlier writing sample might no longer be accurate. Shouldn’t students’ writing change?

If you've ever taught middle school writing, which I have, or if you taught early high school writing, their writing does not change that much in eight months. Yes, it improves, hopefully. Yes, it gets better. But we are talking about a very sophisticated algorithm and so even though there are some great writing teachers out there, it's not going to change that much in eight months. And you can always run a new assignment to get a fresh “known document” of their writing later in the term.

Some people might worry that since this technique came from law enforcement, it has a kind of criminal justice vibe.

If I have a situation next year where I think a kid may have used AI, I am not going to immediately go do the fingerprinting process. That's not gonna be the first thing I do. I'll have a conversation with them first. Hopefully, there's enough trust there, and we can kind of figure it out. But this, I think, is just a nice sort of backup, just in case.

We do have a system of rewards and consequences in a school, and you have to have a system for enforcing rules and disciplining kids if they step out of line. For example, [many schools] have cameras in the hallways. I mean, we do that to make sure that we have documented evidence in case something goes down. We have all kinds of disciplinary measures that are backed up by mechanisms to make sure that that actually gets held up.

How optimistic are you that this and other approaches that you're experimenting with can work?

I think we're in for a very bumpy next five years or so, maybe even longer. I think the Department of Education or local governments need to establish AI literacy as a core competency in schools.

And we need to change our assessment strategies and change what we care about kids producing, and acknowledge that written work really isn't going to be it anymore. You know, my new thing also is verbal communication. So when a kid finishes an essay, I'm doing it a lot more now where I'm saying: All right, everybody's going to go up without their paper and just talk about their argument for three to five minutes, or whatever it may be, and your job is to verbally communicate what you were trying to argue and how you went about proving it. Because that's something AI can't do. So my optimism lies in rethinking assessment strategies.

My bigger fear is that there is going to be a breakdown of trust in the classroom.

I think schools are gonna have a big problem next year, with lots of conflicts between students and teachers where a student says, ‘Yeah, I used [AI], but it's still my work,’ and the teacher goes, ‘Any use is too much.’

Or what's too much and what's too little?

Because any teacher can tell you that it's a delicate balance. Classroom management is a delicate balance. You're always managing kids' emotions, and where they're at that day, and your own emotions, too. And you're trying to develop trust, and maintain trust and foster trust. We have to make sure this very delicate, beautiful, important thing doesn't fall to the ground and smash into a million pieces.

Listen to the full conversation on the EdSurge Podcast.

© Pixels Hunter / Shutterstock

Can ‘Linguistic Fingerprinting’ Guard Against AI Cheating?

Los Angeles School District Launched a Splashy AI Chatbot. What Exactly Does It Do?

Balloon archways surrounded the stage as the superintendent of the Los Angeles Unified School District, Alberto Carvalho, last month announced what he hailed as a pioneering use of artificial intelligence in education. It’s a chatbot called “Ed” — an animated talking sun — which he described as “our nation’s very first AI-powered learning-acceleration platform.”

More than 26 members of local and national media were on hand for the splashy announcement (a detail that Carvalho noted in his remarks), and the event also featured a person dressed in a costume of Ed, the shiny animated character that has long been a mascot of the school district, for attendees to take selfies with.

While many publications wrote about the announcement at the nation's second-largest public school district, details about what the $6 million system actually does have been thin. A press release from the LA public school district said that Ed “provides personalized action plans uniquely tailored to each student,” and at one point Carvalho described the system as a “personal assistant.”

But Carvalho and others involved in the project have also taken pains to point out that the AI chatbot doesn’t replace human teachers, human counselors or even the district’s existing learning management system.

“Ed is not a replacement for anything,” Carvalho said at a mainstage presentation this month at the ASU+GSV Summit in San Diego. “It is an enhancement. It will actually create more opportunities and free our teachers and counselors from bureaucracy and allow them to do the social, interactive activity that builds on the promise of public education.”

So what, specifically, does the system do?

EdSurge recently sat down with the developers of the tool, custom-made for the district by Boston-based AI company AllHere, for a demo, to try to find out. And we also talked with edtech experts to try to better understand this new tool and how it compares to other approaches for using AI in education.

A New Tech Layer

In some ways the AI system is an acknowledgement that the hundreds of edtech tools the school district has purchased don’t integrate very well with each other, and that students don’t use many of the features of those tools.

As at many schools these days, students spend much of their learning time on a laptop or an iPad, and they use a variety of tech tools as they move through the school day. In fact, students might use a different online system for every class subject on their schedule, and log into other systems to check their grades or access supplementary resources to help with things like social well-being or college planning.

So LAUSD’s AI chatbot serves as a new layer that sits on top of the other systems the district already pays for, allowing students and parents to ask questions and get answers based on data pulled from many existing tools.

“School systems oftentimes purchase a lot of tools, but those are underutilized,” Joanna Smith-Griffin, chief executive officer at AllHere, told EdSurge. “Typically these tools can only be accessed as independent entities where a student has to go through a different login for each of these tools,” she added. “The first job of Ed was, how do you create one unified learning space that brings together all the digital tools, and that eliminates the high number of clicks that otherwise the student would need to navigate through them all?”
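AllHere has not published Ed’s architecture, but the unified layer Smith-Griffin describes follows a familiar integration pattern: one connector per tool, all feeding a single queue behind a single login. The sketch below is purely illustrative; the connector interface is invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    source: str   # which tool the item came from, e.g. "LMS" or "IXL"
    title: str
    due: str      # ISO date string, so sorting by text sorts by date

def unified_feed(connectors) -> list[Task]:
    """Merge each tool's task list into one queue, soonest deadline first.

    Each connector is assumed to wrap one tool's API behind a common
    fetch_tasks() method; real integrations would also handle logins,
    pagination and errors.
    """
    tasks: list[Task] = []
    for connector in connectors:
        tasks.extend(connector.fetch_tasks())
    return sorted(tasks, key=lambda t: t.due)
```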

The chatbot can also help students and parents who don’t speak English as their first language by translating information it displays into about 100 different languages, says Smith-Griffin.

But the system does not just sit back and wait for students and parents to ask it questions. A primary goal of Ed is to nudge and motivate students to complete homework and optional enrichment activities. That’s the part of the system leaders are referring to when they say it can “accelerate” learning.

For instance, in a demo for EdSurge, Toby Jackson, chief technology officer at AllHere, showed a sample Ed dashboard screen for a simulated account of a student named “Alberto.” As the student logs in, an animation of the Ed mascot appears, makes a corny joke, and notes that the student has three recommended activities available.

“Nice, now let’s keep it going,” Ed said after a student completed the first task. “And remember, if you stop swimming in the ocean, you’ll sink. Now, why you’re swimming in the middle of the ocean, I have no idea. But the point is, this is no time to stop.”

The Ed chatbot, displayed to users as an animated sun, guides students through assignments and extra work for their classes, and can connect to other school resources as well.

The tasks Ed surfaces are pulled from the learning management system and other tools that his school is using, and Ed knows what assignments Alberto has due for the next day and what other optional exercises fit his lessons.

The student can click on an activity, which shows up in a window that automatically opens, say, a math assignment in IXL, an online system used at many schools.

The hope is that the talking sunburst known as Ed will be relatable to students, and that the experience will “feel like fun,” says Smith-Griffin. Designers tried to borrow tropes from video games, and in the demo, Ed enthusiastically says, “Alberto, you met your goal today,” and points to even more resources he could turn to, including links to “Read a book,” “Get tutoring,” or “Find a library near me.” The designers also use two versions of Ed’s digital voice, depending on the grade level of the student: a higher-pitched, more cartoon-like voice for younger students, and a slightly more serious one for those in middle and high school.

“We want to incentivize daily usage,” says Smith-Griffin. “Kids are excited about keeping their streaks up and stars.”

And she adds that the idea is to use algorithms to make personalized recommendations to each student about what will help his or her learning — the way that Netflix recommends movies based on what a user has watched in the past.
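
Her Netflix analogy points to a well-worn family of techniques. As a toy illustration only (Ed’s actual algorithms have not been disclosed, and the students and activities below are made up), a recommender can score activities a student hasn’t tried by how often similar students completed them:

    # Toy recommendation-by-similarity, in the spirit of the Netflix analogy.
    # All data here is invented for illustration.
    from collections import Counter

    # Enrichment activities each (fictional) student has completed.
    history = {
        "alberto": {"fractions_set_3", "reading_log"},
        "maya":    {"fractions_set_3", "reading_log", "library_scavenger_hunt"},
        "sam":     {"fractions_set_3", "typing_drills"},
    }

    def recommend(student: str, k: int = 2) -> list[str]:
        """Suggest activities popular among students with overlapping histories."""
        mine = history[student]
        votes: Counter[str] = Counter()
        for other, theirs in history.items():
            if other == student:
                continue
            overlap = len(mine & theirs)    # shared activities act as a similarity score
            for activity in theirs - mine:  # only suggest things not yet done
                votes[activity] += overlap
        return [activity for activity, _ in votes.most_common(k)]

    print(recommend("alberto"))  # ['library_scavenger_hunt', 'typing_drills']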

Customer Service?

Of course, most teachers already take pains to make clear to their students what assignments are due, and many teachers, especially in younger grades, employ plenty of human-made strategies like sending newsletters to parents. And learning management systems like Schoology and Seesaw already offer at-a-glance views for students and parents of what is due.

The question is whether a chatbot interface that can pull from a variety of systems will make a difference in usage of school resources.

“It’s basically customer service,” says Rob Nelson, executive director for academic technology and planning at the University of Pennsylvania, who writes a newsletter about generative AI and writing.

He described the strategy as “risky,” noting that previous attempts at chatbots for tech support have had mixed results. “This feels like the beginning of a Clippy-level disaster,” he wrote recently, referring to the animated paper clip Microsoft used in its products starting in the late 1990s, which some users found nagging or distracting. “People don’t want something with a personality,” he added, “they just want the information.”

“My initial thought was why do you need a chatbot to do that?” Nelson told EdSurge in an interview. “It just seems to be presenting links and information that you could already find when you log in.”

As he watched a video recording of the launch event for Ed the chatbot, he said, “it had the feeling of a lot of pomp and circumstance, and a lot of surface hullabaloo.”

His main question for the school district: What metrics are they using to measure whether Ed is worth the investment? “If more people are accessing information because of Ed then maybe that’s a win,” he added.

"My initial thought was why do you need a chatbot to do that?”
— Rob Nelson, executive director for academic technology and planning at the University of Pennsylvania

Officials at LAUSD declined an interview request from EdSurge for this story, though they sent a statement in response to the question of how they plan to measure success: “It's too early to derive statistically significant metrics to determine success touch points. We will continue assessing the data and define those KPIs as we learn more.”

Setting Up Guardrails

LAUSD leaders and the designers of Ed stress that they’ve put in guardrails to avoid potential pitfalls of generative AI chatbots. After all, the technology is prone to so-called “hallucinations,” meaning that chatbots sometimes present information that sounds correct but is made-up or wrong.

“The bot is not as open as people may think,” said Jackson, of AllHere. “We run it through filters,” he added, noting that the chatbot is designed to avoid “toxicity.”

That task may not be easy, though.

“These models aren’t very good at keeping up with the latest slang,” he acknowledged. “So we get a human being involved to make that determination” if an interaction is in doubt. Moderators monitor the software, he says, and they can see a dashboard where interactions are coded red if they need to be reviewed right away. “Even the green ones, we review,” he said.
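
The red/green triage he describes is, at bottom, a routing rule: interactions an automated filter cannot confidently clear jump to the front of the human review queue. Here is a minimal sketch assuming a placeholder keyword filter; AllHere’s real classifiers are unpublished and surely more sophisticated:

    # Hedged sketch of human-in-the-loop triage: every interaction is reviewed,
    # but flagged ones are coded red for immediate attention.
    # The term list is a placeholder, not AllHere's actual filter.
    FLAGGED_TERMS = {"hurt myself", "hate"}

    def triage(message: str) -> str:
        """Return 'red' for review-now, 'green' for routine later review."""
        lowered = message.lower()
        if any(term in lowered for term in FLAGGED_TERMS):
            return "red"
        return "green"

    for msg in ["When is my essay due?", "I want to hurt myself"]:
        print(triage(msg), "-", msg)  # moderators still review the green ones too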

So far the system has been rolled out in a soft launch to about 55,000 students from 100 schools in the district, and officials say they’ve had no reports of misconduct by the chatbot.

Leaders at AllHere, which grew out of a project at Harvard University’s education school in 2016, said they’ve found that in some cases, students and parents feel more comfortable asking difficult or personal questions of a chatbot than of a human teacher or counselor. And if someone confides to Ed that they are experiencing food insecurity, for instance, the system might refer them to a school official who can connect them with resources.

An Emerging Category

The idea of a chatbot like Ed is not completely new. Some colleges have been experimenting with chatbot interfaces to help their students navigate various campus resources for a few years.

The challenge for using the approach in a K-12 setting will be making sure all the data being fed to students by the chatbot is up-to-date and accurate, says James Wiley, a vice president at the education market research firm ListEdTech. If the chatbot is going to recommend that students do certain tasks or consult certain resources, he adds, it’s important to make sure the recommendations aren’t drawing from student profiles that are incomplete.

And because chatbots are a black box when it comes to what text they will generate next, he adds, “If I have AI there, I might not see the errors [in the data] because the layer is opaque.”

He said officials at the school district should develop some kind of “governance model” to evaluate and check the data in its systems.

“The stakes here are going to be pretty high if you get it wrong,” he says.

Whether this type of system catches on at other schools or at colleges remains to be seen. One challenge, Wiley says, is that at many educational institutions, no one is in charge of the student and parent experience. So it’s not always clear whether a tech official would lead the effort, or perhaps, in a college setting, someone leading enrollment.

In the end, Wiley says, the Ed chatbot is hard to describe quickly (he opted to call it an “engagement and personalization layer between systems and students”).

If done right, “it could be more than just a gimmick,” he says, creating a tool that does what Waze achieves for drivers trying to find the best route on a road, only for getting through school or college.

© Still from an LAUSD promotional video

Los Angeles School District Launched a Splashy AI Chatbot. What Exactly Does It Do?

Scholar Hopes to Diversify the Narrative Around Undocumented Students

When Felecia Russell was a high school student growing up near Los Angeles, she was getting good grades and plenty of encouragement to go to college.

But when it came time to do the paperwork of applying to a campus and for financial aid, Russell asked her mom for her Social Security number.

“My mom was like, ‘yeah, you don’t have one,’” she remembers.

Russell didn’t have a Social Security number because she didn’t have permanent legal status in the U.S. She was “undocumented.” She had moved to the U.S. from Jamaica when she was about 12. But she hadn’t fully understood until that moment, as she Googled for more details, how her immigration status could dash her dreams.

“All I saw online was ‘illegal, illegal, illegal,’” she remembers. And everything online seemed to tell her “that means you can’t go to college.”

On this week’s EdSurge Podcast, we tell the story of Russell’s fight to get her college degree, and how she has become an advocate for other undocumented students. (She went on to get her Ph.D. and is now an adjunct professor at California Lutheran University.)

Her biggest message is that even when colleges do work to help students who lack permanent legal status, they often aren’t paying attention to Black undocumented students, because the majority of services in this space are designed for Latino students.

“Some of it makes sense,” she says, “because the Latinx population is two-thirds of the undocumented population, so it makes sense that everything is centered around their experience.”

Yet the undocumented population in the U.S. is 6 percent Black, she says, and a sizable share of the 408,000 undocumented students in colleges are Black. Data from the Higher Ed Immigration Portal from the Presidents’ Alliance on Higher Education and Immigration, which Russell directs, shows that as of 2023, 46 percent of undocumented students at college were Hispanic, while 27 percent were Asian, 14 percent were Black and 10 percent were white. Some people identify as both Black and Latino, and commonly describe themselves as Afro Latino.

“And so it's so dangerous, because now we're forcing these people back into the shadows,” says Russell, who became a DACA recipient but as a student often didn’t feel welcome in support groups for undocumented students. “Now they don't have a space to belong.”

Russell shares her story in a new book out this month, called “Amplifying Black Undocumented Student Voices in Higher Education.”

The book also includes deep research on the topic, based on extensive interviews she did with 15 Black undocumented college students. And she has recommendations for school and college leaders on how to better support the full spectrum of students facing immigration issues.

Hear the full story on this week’s episode. Listen on Apple Podcasts, Overcast, Spotify, Stitcher or wherever you listen to podcasts, or use the player on this page.

Scholar Hopes to Diversify the Narrative Around Undocumented Students