When the Teaching Assistant Is an AI ‘Twin’ of the Professor

Two instructors at Vilnius University in Lithuania brought in some unusual teaching assistants earlier this year: AI chatbot versions of themselves.

The instructors — Paul Jurcys and Goda Strikaitė-Latušinskaja — created AI chatbots trained only on academic publications, PowerPoint slides and other teaching materials that they had created over the years. And they called these chatbots “AI Knowledge Twins,” dubbing one Paul AI and the other Goda AI.

They told their students to take any questions they had during class or while doing their homework to the bots first before approaching the human instructors. The idea wasn’t to discourage asking questions, but rather to nudge students to try out the chatbot doubles.
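The episode doesn't detail how the twins were built, but a common pattern for chatbots restricted to a specific body of material is retrieval-augmented generation: index the instructor's documents, retrieve the passages most relevant to a question, and have a language model answer only from those passages. The sketch below shows just the retrieval step using a simple bag-of-words similarity; the names and toy corpus are illustrative, not the instructors' actual system.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a real system would use proper NLP tokenization.
    return [w.strip(".,!?").lower() for w in text.split()]

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, documents, k=2):
    # Rank the instructor's documents by similarity to the question.
    q = Counter(tokenize(question))
    ranked = sorted(documents,
                    key=lambda d: cosine(q, Counter(tokenize(d))),
                    reverse=True)
    return ranked[:k]

# Toy corpus standing in for slides and publications.
corpus = [
    "Copyright protects original works of authorship fixed in a tangible medium.",
    "A patent grants exclusive rights to an invention for a limited term.",
    "Trademarks protect brand names and logos used on goods and services.",
]

passages = retrieve("What does copyright protect?", corpus, k=1)
# The retrieved passages would then be passed to a language model as context,
# with an instruction to answer only from the instructor's materials.
print(passages[0])
```

In a full system, the retrieval step is what keeps the bot "trained only" on the instructors' own materials: the model never sees a question without the relevant excerpts attached.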


“We introduced them as our assistants — as our research assistants that help people interact with our knowledge in a new and unique way,” says Jurcys.

Experts in artificial intelligence have for years experimented with the idea of creating chatbots that can fill this support role in classrooms. With the rise of ChatGPT and other generative AI tools, there’s a new push to try robot TAs.

“From a faculty perspective, especially someone who is overwhelmed with teaching and needs a teaching assistant, that's very attractive to them — then they can focus on research and not focus on teaching,” says Marc Watkins, a lecturer of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Teachers of Writing.

But just because Watkins thought some faculty would like it doesn’t mean he thinks it’s a good idea.

“That's exactly why it's so dangerous too, because it basically offloads this sort of human relationships that we're trying to develop with our students and between teachers and students to an algorithm,” he says.

On this week’s EdSurge Podcast, we hear from these professors about how the experiment went — how it changed classroom discussion but sometimes caused distraction. A student in the class, Maria Ignacia, also shares her view on what it was like to have chatbot TAs.

And we listen in as Jurcys asks his chatbot questions — and admits the bot puts things a bit differently than he would.

Listen to the episode on Spotify, Apple Podcasts, or on the player on this page.

Finding the Right Technology for Early Elementary Classrooms

I can still vividly recall the chaotic scene of introducing iPads into Kindergarten classrooms. Picture it: a room bustling with eager five-year-olds unaccustomed to center procedures and five iPads as the hottest commodity amidst blocks, dolls and traditional learning stations. What’s the Kindergarten version of the Hunger Games? Imagine that.

Managing a technology rollout for littles felt tough, but the real challenge didn’t hit me until I had to engage with some of the apps intended for our students. They were clunky, confusing and, more often than not, frustrating for our young learners. These children were still mastering the grip of big pencils and manipulating objects with their tiny fingers. Yet, they were expected to click on tiny multiple-choice buttons or log in independently.

Amidst this chaos, I realized the importance of finding technology that caters to the needs of our youngest learners. It’s not just about having the latest gadgets; it's about leveraging technology to support their development and enrich their learning journey. Below are nine key features I look for during the edtech selection process.

1. Safe and Age-Appropriate

Ensure that the content is safe and suitable for young learners, with appropriate levels of challenge. Look for tools that provide a safe and secure online environment, with features such as password protection, privacy settings and age-appropriate content filters. Avoid apps and programs that include ads or in-app purchases, which can be distracting and may lead to inappropriate content exposure. Choose tools that offer customizable settings, allowing teachers to adjust the difficulty level and content to meet the needs of their students.

2. Inclusive Design

Inclusive design in educational technology is crucial to ensure that all students, regardless of their abilities or backgrounds, have equal access to learning opportunities. Tools designed with inclusivity in mind can accommodate a variety of learning styles and needs. For instance, apps that offer multiple modes of interaction, such as touch, voice and visual prompts, can support students with different abilities. Research supports the efficacy of inclusive design in improving educational outcomes.

3. Engaging and Fun

Digital learning tools should be interactive and entertaining, capturing children's attention while fostering learning. Look for apps and programs that use bright colors, interesting animations and fun characters to keep students engaged. Interactive games and activities that allow students to explore and learn at their own pace are particularly effective in captivating young learners. Khan Academy Kids is a prime example, offering joyful, developmentally appropriate learning experiences that appeal to young minds.

4. Aligned With Curriculum Goals

Choose tools that align with educational standards and support your curriculum objectives. Look for apps and programs that cover key concepts and skills taught in early elementary grades, such as phonics, early literacy, basic math skills and foundational science concepts. Ensure that the content is relevant to your curriculum goals and supports the learning objectives you want to achieve in your district. Khan Academy Kids, for instance, covers a broad range of subjects, ensuring that all essential areas of early learning are addressed, with an emphasis on boosting pre-literacy skills. Appropriately aligning digital tools with curriculum standards can enhance student achievement and retention.


Young learners waiting in line for their devices

5. Easy to Navigate

The interface should be intuitive and user-friendly, allowing even young children to use the tool independently. Avoid apps and programs with complex navigation or confusing instructions. Look for tools that have simple, easy-to-understand menus and controls, with clear prompts and feedback to guide students through the learning process. Teachers should be able to quickly and easily set up and manage the tools, saving time and frustration for both teachers and students.

6. Connect School and Home

Effective edtech tools should also bridge the gap between school and home. Parents often want to support their children's learning but may feel unsure how to do so effectively. This is where apps like Khan Academy Kids can be particularly valuable. They provide parents with the tools they need to practice essential skills, such as literacy, at home without requiring a deep foundation in teaching. With enthusiasm and a user-friendly platform, parents can engage their children in meaningful educational activities that reinforce classroom learning. Guidance and resources for parents can significantly enhance the impact of edtech tools on student learning.

7. Personalized Learning

Look for tools that leverage artificial intelligence to create personalized learning experiences that adapt to each child's unique needs and progress. AI-driven tools can provide real-time feedback, adjust the difficulty of activities based on performance and identify areas where a student may need additional support. These capabilities make learning more effective and engaging for young children. Research shows that personalized learning through AI can significantly enhance educational outcomes.
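To make that adaptation concrete, here is a toy sketch of the core loop such tools use: track a rolling window of recent answers and move the difficulty level when accuracy drifts past a threshold. The window size and thresholds are invented for illustration and not drawn from any particular product.

```python
from collections import deque

class AdaptiveDifficulty:
    """Adjust difficulty from a rolling window of right/wrong answers."""

    def __init__(self, window=5, raise_at=0.8, lower_at=0.4):
        self.results = deque(maxlen=window)  # recent answers, True = correct
        self.raise_at = raise_at             # accuracy above this -> harder items
        self.lower_at = lower_at             # accuracy below this -> easier items
        self.level = 1                       # 1 = easiest

    def record(self, correct):
        self.results.append(correct)
        if len(self.results) < self.results.maxlen:
            return self.level  # wait for a full window before adjusting
        accuracy = sum(self.results) / len(self.results)
        if accuracy >= self.raise_at:
            self.level += 1
            self.results.clear()  # start fresh at the new level
        elif accuracy <= self.lower_at:
            self.level = max(1, self.level - 1)
            self.results.clear()
        return self.level

tracker = AdaptiveDifficulty()
for answer in [True, True, True, True, True]:  # five correct in a row
    level = tracker.record(answer)
print(level)  # climbs to level 2 after a strong window
```

Real products layer richer signals on top (response time, hint usage, item-level difficulty estimates), but the feedback loop is the same: performance in, difficulty adjustment out.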

8. Insightful Assessments

Ongoing checks for understanding are a critical component of early childhood education, providing insights into student progress and areas needing improvement. Edtech tools streamline the formative assessment process, making it more efficient and less intrusive. Digital assessments offer immediate feedback, enabling teachers to quickly identify and address learning gaps. These tools also collect and analyze data over time, offering a comprehensive view of a student's development. Some platforms include built-in assessment features that help teachers track progress and tailor instruction accordingly. By enhancing teachers' ability to utilize data practices effectively, these tools support better-informed teaching strategies and improved student outcomes.

9. User-Friendly Data Tools

Select platforms that equip teachers with easy access to data and intuitive analysis tools. Effective data use is key to enhancing instruction and supporting student learning. Look for edtech solutions that offer training and professional development on data literacy, empowering teachers to integrate data-driven practices into their routines. Khan Academy Kids supports teachers with progress tracking and data visualization tools that simplify the analysis and application of student performance data. Embracing data-driven teaching can lead to more personalized and effective learning experiences for students.

By considering these features, early childhood educators can select digital tools that enhance learning and support the development of young learners in their classrooms. From interactive games to educational videos, the right tools can make a significant difference in engaging students and fostering a love of learning from an early age.

© Image Credit: Khan Academy Kids

Is It Fair and Accurate for AI to Grade Standardized Tests?

Texas is turning over some of the scoring process of its high-stakes standardized tests to robots.

News outlets have detailed the rollout by the Texas Education Agency of a natural language processing program, a form of artificial intelligence, to score the written portion of standardized tests administered to students in third grade and up.

Like many AI-related projects, the idea started as a way to cut the cost of hiring humans.

Texas found itself in need of a way to score exponentially more written responses on the State of Texas Assessments of Academic Readiness, or STAAR, after a new law mandated that at least 25 percent of questions be open-ended — rather than multiple choice — starting in the 2022-23 school year.

Officials have said that the auto-scoring system will save the state millions of dollars that otherwise would have been spent on contractors hired to read and score written responses — with only 2,000 scorers needed this spring compared to 6,000 at the same time last year.

Using technology to score essays is nothing new. Written responses for the GRE, for example, have long been scored by computers. A 2019 investigation by Vice found that at least 21 states use natural language processing to grade students’ written responses on standardized tests.

Still, some educators and parents alike felt blindsided by the news about auto-grading essays for K-12 students. Clay Robison, a Texas State Teachers Association spokesperson, says that many teachers learned of the change through media coverage.

“I know the Texas Education Agency didn’t involve any of our members to ask what they thought about it,” he says, “and apparently they didn’t ask many parents either.”

Because of the consequences low test scores can have for students, schools and districts, the shift to use technology to grade standardized test responses raises concerns about equity and accuracy.

Officials have been eager to stress that the system does not use generative artificial intelligence like the widely known ChatGPT. Rather, the natural language processing program was trained on 3,000 written responses submitted during past tests and has parameters it will use to assign scores. A quarter of the scores awarded will be reviewed by human scorers.
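The TEA hasn't published how its engine works, but automated scorers of this kind are typically built the same general way: extract features from human-scored responses, fit a model that predicts the human score, and route responses the model is unsure about to human review, much like the 25 percent sample described above. The sketch below is a deliberately simplified illustration with invented features; production systems use far richer linguistic signals and more careful calibration.

```python
import math

def features(text):
    # Toy features: length, vocabulary diversity, average word length.
    words = text.split()
    if not words:
        return (0.0, 0.0, 0.0)
    return (
        len(words),
        len(set(w.lower() for w in words)) / len(words),
        sum(len(w) for w in words) / len(words),
    )

def train(scored_responses):
    # Average the feature vectors of responses at each human-assigned score.
    centroids = {}
    for score in set(s for _, s in scored_responses):
        vecs = [features(t) for t, s in scored_responses if s == score]
        centroids[score] = tuple(sum(col) / len(vecs) for col in zip(*vecs))
    return centroids

def predict(text, centroids, flag_distance=30.0):
    # Nearest-centroid prediction; responses far from all training
    # centroids are flagged for human review instead of auto-scored.
    vec = features(text)
    dist = {s: math.dist(vec, c) for s, c in centroids.items()}
    score = min(dist, key=dist.get)
    return score, dist[score] > flag_distance  # (score, needs_human_review)

training = [
    ("Dogs good.", 0),
    ("I think dogs are good pets.", 1),
    ("Dogs make wonderful companions because they are loyal, playful, "
     "and they encourage their owners to exercise every single day.", 2),
]
centroids = train(training)
score, needs_review = predict("Cats are fine pets for busy families.", centroids)
```

The equity concerns raised below follow directly from this design: a model of this shape can only score well what resembles its training data, which is why the composition of those 3,000 training responses matters so much.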

“The whole concept of formulaic writing being the only thing this engine can score for is not true,” Chris Rozunick, director of the assessment development division at the TEA, told the Houston Chronicle.

The Texas Education Agency did not respond to EdSurge’s request for comment.

Equity and Accuracy

One question is whether the new system will fairly grade the writing of children who are bilingual or who are learning English. About 20 percent of Texas public school students are English learners, according to federal data, although not all of them are yet old enough to sit for the standardized test.

Rocio Raña is the CEO and co-founder of LangInnov, a company that uses automated scoring for its language and literacy assessments for bilingual students and is working on another one for writing. She’s spent much of her career thinking about how education technology and assessments can be improved for bilingual children.

Raña is not against the idea of using natural language processing on student assessments. She recalls one of her own graduate school entrance exams was graded by a computer when she came to the U.S. 20 years ago as a student.

What raised a red flag for Raña is that, based on publicly available information, it doesn’t appear that Texas developed the program over what she would consider a reasonable timeline of two to five years — which she says would be ample time to test and fine-tune a program’s accuracy.

She also says that natural language processing and other AI programs tend to be trained with writing from people who are monolingual, white and middle-class — certainly not the profile of many students in Texas. More than half of students are Latino, according to state data, and 62 percent are considered economically disadvantaged.

“As an initiative, it’s a good thing, but maybe they went about it in the wrong way,” she says. “‘We want to save money’ — that should never be done with high-stakes assessments.”

Raña says the process should involve not just developing an automated grading system over time, but deploying it slowly to ensure it works for a diverse student population.

“[That] is challenging for an automated system,” she says. “What always happens is it's very discriminatory for populations that don't conform to the norm, which in Texas are probably the majority.”

Kevin Brown, executive director of the Texas Association of School Administrators, says a concern he’s heard from administrators is about the rubric the automated system will use for grading.

“If you have a human grader, it used to be in the rubric that was used in the writing assessment that originality in the voice benefitted the student,” he says. “Any writing that can be graded by a machine might incentivize machine-like writing.”

Rozunick of the TEA told the Texas Tribune that the system “does not penalize students who answer differently, who are really giving unique answers.”

In theory, bilingual students and English learners who use Spanish could have their written responses flagged for human review, which would assuage fears that the system would give them lower scores.

Raña says that would be a form of discrimination, with bilingual children’s essays graded differently than those who write only in English.

It also struck Raña as odd that after adding more open-ended questions to the test, something that creates more room for creativity from students, Texas will have most of their responses read by a computer rather than a person.

The autograding program was first used to score essays from a smaller group of students who took the STAAR standardized test in December. Brown says that he’s heard from school administrators who told him they saw a spike in the number of students who were scored zero on their written responses.

“Some individual districts have been alarmed at the number of zeros that students are getting,” Brown says. “Whether it’s attributable to the machine grading, I think that’s too early to determine. The larger question is about how to accurately communicate to the families, where a child might have written an essay and gotten a zero on it, how to explain it. It's a difficult thing to try to explain to somebody.”

A TEA spokesperson confirmed to the Dallas Morning News that previous versions of the STAAR test only gave zeros to responses that were blank or nonsensical, and the new rubric allows for zeros based on content.

High Stakes

Concerns about the possible consequences of using AI to grade standardized tests in Texas can’t be understood without also understanding the state’s school accountability system, says Brown.

The Texas Education Agency distills a wide swath of data — including results from the STAAR test — into a single letter grade of A through F for each district and school. It’s a system that feels out of touch to many, Brown says, and the stakes are high. The exam and annual preparation for it was described by one writer as “an anxiety-ridden circus for kids.”

The TEA can take over any school district that has five consecutive Fs, as it did in the fall with the massive Houston Independent School District. The takeover was triggered by the failing letter grades of just one out of its 274 schools, and both the superintendent and elected board of directors were replaced with state appointees. Since the takeover, there’s been seemingly nonstop news of protests over controversial changes at the “low-performing” schools.

“The accountability system is a source of consternation for school districts and parents because it just doesn’t feel like it connects sometimes to what’s actually happening in the classroom,” Brown says. “So any time I think you make a change in the assessment, because accountability [system] is a blunt force, it makes people overly concerned about the change. Especially in the absence of clear communication about what it is.”

Robison says that his organization, which represents teachers and school staff, advocates abolishing the STAAR test altogether. The addition of an opaque, automated scoring system isn’t helping state education officials build trust.

“There’s already a lot of mistrust over the STAAR and what it purports to represent and accomplish,” Robison says. “It doesn't accurately measure student achievement, and there’s lots of suspicion that this will deepen the mistrust because of the way most of us were surprised by this.”

© Bas Nastassia / Shutterstock

3 Things Educators and Edtech Suppliers Need to Talk About

Advancements in technology are reshaping how we teach and learn, bringing new opportunities and challenges. Addressing those challenges takes a concerted effort to ensure that newer technologies are implemented thoughtfully and responsibly, with a focus on enhancing the educational experience for all students. Collaboration and open dialogue are key as we navigate this terrain, ensuring innovation meets the needs of today's educational institutions.

In almost every collaboration or discussion around what educators, schools and institutions need from their educational technology, three themes rise to the surface:

  1. The need for a trusted, interoperable and flexible edtech ecosystem.
  2. The growing reliance on data and analytics to help build that ecosystem.
  3. The exploration of generative AI’s role in that ecosystem.

Ecosystem Evolution

We need to build an ecosystem that works best for all educators and supports learners. That’s why it’s so important to bring everyone together, including educators from both K-12 and higher education, edtech suppliers, non-profits and government organizations, to ensure the solutions we build benefit all.

When building up edtech resources for any learning environment, whether it be a K-12 school district, institution of higher education or professional development, there is a lot to consider. Before acquiring a new edtech system, tool or app, technology leaders need to consider privacy and security concerns. How will the technology work with other tools? Will it make life easier for already overwhelmed educators, or is it just one more item on their to-do list? Is it accessible to all learners? Does it align with the curriculum? When the needs of the institution change, will it be easy and affordable to make those changes?

Of course, following interoperability standards can help ensure the entire system works together and makes it faster, cheaper and easier to make future changes or additions to the ecosystem.

Open rubrics from the 1EdTech community can help start the vetting process on data privacy, security, accessibility and generative AI, while CASE Network 2 helps to align those tools with academic standards.

The ecosystem as a whole is making a major impact. There are initiatives to increase personalized learning and equity across districts and states, technology management solutions to lift some of the burdens from both technology departments and educators, and strategies to empower educators to use new technology, to name a few.

Data and Analytics

Technology is becoming increasingly important in education, but budgets remain limited. According to Gartner's higher education predictions for 2024, only a little more than half of higher education institutions expect IT budgets to increase, and then only by about 2 percent; the other 48 percent expect budgets to stay flat or decrease. That means data and analytics will be crucial to selecting the right tools for each learning environment and proving their effectiveness.

1EdTech members are already using interoperability standards to see how their tools are being used, support student success and assess course impacts, but there is more to do.

Learning Tools Interoperability (LTI), OneRoster and Edu-API allow for the secure flow of data between various tools and systems, while the Caliper Analytics standard makes that data more accessible and easier to analyze. Members are working to break down silos across institutions and increase data insights and analysis to benefit teaching and learning in their institutions.
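As a rough illustration of what flows over these standards, the sketch below builds a minimal Caliper-style learning event. The field names follow the general JSON-LD shape of the Caliper Analytics v1.1 specification, but the IDs are invented, and the 1EdTech spec should be treated as the authority on required fields, profiles and the delivery envelope.

```python
import json
import uuid
from datetime import datetime, timezone

def make_caliper_event(actor_id, object_id):
    # Minimal Caliper-style learning event recording that a person
    # navigated to a resource; real events are wrapped in an envelope
    # and sent to an analytics endpoint.
    return {
        "@context": "http://purl.imsglobal.org/ctx/caliper/v1p1",
        "id": f"urn:uuid:{uuid.uuid4()}",
        "type": "NavigationEvent",
        "actor": {"id": actor_id, "type": "Person"},
        "action": "NavigatedTo",
        "object": {"id": object_id, "type": "WebPage"},
        "eventTime": datetime.now(timezone.utc).isoformat(),
    }

event = make_caliper_event(
    "https://example.edu/users/554433",             # illustrative IDs
    "https://example.edu/courses/bio101/syllabus",
)
print(json.dumps(event, indent=2))
```

Because every tool emits events in the same shape, an institution can pool activity data from many products into one analytics pipeline, which is what makes the cross-silo analysis described above possible.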

A coalition of leading institutions is also championing LTI Advantage Data to provide real-time information covering progress within courses, assessment results and product usage.

Generative AI

Finally, there is no question that generative AI is causing excitement, confusion and anxiety, but it does have the potential to improve teaching and learning if done right. Everyone has a different understanding and ability to start implementing AI in their ecosystems.

The 1EdTech community already started establishing guidance and tools to help with the TrustEd Apps Generative AI Data Rubric and the AI Preparedness Checklist, and the conversation will continue with members discussing how they are implementing these tools, as well as practice prompts for educators.

In the end, these three themes boil down to one thing: building an ecosystem that works best for all educators and supports learners. And that is only possible by bringing everyone together, from K-12 and higher education educators to edtech suppliers, nonprofits and government organizations, so the solutions we build benefit all.

These conversations and the work will continue at 1EdTech’s 2024 Learning Impact Conference, June 3-6 in Salt Lake City, Utah. Educators and edtech innovators will discuss how they’re addressing these issues, what works and what doesn’t, and consider where we need to go next.

© Image Credit: Drazen Zigic / Shutterstock
