
US can’t ban TikTok for security reasons while ignoring Temu, other apps, TikTok argues

Andrew J. Pincus, attorney for TikTok and ByteDance, leaves the E. Barrett Prettyman US Court House with members of his legal team as the US Court of Appeals hears oral arguments in the case TikTok Inc. v. Merrick Garland on September 16 in Washington, DC. (credit: Kevin Dietsch / Staff | Getty Images News)

The fight to keep TikTok operating unchanged in the US reached an appeals court Monday, where TikTok and US-based creators teamed up to defend one of the world's most popular apps from a potential US ban.

TikTok lawyer Andrew Pincus kicked things off by warning a three-judge panel that a law targeting foreign adversaries that requires TikTok to divest from its allegedly China-controlled owner, ByteDance, is "unprecedented" and could have "staggering" effects on "the speech of 170 million Americans."

Pincus argued that the US government was "for the first time in history" attempting to ban speech by a specific US speaker—namely, TikTok US, the US-based entity that allegedly curates the content that Americans see on the app.


Cloud Monitor by ManagedMethods

Cloud Monitor by ManagedMethods is a cloud security solution specifically tailored for technology teams working in the education market. As schools strive to create safe and enriching digital learning environments for students and educators alike, Cloud Monitor stands as the vanguard of cloud security, ensuring data protection and privacy in the ever-evolving edtech space.

At the forefront of innovation, Cloud Monitor harnesses the power of AI-driven technology to provide visibility into and control over cloud applications, detecting and thwarting potential security threats. As education embraces the cloud to foster collaborative and flexible learning experiences, safeguarding sensitive student and financial data is critical. Cloud Monitor empowers districts to fulfill their duty of care, securing data while fostering a climate of trust among students, educators, and parents.

Implementing Cloud Monitor is easy: thanks to its user-friendly design and seamless integration with Google Workspace and Microsoft 365, it requires minimal training. Its automated alerts and remediation capabilities enable swift action against potential breaches, malware attacks, or student safety risks, freeing district technology teams to focus on the million other things on their to-do lists.

With the ever-evolving data privacy landscape, adhering to industry standards and regulations such as COPPA, FERPA, and a variety of state-level regulations is a must. Cloud Monitor assists districts in maintaining compliance, monitoring data, and detecting policy violations.

Cloud Monitor by ManagedMethods is the ultimate guardian of cloud security and safety for K-12 schools, empowering districts to create safe, secure, and compliant cloud learning environments. For these reasons and more, Cloud Monitor by ManagedMethods is a Cool Tool Award Winner for “Best Security (Cybersecurity, Student safety) Solution” as part of The EdTech Awards 2024 from EdTech Digest. Learn more.


An Education Chatbot Company Collapsed. Where Did the Student Data Go?

When Los Angeles Unified School District launched a districtwide AI chatbot nicknamed “Ed” in March, officials boasted that it represented a revolutionary new tool that was only possible thanks to generative AI — a personal assistant that could point each student to tailored resources and assignments and playfully nudge and encourage them to keep going.

But last month, just a few months after the fanfare of the public launch event, the district abruptly shut down its Ed chatbot after the company it contracted to build the system, AllHere Education, suddenly furloughed most of its staff, citing financial difficulties. The company had raised more than $12 million in venture capital, and its five-year contract with the LA district was worth about $6 million, about half of which the company had already been paid.

It’s not yet clear what happened: LAUSD officials declined interview requests from EdSurge, and officials from AllHere did not respond to requests for comment about the company’s future. A statement issued by the school district said “several educational technology companies are interested in acquiring” AllHere to continue its work, though nothing concrete has been announced.

A tech leader for the school district, which is the nation’s second-largest, told the Los Angeles Times that some information in the Ed system is still available to students and families, just not in chatbot form. But it was the chatbot that was touted as the key innovation, and it relied on human moderators at AllHere, who are no longer actively working on the project, to monitor some of its output.

Some edtech experts contacted by EdSurge say that the implosion of the cutting-edge AI tool offers lessons for other schools and colleges working to make use of generative AI. Most of those lessons, they say, center on a factor that is more difficult than many people realize: the challenges of corralling and safeguarding data.

An Ambitious Attempt to Link Systems

When leaders from AllHere gave EdSurge a demo of the Ed chatbot in March, back when the company seemed thriving and had recently been named to a Time magazine list of the “World’s Top Edtech Companies of 2024,” company leaders were most proud of how the chatbot cut across dozens of tech tools that the school system uses.

“The first job of Ed was, how do you create one unified learning space that brings together all the digital tools, and that eliminates the high number of clicks that otherwise the student would need to navigate through them all?” the company’s then-CEO, Joanna Smith-Griffin, said at the time. (The LAUSD statement said she is no longer with the company.)

Such data integration had not previously been a focus of the company, though. The company’s main expertise was making chatbots that were “designed to mimic real conversations, responding with empathy or humor depending on the student's needs in the moment on an individual level,” according to its website.

Michael Feldstein, a longtime edtech consultant, said that from the first time he heard about the Ed chatbot, he saw the project as too ambitious for a small startup to tackle.

“In order to do the kind of work that they were promising, they needed to gather information about students from many IT systems,” he said. “This is the well-known hard part of edtech.”

Feldstein guesses that to make a chatbot that could seamlessly take data from nearly every critical learning resource at a school, as announced at the splashy press conference in March, it could take 10 times the amount AllHere was being paid.

“There’s no evidence that they had experience as system integrators,” he said of AllHere. “It’s not clear that they had the expertise.”

In fact, a former engineer from AllHere reportedly sent emails to leaders in the school district warning that the company was not handling student data according to best practices of privacy protection, according to an article in The 74, the publication that first reported the implosion of AllHere. The official, Chris Whiteley, reportedly told state and district officials that the way the Ed chatbot handled student records put the data at risk of getting hacked. (The school district’s statement defends its privacy practices, saying that: “Throughout the development of the Ed platform, Los Angeles Unified has closely reviewed the platform to ensure compliance with applicable privacy laws and regulations, as well as Los Angeles Unified’s own data security and privacy policies, and AllHere is contractually obligated to do the same.”)

LAUSD’s data systems have recently faced breaches that appear unrelated to the Ed chatbot project. Last month hackers claimed to be selling troves of millions of records from LAUSD on the dark web for $1,000. And hackers behind a breach of Snowflake, a data warehouse provider used by LAUSD, claim to have snatched records of millions of students, including from the district. A more recent breach of Snowflake may have affected LAUSD or other tech companies it works with as well.

“LAUSD maintains an enormous amount of sensitive data. A breach of an integrated data system of LAUSD could affect a staggering number of individuals,” said Doug Levin, co-founder and national director of the K12 Security Information eXchange, in an email interview. He said he is waiting for the district to share more information about what happened. “I am mostly interested in understanding whether any of LAUSD’s edtech vendors were breached and — if so — if other customers of those vendors are at risk,” he said. “This would make it a national issue.”

Meanwhile, what happens to all the student data in the Ed chatbot?

According to the statement released by LAUSD: “Any student data belonging to the District and residing in the Ed platform will continue to be subject to the same privacy and data security protections, regardless of what happens to AllHere as a company.”

A copy of the contract between AllHere and LAUSD, obtained by EdSurge under a public records request, does indicate that all data from the project “will remain the exclusive property of LAUSD.” And the contract contains a provision stating that AllHere “shall delete a student’s covered information upon request of the district.”

Related document: Contract between LAUSD and AllHere Education.

Rob Nelson, executive director for academic technology and planning at the University of Pennsylvania, said the situation does create fresh risks, though.

“Are they taking appropriate technical steps to make sure that data is secure and there won’t be a breach or something intentional by an employee?” Nelson wondered.

Lessons Learned

James Wiley, a vice president at the education market research firm ListEdTech, said he would have advised AllHere to seek a partner with experience wrangling and managing data.

When he saw a copy of the contract between the school district and AllHere, he said his reaction was, “Why did you sign up for this?” He added that “some of the data you would need to do this chatbot isn’t even called out in the contract.”

Wiley said that school officials may not have understood how hard it was to do the kind of data integration they were asking for. “I think a lot of times schools and colleges don’t understand how complex their data structure is,” he added. “And you’re assuming a vendor is going to come in and say, ‘It’s here and here.’” But he said it is never that simple.

“Building the Holy Grail of a data-informed, personalized achievement tool is a big job,” he added. “It’s a noble cause, but you have to realize what you have to do to get there.”

For him, the biggest lesson for other schools and colleges is to take a hard look at their data systems before launching a big AI project.

“It’s a cautionary tale,” he concluded. “AI is not going to be a silver bullet here. You’re still going to have to get your house in order before you bring AI in.”

To Nelson, of the University of Pennsylvania, the larger lesson in this unfolding saga is that it’s too soon in the development of generative AI tools to scale up one idea to a whole school district or college campus.

Instead of one multimillion-dollar bet, he said, “let’s invest $10,000 in five projects that are teacher-based, and then listen to what the teachers have to say about it and learn what these tools are going to do well.”


This $90M Education Research Project Is Banking on Data Privacy to Drive Insights

Digital education platforms generate data on how millions of students are learning, which makes them veritable information gold mines for researchers trying to improve education.

An ethical and legal conundrum stands in the way: how to responsibly share that data without opening students up to the possibility of having their personal information exposed to outside parties.

Now a consortium of education researchers and learning platforms is developing what it hopes is a solution: researchers will never see the actual data.

The project, dubbed SafeInsights and helmed by OpenStax at Rice University, is supported by a five-year, $90 million grant from the National Science Foundation.

The idea is for SafeInsights to serve as a bridge between its learning platform partners and its research partners, with collaborators helping flesh out how the exchange will work to safeguard student privacy.

“In a normal situation, you end up taking data from learning websites and apps and giving it to researchers for them to study and for them to analyze it to learn from,” JP Slavinsky, SafeInsights executive director and OpenStax technical director, says. “Instead, we're taking the researchers’ questions to that data. This creates a safer environment for research that's easier for schools and platforms to participate in, because the data is staying where it is already.”

Deeper Insights on a Large Scale

Another way to think of SafeInsights is as a telescope, say Slavinsky and his colleague Richard Baraniuk, the founder and director of OpenStax, which publishes open access course materials. It will allow researchers to peer into the vast amounts of data from learning platforms like the University of Pennsylvania’s Massive Online Open Courses and Quill.org, for districts that opt in to the platform.

Researchers would develop questions — then transform those questions into computer code that can sift through the data — to be delivered to learning platforms. After the results are generated, they would be returned to researchers without the data ever having to be directly shared.

“It is really a partnership where we have researchers coming together with schools and platforms, and we're jointly trying to solve some problems of interest,” Slavinsky says. “We are providing that telescope for others to bring their research agenda and the questions they want to answer. So we're less involved on what specifically is going to be asked and more on making as many questions as possible answerable.”
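The flow Slavinsky describes, sending the researcher's question to the data rather than the data to the researcher, can be pictured with a short sketch. The Python below is only an illustration under our own assumptions; the names (Platform, run_query, avg_quiz_score, min_group_size) are hypothetical and do not reflect SafeInsights' actual system. The point is the boundary: raw records stay with the platform, and only an aggregate summary crosses back to the researcher.

```python
# Minimal sketch of the "questions travel to the data" model described above.
# All names here are hypothetical illustrations, not SafeInsights' actual API.
from statistics import mean


class Platform:
    """A learning platform that keeps its raw student records in-house."""

    def __init__(self, records):
        self._records = records  # raw rows never leave this object

    def run_query(self, aggregate_fn, min_group_size=20):
        """Run a researcher-supplied aggregate over local data and return only
        the summary, refusing groups too small to release safely."""
        if len(self._records) < min_group_size:
            raise ValueError("group too small to release safely")
        return aggregate_fn(self._records)


# Researcher side: the "question" is shipped as code, not the data.
def avg_quiz_score(records):
    return {"n": len(records), "mean_score": mean(r["quiz_score"] for r in records)}


platform = Platform([{"quiz_score": s} for s in (72, 88, 65, 91, 79) * 5])
print(platform.run_query(avg_quiz_score))  # only the aggregate crosses the boundary
```

A real enclave would layer on ethical review, researcher training, and auditing, but the core property sketched here is the one Slavinsky emphasizes: the data stays where it already is.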

Part of why this model would be so powerful is how it would increase the scale at which education research is done, Baraniuk says. Plenty of studies, he explains, rely on small samples of about 50 college students who participate as part of a psychology class.

“A lot of the studies are about freshman college kids, right? Well, that's not representative of the huge breadth of different students,” Baraniuk says. “The only way you're gonna be able to see that breadth is by doing large studies, so really the first key behind SafeInsights is partnering with these digital education websites and apps who host literally millions of students every day.”

Another aspect where he sees the project opening new doors for researchers is the diversity of the student populations represented by the learning platform partners, which include education apps for reading, writing and science along with learning management systems.

“By putting together all of these puzzle pieces, the idea is that we can — at a very large scale — get to see a more complete picture of these students,” Baraniuk says. “The big goal of ours is to try to remove as much friction as possible so that more useful research can happen, and then more research-backed pedagogies and teaching techniques can actually get applied. But while removing that friction, how do we keep everything really safeguarded?”

Creating Trust, Protecting Privacy

Before any research takes place, SafeInsights partners at the Future of Privacy Forum are helping develop the policies that will shape how the program guards students’ data.

John Verdi, the Future of Privacy Forum’s senior vice president for policy, says the goal is to have privacy protections baked into how everything operates. Part of that is helping to develop what he calls the “data enclave,” or the process by which researchers can query a learning platform’s data without having direct access. Other aspects include helping develop the review process for how research projects are selected, training researchers on privacy and publishing lessons learned about operating with privacy at the forefront.

“Even if you have great technical safeguards in place, even if you do great ethical vetting,” he says about the training aspect, “at the end of the day, researchers themselves have decisions to make about how to responsibly use the system. They need to understand how the system works.”

The protection of student data privacy in education is generally “woefully under-funded,” he says, but it is safeguarding that information that allows students to trust learning platforms, and that ultimately creates research opportunities like SafeInsights.

“Tasking students and parents to protect data is the wrong place to put that responsibility,” Verdi says. “Instead, what we need to do is build digital infrastructure that is privacy respectful by default, and [that] provides assurances that information will be kept confidential and used ethically.”


9 out of 10 Dutch people take steps to protect their personal data online


The further we digitize, the more data we have online. Last year, 89% of Dutch people aged 12 or older said they take measures to protect that data. They take various kinds of steps to protect it, such as limiting access to profile information on social media and checking whether websites are secure.

The share of the Dutch population taking measures to protect their online data rose slightly last year: it stood at 89% in 2023, up from 82% in 2021, according to figures from CBS (Statistics Netherlands). Dutch people are now especially more likely to limit access to profile information: 70.3% do so now, compared to 54.8% in 2021.

The most common measure, however, is still limiting access to location data, which 77.9% now do. The least common measure is having data deleted: only 14.2% of Dutch people did so in 2023.

Chart: Measures taken to protect personal data on the internet (source: CBS)

People aged 75 and over protect their data the least

The CBS figures show considerable differences between age groups when it comes to data protection. Among Dutch people between 12 and 65 years old, 93% actively protect their personal information. The group between 18 and 25 in particular does a lot for data protection: in that group the figure is as high as 95.8%.

Among older people, however, the picture is quite different. Of Dutch people between 65 and 75, 84.8% actively protect their data. Among those aged 75 and over, the figure is only 60%. This may, of course, also be because older people spend less time on the internet than younger people in the first place and therefore have less personal data online.

Netherlands leads the way in data protection

Overall, the Netherlands is the frontrunner in protecting personal data on the internet. On average, only 67% of Europeans take measures to protect their data. At the same time, the Netherlands also has the largest share of people with internet access in Europe, at 81%.


