
OpenAI loses another lead safety researcher, Lilian Weng

Another of OpenAI’s lead safety researchers, Lilian Weng, announced on Friday that she is departing the startup. Weng had served as VP of research and safety since August, and before that was the head of OpenAI’s safety systems team. In a post on X, Weng said that “after 7 years at OpenAI, I feel ready […]

© 2024 TechCrunch. All rights reserved. For personal use only.

‘Whatever you want Ben’: Inside Ben Horowitz’s cozy relationship with the Las Vegas Police Department

When Skydio, a young drone maker in San Mateo, California, sent a customer proposal to the Las Vegas Metropolitan Police Department in 2023, the department’s chief of staff, Mike Gennaro, forwarded the email to VC Ben Horowitz. “Which deployment are you looking to do?” Horowitz wrote back. “Whatever you want, Ben,” Gennaro replied, according to […]


Discord terrorist known as “Rabid” gets 30 years for preying on kids

A Michigan man who ran chat rooms and Discord servers targeting children playing online games and coercing them into self-harm, sexually explicit acts, suicide, and other violence was sentenced to 30 years in prison Thursday.

According to the US Department of Justice, Richard Densmore was a member of an online terrorist network called 764, which the FBI considers a "tier one" terrorist threat. He pled guilty to sexual exploitation of a child as "part of a broader indictment that charged him with other child exploitation offenses." In the DOJ's press release, FBI Director Christopher Wray committed to bringing to justice any abusive groups known to be preying on vulnerable kids online.

“This defendant orchestrated a community to target children through online gaming sites and used extortion and blackmail to force his minor victims to record themselves committing acts of self-harm and violence,” Wray said. “If you prey on children online, you can’t hide behind a keyboard. The FBI will use all our resources and authorities to arrest you and hold you accountable.”


Australia looks to ban social media for kids under age 16

Australian Prime Minister Anthony Albanese announced plans on Wednesday to ban social media in the country for children under 16, saying that “social media is doing harm to our kids, and I’m calling time on it.” The proposed legislation will enter parliament this year and take effect a year after lawmakers ratify it, said Albanese, adding […]


AI safety advocates tell founders to slow down

“Move cautiously and red-team things” is sadly not as catchy as “move fast and break things.” But three AI safety advocates made it clear to startup founders that going too fast can lead to ethical issues in the long run. “We are at an inflection point where there are tons of resources being moved into […]


CTGT aims to make AI models safer

Growing up as an immigrant, Cyril Gorlla taught himself how to code — and practiced as if a man possessed. “I aced my mother’s community college programming course at 11, amidst periodically disconnected household utilities,” he told TechCrunch. In high school, Gorlla learned about AI, and became so obsessed with the idea of training his […]


Lawsuit: Chatbot that allegedly caused teen’s suicide is now more dangerous for kids

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI's hyper-realistic chatbots—with a limited version available for free or a "supercharged" version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month, his mother, Megan Garcia, later realized, these chat sessions had turned dark, with chatbots insisting they were real humans, posing as therapists and adult lovers, and seemingly spurring Setzer to develop suicidal thoughts. Within a year, Setzer "died by a self-inflicted gunshot wound to the head," a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.


Lawsuit: City cameras make it impossible to drive anywhere without being tracked

Police use of automated license-plate reader cameras is being challenged in a lawsuit alleging that the cameras enable warrantless surveillance in violation of the Fourth Amendment. The city of Norfolk, Virginia, was sued yesterday by plaintiffs represented by the Institute for Justice, a nonprofit public-interest law firm.

Norfolk, a city with about 238,000 residents, "has installed a network of cameras that make it functionally impossible for people to drive anywhere without having their movements tracked, photographed, and stored in an AI-assisted database that enables the warrantless surveillance of their every move. This civil rights lawsuit seeks to end this dragnet surveillance program," said the complaint filed in US District Court for the Eastern District of Virginia.

Like many other cities, Norfolk uses cameras made by the company Flock Safety. A 404 Media article said Institute for Justice lawyer Robert Frommer "told 404 Media that the lawsuit could have easily been filed in any of the more than 5,000 communities where Flock is active, but that Norfolk made sense because the Fourth Circuit of Appeals—which Norfolk is part of—recently held that persistent, warrantless drone surveillance in Baltimore is unconstitutional under the Fourth Amendment in a case called Beautiful Struggle v Baltimore Police Department."


Qualla Kids Pickup System

Here’s a cool tool that confronts a simple but important question: are you sure who is picking up your kids at dismissal time? It’s a necessity, and for schools it’s a bit of a long-standing problem. The challenge: to address all the factors involved in this process in the simplest, most workable way possible.

  • Families: Parents’ schedules may not match their kids’, so they often have to ask third parties for help.
  • Teachers: To stay agile, they must keep track of last-minute changes arriving via WhatsApp messages, emails, calls, family circumstances, and on and on.
  • Schools: While well-intentioned, they typically do not register these kinds of transactions.

The goal of Qualla is to make the school pickup process as workable, practical, and efficient as possible: more effective, easier, and safer.

So they created an app that addresses all of these factors in a single click. After testing it in the market, the people behind Qualla realized the approach was valid for several other functions, and today they handle processes as complex as canteens, school buses, authorizations, arrivals, pickups, and more, with an extended roadmap ahead.

An agile one-click solution with an easy user interface that requires no learning, combined with a secure interface where each transaction is automatically registered, has allowed Qualla Kids Pickup System to differentiate itself and establish relationships with trusted partners. Since September 2022, they have recorded more than 1.3 million transactions, over 20,000 users, and a satisfaction rate of 98%. For these reasons and more, Qualla earned a Cool Tool Award for Best Communication Solution (Finalist) as part of The EdTech Awards 2024. Learn more

The post Qualla Kids Pickup System appeared first on EdTech Digest.

A New Type of Neural Network Is More Interpretable



Artificial neural networks—algorithms inspired by biological brains—are at the center of modern artificial intelligence, behind both chatbots and image generators. But with their many neurons, they can be black boxes, their inner workings uninterpretable to users.

Researchers have now created a fundamentally new way to make neural networks that in some ways surpasses traditional systems. These new networks are more interpretable and also more accurate, proponents say, even when they’re smaller. Their developers say the way they learn to represent physics data concisely could help scientists uncover new laws of nature.

“It’s great to see that there is a new architecture on the table.” —Brice Ménard, Johns Hopkins University

For the past decade or more, engineers have mostly tweaked neural-network designs through trial and error, says Brice Ménard, a physicist at Johns Hopkins University who studies how neural networks operate but was not involved in the new work, which was posted on arXiv in April. “It’s great to see that there is a new architecture on the table,” he says, especially one designed from first principles.

One way to think of neural networks is by analogy with neurons, or nodes, and synapses, or connections between those nodes. In traditional neural networks, called multi-layer perceptrons (MLPs), each synapse learns a weight—a number that determines how strong the connection is between those two neurons. The neurons are arranged in layers, such that a neuron from one layer takes input signals from the neurons in the previous layer, weighted by the strength of their synaptic connection. Each neuron then applies a simple function to the sum total of its inputs, called an activation function.
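The MLP mechanics described above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular library's implementation; the layer sizes, random weights, and choice of tanh as the activation function are all assumptions for the example.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a multi-layer perceptron.

    Each neuron computes the weighted sum of its inputs from the
    previous layer, then applies a simple activation function
    (tanh here, chosen for illustration).
    """
    h = x
    for W, b in zip(weights, biases):
        h = np.tanh(W @ h + b)  # weighted sum, then activation
    return h

# Illustrative network: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]
y = mlp_forward(np.array([0.5, -1.0, 2.0]), weights, biases)
```

Each entry of a weight matrix is one learned synapse; training consists of adjusting those numbers, while the activation function itself stays fixed.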

In traditional neural networks, sometimes called multi-layer perceptrons [left], each synapse learns a number called a weight, and each neuron applies a simple function to the sum of its inputs. In the new Kolmogorov-Arnold architecture [right], each synapse learns a function, and the neurons sum the outputs of those functions. The NSF Institute for Artificial Intelligence and Fundamental Interactions

In the new architecture, the synapses play a more complex role. Instead of simply learning how strong the connection between two neurons is, they learn the full nature of that connection—the function that maps input to output. Unlike the activation function used by neurons in the traditional architecture, this function could be more complex—in fact a “spline” or combination of several functions—and is different in each instance. Neurons, on the other hand, become simpler—they just sum the outputs of all their preceding synapses. The new networks are called Kolmogorov-Arnold Networks (KANs), after two mathematicians who studied how functions could be combined. The idea is that KANs would provide greater flexibility when learning to represent data, while using fewer learned parameters.
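The structural difference can be made concrete with a toy Kolmogorov-Arnold layer. This is a sketch, not the authors' implementation: real KANs learn B-spline functions on each edge, whereas here each edge's function is a cubic polynomial standing in for a spline, and the coefficient values are arbitrary.

```python
import numpy as np

def kan_layer(x, coeffs):
    """Toy Kolmogorov-Arnold layer.

    coeffs has shape (n_out, n_in, 4): one cubic polynomial per
    edge, standing in for a learned spline. Each output neuron
    simply sums its incoming edge functions; there is no separate
    activation function on the neuron itself.
    """
    n_out, n_in, _ = coeffs.shape
    out = np.zeros(n_out)
    for j in range(n_out):
        for i in range(n_in):
            c = coeffs[j, i]  # per-edge function f_{j,i}
            out[j] += c[0] + c[1]*x[i] + c[2]*x[i]**2 + c[3]*x[i]**3
    return out

rng = np.random.default_rng(1)
x = np.array([0.2, -0.7])
coeffs = rng.normal(size=(3, 2, 4))  # 3 outputs, 2 inputs, cubic per edge
y = kan_layer(x, coeffs)
```

Note the inversion relative to an MLP: the learnable complexity lives on the edges (each edge is a whole function, several parameters rather than one weight), while the neurons are reduced to plain summation.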

“It’s like an alien life that looks at things from a different perspective but is also kind of understandable to humans.” —Ziming Liu, Massachusetts Institute of Technology

The researchers tested their KANs on relatively simple scientific tasks. In some experiments, they took simple physical laws, such as the velocity with which two relativistic-speed objects pass each other. They used these equations to generate input-output data points, then, for each physics function, trained a network on some of the data and tested it on the rest. They found that increasing the size of KANs improves their performance at a faster rate than increasing the size of MLPs did. When solving partial differential equations, a KAN was 100 times as accurate as an MLP that had 100 times as many parameters.

In another experiment, they trained networks to predict one attribute of topological knots, called their signature, based on other attributes of the knots. An MLP achieved 78 percent test accuracy using about 300,000 parameters, while a KAN achieved 81.6 percent test accuracy using only about 200 parameters.

What’s more, the researchers could visually map out the KANs and look at the shapes of the activation functions, as well as the importance of each connection. Either manually or automatically they could prune weak connections and replace some activation functions with simpler ones, like sine or exponential functions. Then they could summarize the entire KAN in an intuitive one-line function (including all the component activation functions), in some cases perfectly reconstructing the physics function that created the dataset.

“In the future, we hope that it can be a useful tool for everyday scientific research,” says Ziming Liu, a computer scientist at the Massachusetts Institute of Technology and the paper’s first author. “Given a dataset we don’t know how to interpret, we just throw it to a KAN, and it can generate some hypothesis for you. You just stare at the brain [the KAN diagram] and you can even perform surgery on that if you want.” You might get a tidy function. “It’s like an alien life that looks at things from a different perspective but is also kind of understandable to humans.”

Dozens of papers have already cited the KAN preprint. “It seemed very exciting the moment that I saw it,” says Alexander Bodner, an undergraduate student of computer science at the University of San Andrés, in Argentina. Within a week, he and three classmates had combined KANs with convolutional neural networks, or CNNs, a popular architecture for processing images. They tested their Convolutional KANs on their ability to categorize handwritten digits or pieces of clothing. The best one approximately matched the performance of a traditional CNN (99 percent accuracy for both networks on digits, 90 percent for both on clothing) but using about 60 percent fewer parameters. The datasets were simple, but Bodner says other teams with more computing power have begun scaling up the networks. Other people are combining KANs with transformers, an architecture popular in large language models.

One downside of KANs is that they take longer per parameter to train—in part because they can’t take advantage of GPUs. But they need fewer parameters. Liu notes that even if KANs don’t replace giant CNNs and transformers for processing images and language, training time won’t be an issue at the smaller scale of many physics problems. He’s looking at ways for experts to insert their prior knowledge into KANs—by manually choosing activation functions, say—and to easily extract knowledge from them using a simple interface. Someday, he says, KANs could help physicists discover high-temperature superconductors or ways to control nuclear fusion.
