OpenAI Scored a Legal Win Over Progressive Publishers—but the Fight’s Not Finished
New AI Tools Are Promoted as Study Aids for Students. Are They Doing More Harm Than Good?
Once upon a time, educators worried about the dangers of CliffsNotes — study guides that rendered great works of literature as a series of bullet points that many students used as a replacement for actually doing the reading.
Today, that sure seems quaint.
Suddenly, new consumer AI tools have hit the market that can take any piece of text, audio or video and provide that same kind of simplified summary. And those summaries are no longer just quippy bullet points. These days students can have tools like Google’s NotebookLM turn their lecture notes into a podcast, where sunny-sounding AI bots banter and riff on key points. Most of the tools are free, and do their work in seconds with the click of a button.
Naturally, all this is causing concern among some educators, who see students off-loading the hard work of synthesizing information to AI at a pace never before possible.
But the overall picture is more complicated, especially as these tools become more mainstream and their use starts to become standard in business and other contexts beyond the classroom.
And the tools serve as a particular lifeline for neurodivergent students, who suddenly have access to services that can help them get organized and support their reading comprehension, teaching experts say.
“There’s no universal answer,” says Alexis Peirce Caudell, a lecturer in informatics at Indiana University at Bloomington who recently did an assignment where many students shared their experience and concerns about AI tools. “Students in biology are going to be using it in one way, chemistry students are going to be using it in another. My students are all using it in different ways.”
It’s not as simple as assuming that students are all cheaters, the instructor stresses.
“Some students were concerned about pressure to engage with tools — if all of their peers were doing it that they should be doing it even if they felt it was getting in the way of their authentically learning,” she says. They are asking themselves questions like, “Is this helping me get through this specific assignment or this specific test because I’m trying to navigate five classes and applications for internships” — but at the cost of learning?
It all adds new challenges for schools and colleges as they attempt to set boundaries and policies for AI use in their classrooms.
Need for ‘Friction’
It seems like just about every week — or even every day — tech companies announce new features that students are adopting in their studies.
Just last week, for instance, Apple released Apple Intelligence features for iPhones, and one of the features can recraft any piece of text to different tones, such as casual or professional. And last month ChatGPT-maker OpenAI released a feature called Canvas that includes slider bars for users to instantly change the reading level of a text.
Marc Watkins, a lecturer of writing and rhetoric at the University of Mississippi, says he is worried that students are lured by the time-saving promises of these tools and may not realize that using them can mean skipping the actual work it takes to internalize and remember the material.
“From a teaching, learning standpoint, that's pretty concerning to me,” he says. “Because we want our students to struggle a little bit, to have a little bit of friction, because that's important for their learning.”
And he says new features are making it harder for teachers to encourage students to use AI in helpful ways — like teaching them how to craft prompts to change the writing level of something: “It removes that last level of desirable difficulty when they can just button mash and get a final draft and get feedback on the final draft, too.”
Even professors and colleges that have adopted AI policies may need to rethink them in light of these new types of capabilities.
As two professors put it in a recent op-ed, “Your AI Policy Is Already Obsolete.”
“A student who reads an article you uploaded, but who cannot remember a key point, uses the AI assistant to summarize or remind them where they read something. Has this person used AI when there was a ban in the class?” ask the authors, Zach Justus, director of faculty development at California State University, Chico, and Nik Janos, a professor of sociology there. They note that popular tools like Adobe Acrobat now have “AI assistant” features that can summarize documents with the push of a button. “Even when we are evaluating our colleagues in tenure and promotion files,” the professors write, “do you need to promise not to hit the button when you are plowing through hundreds of pages of student evaluations of teaching?”
Instead of drafting and redrafting AI policies, the professors argue that educators should work out broad frameworks for what is acceptable help from chatbots.
But Watkins calls on the makers of AI tools to do more to mitigate the misuse of their systems in academic settings, or as he put it when EdSurge talked with him, “to make sure that this tool that is being used so prominently by students [is] actually effective for their learning and not just as a tool to offload it.”
Uneven Accuracy
These new AI tools raise a host of new challenges beyond those at play when printed CliffsNotes were the study tool du jour.
One is that AI summarizing tools don’t always provide accurate information, due to a phenomenon of large language models known as “hallucinations,” when chatbots guess at facts but present them to users as sure things.
When Bonni Stachowiak first tried the podcast feature on Google’s NotebookLM, for instance, she said she was blown away by how lifelike the robot voices sounded and how well they seemed to summarize the documents she fed it. Stachowiak is the host of the long-running podcast, Teaching in Higher Ed, and dean of teaching and learning at Vanguard University of Southern California, and she regularly experiments with new AI tools in her teaching.
But as she tried the tool more, and put in documents on complex subjects that she knew well, she noticed occasional errors or misunderstandings. “It just flattens it — it misses all of this nuance,” she says. “It sounds so intimate because it’s a voice and audio is such an intimate medium. But as soon as it was something that you knew a lot about it’s going to fall flat.”
Even so, she says she has found the podcasting feature of NotebookLM useful in helping her understand and communicate bureaucratic issues at her university — such as turning part of the faculty handbook into a podcast summary. When she checked it with colleagues who knew the policies well, she says they felt it did a “perfectly good job.” “It is very good at making two-dimensional bureaucracy more approachable,” she says.
Peirce Caudell, of Indiana University, says her students have raised ethical issues with using AI tools as well.
“Some say they’re really concerned about the environmental costs of generative AI and the usage,” she says, noting that ChatGPT and other AI models require large amounts of computing power and electricity.
Others, she adds, worry about how much data users end up giving AI companies, especially when students use free versions of the tools.
“We're not having that conversation,” she says. “We're not having conversations about what does it mean to actively resist the use of generative AI?”
Even so, the instructor is seeing positive impacts for students, such as when they use a tool to help make flashcards to study.
And she heard about a student with ADHD who had always found reading a large text “overwhelming,” but was using ChatGPT “to get over the hurdle of that initial engagement with the reading and then they were checking their understanding with the use of ChatGPT.”
And Stachowiak says she has heard of other AI tools that students with intellectual disabilities are using, such as one that helps users break down large tasks into smaller, more manageable sub-tasks.
“This is not cheating,” she stresses. “It’s breaking things down and estimating how long something is going to take. That is not something that comes naturally for a lot of people.”
Recap: Our “AI in DC” conference was great—here’s what you missed
Ars Technica descended in force last week upon our nation's capital, setting up shop in the International Spy Museum for a three-panel discussion on artificial intelligence, infrastructure, security, and how compliance with policy changes over the next decade or so might shape the future of business computing in all its forms. Much like our San Jose event last month, the venue was packed to the rafters with Ars readers eager for knowledge (and perhaps some free drinks, which is definitely why I was there!). A bit over 200 people were eventually herded into one of the conference spaces in the venue's upper floors, and Ars Editor-in-Chief Ken Fisher hopped on stage to kick things off.
"Today's event about privacy, compliance, and making infrastructure smarter, I think, could not be more perfectly timed," said Fisher. "I don't know about your orgs, but I know Ars Technica and our parent company, Condé Nast, are currently thinking about generative AI and how it touches almost every aspect or could touch almost every aspect of our business."
Fisher continued: "I think the media talks about how [generative AI] is going to maybe write news and take over content, but the reality is that generative AI has a lot of potential to help us in finance, to help us with opex, to help us with planning—to help us with pretty much every aspect of our business and in our business. And from what I'm reading online, many folks are starting to have this dream that generative AI is going to lead them into a world where they can replace a lot of SaaS services where they can make a pivot to first-party data."
Can Language Models Really Understand? Study Uncovers Limits in AI Logic
Why AI could eat quantum computing’s lunch
Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that they’ll be a game changer for fields as diverse as finance, drug discovery, and logistics.
Those expectations have been especially high in physics and chemistry, where the weird effects of quantum mechanics come into play. In theory, this is where quantum computers could have a huge advantage over conventional machines.
But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computing’s purported home turf might not be so safe after all.
The scale and complexity of quantum systems that can be simulated using AI is advancing rapidly, says Giuseppe Carleo, a professor of computational physics at the Swiss Federal Institute of Technology (EPFL). Last month, he coauthored a paper published in Science showing that neural-network-based approaches are rapidly becoming the leading technique for modeling materials with strong quantum properties. Meta also recently unveiled an AI model trained on a massive new data set of materials that has jumped to the top of a leaderboard for machine-learning approaches to material discovery.
Given the pace of recent advances, a growing number of researchers are now asking whether AI could solve a substantial chunk of the most interesting problems in chemistry and materials science before large-scale quantum computers become a reality.
“The existence of these new contenders in machine learning is a serious hit to the potential applications of quantum computers,” says Carleo. “In my opinion, these companies will find out sooner or later that their investments are not justified.”
Exponential problems
The promise of quantum computers lies in their potential to carry out certain calculations much faster than conventional computers. Realizing this promise will require much larger quantum processors than we have today. The biggest devices have just crossed the thousand-qubit mark, but achieving an undeniable advantage over classical computers will likely require tens of thousands, if not millions. Once that hardware is available, though, a handful of quantum algorithms, like the encryption-cracking Shor’s algorithm, have the potential to solve problems exponentially faster than classical algorithms can.
But for many quantum algorithms with more obvious commercial applications, like searching databases, solving optimization problems, or powering AI, the speed advantage is more modest. And last year, a paper coauthored by Microsoft’s head of quantum computing, Matthias Troyer, showed that these theoretical advantages disappear if you account for the fact that quantum hardware operates orders of magnitude slower than modern computer chips. The difficulty of getting large amounts of classical data in and out of a quantum computer is also a major barrier.
So Troyer and his colleagues concluded that quantum computers should instead focus on problems in chemistry and materials science that require simulation of systems where quantum effects dominate. A computer that operates along the same quantum principles as these systems should, in theory, have a natural advantage here. In fact, this has been a driving idea behind quantum computing ever since the renowned physicist Richard Feynman first proposed the idea.
The rules of quantum mechanics govern many things with huge practical and commercial value, like proteins, drugs, and materials. Their properties are determined by the interactions of their constituent particles, in particular their electrons—and simulating these interactions in a computer should make it possible to predict what kinds of characteristics a molecule will exhibit. This could prove invaluable for discovering things like new medicines or more efficient battery chemistries, for example.
But the intuition-defying rules of quantum mechanics—in particular, the phenomenon of entanglement, which allows the quantum states of distant particles to become intrinsically linked—can make these interactions incredibly complex. Precisely tracking them requires complicated math that gets exponentially tougher the more particles are involved. That can make simulating large quantum systems intractable on classical machines.
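To make that growth concrete: describing n two-level particles exactly requires 2^n complex amplitudes. The short Python calculation below (an illustration of the scaling, not anything from the research discussed here) shows how fast full state storage outruns classical memory:

```python
# Storing a quantum state exactly: n two-level particles need 2**n complex
# amplitudes, at roughly 16 bytes per double-precision complex number.
BYTES_PER_AMPLITUDE = 16

for n in (10, 30, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n} particles: {amplitudes:.2e} amplitudes, {gigabytes:.2e} GB")

# 10 particles fit in kilobytes; 30 need about 17 GB; 50 need roughly
# 1.8e7 GB (18 petabytes), which is why exact classical simulation stalls.
```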
This is where quantum computers could shine. Because they also operate on quantum principles, they are able to represent quantum states much more efficiently than is possible on classical machines. They could also take advantage of quantum effects to speed up their calculations.
But not all quantum systems are the same. Their complexity is determined by the extent to which their particles interact, or correlate, with each other. In systems where these interactions are strong, tracking all these relationships can quickly explode the number of calculations required to model the system. But in most systems of practical interest to chemists and materials scientists, correlation is weak, says Carleo. That means their particles don’t affect each other’s behavior significantly, which makes the systems far simpler to model.
The upshot, says Carleo, is that quantum computers are unlikely to provide any advantage for most problems in chemistry and materials science. Classical tools that can accurately model weakly correlated systems already exist, the most prominent being density functional theory (DFT). The insight behind DFT is that all you need to understand a system’s key properties is its electron density, a measure of how its electrons are distributed in space. This makes for much simpler computation but can still provide accurate results for weakly correlated systems.
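For a sense of what DFT looks like in practice, here is a minimal sketch using the open-source PySCF package (assuming `pip install pyscf`; the molecule and functional are arbitrary choices for illustration, not tied to any system discussed here):

```python
# Minimal Kohn-Sham DFT run in PySCF: the ground-state energy of H2 is
# computed from its electron density rather than the full wave function.
from pyscf import gto, dft

mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")  # H2, bond in angstrom
mf = dft.RKS(mol)      # restricted Kohn-Sham DFT
mf.xc = "pbe"          # choice of exchange-correlation functional
energy = mf.kernel()   # run the self-consistent field loop
print("H2 total energy (hartree):", energy)
```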
Simulating large systems using these approaches requires considerable computing power. But in recent years there’s been an explosion of research using DFT to generate data on chemicals, biomolecules, and materials—data that can be used to train neural networks. These AI models learn patterns in the data that allow them to predict what properties a particular chemical structure is likely to have, but they are orders of magnitude cheaper to run than conventional DFT calculations.
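The surrogate-model pattern itself is straightforward: fit once on expensive reference calculations, then predict cheaply. The sketch below is hypothetical, with random numbers standing in for structural descriptors and DFT energies, but it shows the shape of the workflow:

```python
# Hypothetical surrogate: learn descriptor -> energy from "DFT" reference data.
# Real pipelines use physically meaningful descriptors and actual DFT outputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 64))                            # stand-in descriptors
y = X @ rng.normal(size=64) + 0.1 * rng.normal(size=5000)  # stand-in energies

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
model.fit(X_train, y_train)

print("held-out R^2:", model.score(X_test, y_test))
# model.predict() now costs microseconds per structure, versus hours for DFT.
```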
This has dramatically expanded the size of systems that can be modeled—to as many as 100,000 atoms at a time—and how long simulations can run, says Alexandre Tkatchenko, a physics professor at the University of Luxembourg. “It’s wonderful. You can really do most of chemistry,” he says.
Olexandr Isayev, a chemistry professor at Carnegie Mellon University, says these techniques are already being widely applied by companies in chemistry and life sciences. And for researchers, previously out-of-reach problems such as optimizing chemical reactions, developing new battery materials, and understanding protein binding are finally becoming tractable.
As with most AI applications, the biggest bottleneck is data, says Isayev. Meta’s recently released materials data set was made up of DFT calculations on 118 million molecules. A model trained on this data achieved state-of-the-art performance, but creating the training material took vast computing resources, well beyond what’s accessible to most research teams. That means fulfilling the full promise of this approach will require massive investment.
Modeling a weakly correlated system using DFT is not an exponentially scaling problem, though. This suggests that with more data and computing resources, AI-based classical approaches could simulate even the largest of these systems, says Tkatchenko. Given that quantum computers powerful enough to compete are likely still decades away, he adds, AI’s current trajectory suggests it could reach important milestones, such as precisely simulating how drugs bind to a protein, much sooner.
Strong correlations
When it comes to simulating strongly correlated quantum systems—ones whose particles interact a lot—methods like DFT quickly run out of steam. While more exotic, these systems include materials with potentially transformative capabilities, like high-temperature superconductivity or ultra-precise sensing. But even here, AI is making significant strides.
In 2017, EPFL’s Carleo and Microsoft’s Troyer published a seminal paper in Science showing that neural networks could model strongly correlated quantum systems. The approach doesn’t learn from data in the classical sense. Instead, Carleo says, it is similar to DeepMind’s AlphaZero model, which mastered the games of Go, chess, and shogi using nothing more than the rules of each game and the ability to play itself.
In this case, the rules of the game are provided by Schrödinger’s equation, which can precisely describe a system’s quantum state, or wave function. The model plays against itself by arranging particles in a certain configuration and then measuring the system’s energy level. The goal is to reach the lowest energy configuration (known as the ground state), which determines the system’s properties. The model repeats this process until energy levels stop falling, indicating that the ground state—or something close to it—has been reached.
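As a toy version of that loop (my own sketch in PyTorch, not the authors' code), the snippet below trains a small network to represent the ground state of an 8-spin transverse-field Ising chain. At this size every configuration can be enumerated exactly; real neural quantum states instead sample configurations with Monte Carlo:

```python
# Variational ground-state search: a neural net assigns each spin
# configuration a log-amplitude, and gradient descent minimizes the energy.
import itertools
import torch

N, J, h = 8, 1.0, 1.0  # number of spins, Ising coupling, transverse field

# All 2**N configurations as +/-1 vectors (exact enumeration: toy scale only).
configs = torch.tensor(list(itertools.product([-1.0, 1.0], repeat=N)))

net = torch.nn.Sequential(
    torch.nn.Linear(N, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

# Index of each configuration's single-spin-flip partners (for the h term).
lookup = {tuple(c.tolist()): i for i, c in enumerate(configs)}
flip_idx = torch.empty(2 ** N, N, dtype=torch.long)
for i, c in enumerate(configs):
    for j in range(N):
        f = c.clone()
        f[j] = -f[j]
        flip_idx[i, j] = lookup[tuple(f.tolist())]

# Diagonal Ising energy -J * sum_i s_i * s_{i+1} for an open chain.
e_zz = -J * (configs[:, :-1] * configs[:, 1:]).sum(dim=1)

opt = torch.optim.Adam(net.parameters(), lr=0.01)
for step in range(500):
    log_psi = net(configs).squeeze(-1)
    psi = torch.exp(log_psi - log_psi.max())           # rescale for stability
    h_psi = e_zz * psi - h * psi[flip_idx].sum(dim=1)  # (H psi): diagonal + flips
    energy = (psi * h_psi).sum() / (psi * psi).sum()   # Rayleigh quotient <H>
    opt.zero_grad()
    energy.backward()
    opt.step()

print("variational ground-state energy:", energy.item())
```

Because the energy estimate is a Rayleigh quotient, it is bounded below by the true ground-state energy, which is what makes "energy stops falling" a usable stopping rule.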
The power of these models is their ability to compress information, says Carleo. “The wave function is a very complicated mathematical object,” he says. “What has been shown by several papers now is that [the neural network] is able to capture the complexity of this object in a way that can be handled by a classical machine.”
Since the 2017 paper, the approach has been extended to a wide range of strongly correlated systems, says Carleo, and results have been impressive. The Science paper he published with colleagues last month put leading classical simulation techniques to the test on a variety of tricky quantum simulation problems, with the goal of creating a benchmark to judge advances in both classical and quantum approaches.
Carleo says that neural-network-based techniques are now the best approach for simulating many of the most complex quantum systems they tested. “Machine learning is really taking the lead in many of these problems,” he says.
These techniques are catching the eye of some big players in the tech industry. In August, researchers at DeepMind showed in a paper in Science that they could accurately model excited states in quantum systems, which could one day help predict the behavior of things like solar cells, sensors, and lasers. Scientists at Microsoft Research have also developed an open-source software suite to help more researchers use neural networks for simulation.
One of the main advantages of the approach is that it piggybacks on massive investments in AI software and hardware, says Filippo Vicentini, a professor of AI and condensed-matter physics at École Polytechnique in France, who was also a coauthor on the Science benchmarking paper: “Being able to leverage these kinds of technological advancements gives us a huge edge.”
There is a caveat: Because the ground states are effectively found through trial and error rather than explicit calculations, they are only approximations. But this is also why the approach could make progress on what has looked like an intractable problem, says Juan Carrasquilla, a researcher at ETH Zurich, and another coauthor on the Science benchmarking paper.
If you want to precisely track all the interactions in a strongly correlated system, the number of calculations you need to do rises exponentially with the system’s size. But if you’re happy with an answer that is just good enough, there’s plenty of scope for taking shortcuts.
“Perhaps there’s no hope to capture it exactly,” says Carrasquilla. “But there’s hope to capture enough information that we capture all the aspects that physicists care about. And if we do that, it’s basically indistinguishable from a true solution.”
And while strongly correlated systems are generally too hard to simulate classically, there are notable instances where this isn’t the case. That includes some systems that are relevant for modeling high-temperature superconductors, according to a 2023 paper in Nature Communications.
“Because of the exponential complexity, you can always find problems for which you can’t find a shortcut,” says Frank Noe, research manager at Microsoft Research, who has led much of the company’s work in this area. “But I think the number of systems for which you can’t find a good shortcut will just become much smaller.”
No magic bullets
However, Stefanie Czischek, an assistant professor of physics at the University of Ottawa, says it can be hard to predict what problems neural networks can feasibly solve. For some complex systems they do incredibly well, but then on other seemingly simple ones, computational costs balloon unexpectedly. “We don’t really know their limitations,” she says. “No one really knows yet what are the conditions that make it hard to represent systems using these neural networks.”
Meanwhile, there have also been significant advances in other classical quantum simulation techniques, says Antoine Georges, director of the Center for Computational Quantum Physics at the Flatiron Institute in New York, who also contributed to the recent Science benchmarking paper. “They are all successful in their own right, and they are also very complementary,” he says. “So I don’t think these machine-learning methods are just going to completely put all the other methods out of business.”
Quantum computers will also have their niche, says Martin Roetteler, senior director of quantum solutions at IonQ, which is developing quantum computers built from trapped ions. While he agrees that classical approaches will likely be sufficient for simulating weakly correlated systems, he’s confident that some large, strongly correlated systems will be beyond their reach. “The exponential is going to bite you,” he says. “There are cases with strongly correlated systems that we cannot treat classically. I’m strongly convinced that that’s the case.”
In contrast, he says, a future fault-tolerant quantum computer with many more qubits than today’s devices will be able to simulate such systems. This could help find new catalysts or improve understanding of metabolic processes in the body—an area of interest to the pharmaceutical industry.
Neural networks are likely to increase the scope of problems that can be solved, says Jay Gambetta, who leads IBM’s quantum computing efforts, but he’s unconvinced they’ll solve the hardest challenges businesses are interested in.
“That’s why many different companies that essentially have chemistry as their requirement are still investigating quantum—because they know exactly where these approximation methods break down,” he says.
Gambetta also rejects the idea that the technologies are rivals. He says the future of computing is likely to involve a hybrid of the two approaches, with quantum and classical subroutines working together to solve problems. “I don’t think they’re in competition. I think they actually add to each other,” he says.
But Scott Aaronson, who directs the Quantum Information Center at the University of Texas, says machine-learning approaches are directly competing against quantum computers in areas like quantum chemistry and condensed-matter physics. He predicts that a combination of machine learning and quantum simulations will outperform purely classical approaches in many cases, but that won’t become clear until larger, more reliable quantum computers are available.
“From the very beginning, I’ve treated quantum computing as first and foremost a scientific quest, with any industrial applications as icing on the cake,” he says. “So if quantum simulation turns out to beat classical machine learning only rarely, I won’t be quite as crestfallen as some of my colleagues.”
One area where quantum computers look likely to have a clear advantage is in simulating how complex quantum systems evolve over time, says EPFL’s Carleo. This could provide invaluable insights for scientists in fields like statistical mechanics and high-energy physics, but it seems unlikely to lead to practical uses in the near term. “These are more niche applications that, in my opinion, do not justify the massive investments and the massive hype,” Carleo adds.
Nonetheless, the experts MIT Technology Review spoke to said a lack of commercial applications is not a reason to stop pursuing quantum computing, which could lead to fundamental scientific breakthroughs in the long run.
“Science is like a set of nested boxes—you solve one problem and you find five other problems,” says Vicentini. “The complexity of the things we study will increase over time, so we will always need more powerful tools.”
Even Microsoft Notepad is getting artificial intelligence
The post Even Microsoft Notepad is getting artificial intelligence appeared first on DutchCowboys.
Perplexity Dove Into Real-Time Election Tracking While Other AI Companies Held Back
Amazon tells you what happened in your show after you fell asleep
The post Amazon tells you what happened in your show after you fell asleep appeared first on DutchCowboys.
This Is a Glimpse of the Future of AI Robots
OpenAI brings a new web search tool to ChatGPT
ChatGPT can now search the web for up-to-date answers to a user’s queries, OpenAI announced today.
Until now, ChatGPT was mostly restricted to generating answers from its training data, which is current up to October 2023 for GPT-4o, and had limited web search capabilities. Searches about generalized topics will still draw on this information from the model itself, but now ChatGPT will automatically search the web in response to queries about recent information such as sports, stocks, or news of the day, and can deliver rich multi-media results. Users can also manually trigger a web search, but for the most part, the chatbot will make its own decision about when an answer would benefit from information taken from the web, says Adam Fry, OpenAI’s product lead for search.
“Our goal is to make ChatGPT the smartest assistant, and now we’re really enhancing its capabilities in terms of what it has access to from the web,” Fry tells MIT Technology Review. The feature is available today for the chatbot’s paying users.
While ChatGPT search, as it is known, is initially available to paying customers, OpenAI intends to make it available for free later, even when people are logged out. The company also plans to combine search with its voice features and Canvas, its interactive platform for coding and writing, although these capabilities will not be available in today’s initial launch.
The company unveiled a standalone prototype of web search in July. Those capabilities are now built directly into the chatbot. OpenAI says it has “brought the best of the SearchGPT experience into ChatGPT.”
OpenAI is the latest tech company to debut an AI-powered search assistant, challenging similar tools from competitors such as Google, Microsoft, and startup Perplexity. Meta, too, is reportedly developing its own AI search engine. As with Perplexity’s interface, users of ChatGPT search can interact with the chatbot in natural language, and it will offer an AI-generated answer with sources and links to further reading. In contrast, Google’s AI Overviews offer a short AI-generated summary at the top of the search results page, as well as a traditional list of indexed links.
These new tools could eventually challenge Google’s 90% market share in online search. AI search is a very important way to draw more users, says Chirag Shah, a professor at the University of Washington, who specializes in online search. But he says it is unlikely to chip away at Google’s search dominance. Microsoft’s high-profile attempt with Bing barely made a dent in the market, Shah says.
Instead, OpenAI is trying to create a new market for more powerful and interactive AI agents, which can take complex actions in the real world, Shah says.
The new search function in ChatGPT is a step toward these agents.
It can also deliver highly contextualized responses that take advantage of chat histories, allowing users to go deeper in a search. Currently, ChatGPT search is able to recall conversation histories and continue the conversation with questions on the same topic.
ChatGPT itself can also remember things about users that it can use later — sometimes it does this automatically, or you can ask it to remember something. Those “long-term” memories affect how it responds to chats. Search doesn’t have this yet — a new web search starts from scratch — but it should get this capability in the “next couple of quarters,” says Fry. When it does, OpenAI says, it will be able to deliver far more personalized results based on what it knows.
“Those might be persistent memories, like ‘I’m a vegetarian,’ or it might be contextual, like ‘I’m going to New York in the next few days,’” says Fry. “If you say ‘I’m going to New York in four days,’ it can remember that fact and the nuance of that point,” he adds.
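OpenAI has not detailed how this will work internally. As a purely hypothetical illustration of the behavior Fry describes, a memory layer might fold remembered facts into future searches something like this:

```python
# Hypothetical memory store: remembered facts get folded into later searches.
# This illustrates the described behavior only; it is not OpenAI's design.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def personalize(self, query: str) -> str:
        # Prepend known user context so the search layer can account for it.
        context = "; ".join(self.facts)
        return f"[user context: {context}] {query}" if context else query

memory = MemoryStore()
memory.remember("I'm a vegetarian")
memory.remember("I'm going to New York in four days")
print(memory.personalize("find me good restaurants"))
```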
To help develop ChatGPT’s web search, OpenAI says it leveraged its partnerships with news organizations such as Reuters, the Atlantic, Le Monde, the Financial Times, Axel Springer, Condé Nast, and Time. However, its results include information not only from these publishers, but any other source online that does not actively block its search crawler.
It’s a positive development that ChatGPT will now be able to retrieve information from these reputable online sources and generate answers based on them, says Suzan Verberne, a professor of natural-language processing at Leiden University, who has studied information retrieval. It also allows users to ask follow-up questions.
But despite the enhanced ability to search the web and cross-check sources, the tool is not immune from the persistent tendency of AI language models to make things up or get it wrong. When MIT Technology Review tested the new search function and asked it for vacation destination ideas, ChatGPT suggested “luxury European destinations” such as Japan, Dubai, the Caribbean islands, Bali, the Seychelles, and Thailand. It offered as a source an article from the Times, a British newspaper, which listed these locations as well as those in Europe as luxury holiday options.
“Especially when you ask about untrue facts or events that never happened, the engine might still try to formulate a plausible response that is not necessarily correct,” says Verberne. There is also a risk that misinformation might seep into ChatGPT’s answers from the internet if the company has not filtered its sources well enough, she adds.
Another risk is that the current push to access the web through AI search will disrupt the internet’s digital economy, argues Benjamin Brooks, a fellow at Harvard University’s Berkman Klein Center, who previously led public policy for Stability AI, in an op-ed published by MIT Technology Review today.
“By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and ‘eyeballs’ they need to survive,” Brooks writes.
Chasing AI’s value in life sciences
Inspired by an unprecedented opportunity, the life sciences sector has gone all in on AI. For example, in 2023, Pfizer introduced an internal generative AI platform expected to deliver $750 million to $1 billion in value. And Moderna partnered with OpenAI in April 2024, scaling its AI efforts to deploy ChatGPT Enterprise, embedding the tool’s capabilities across business functions from legal to research.
In drug development, German pharmaceutical company Merck KGaA has partnered with several AI companies for drug discovery and development. And Exscientia, a pioneer in using AI in drug discovery, is taking more steps toward integrating generative AI drug design with robotic lab automation in collaboration with Amazon Web Services (AWS).
Given rising competition, higher customer expectations, and growing regulatory challenges, these investments are crucial. But to maximize their value, leaders must carefully consider how to balance the key factors of scope, scale, speed, and human-AI collaboration.
The early promise of connecting data
The common refrain from data leaders across all industries—but specifically from those within data-rich life sciences organizations—is “I have vast amounts of data all over my organization, but the people who need it can’t find it,” says Dan Sheeran, general manager of health care and life sciences for AWS. And in a complex healthcare ecosystem, data can come from multiple sources including hospitals, pharmacies, insurers, and patients.
“Addressing this challenge,” says Sheeran, “means applying metadata to all existing data and then creating tools to find it, mimicking the ease of a search engine. Until generative AI came along, though, creating that metadata was extremely time consuming.”
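As an illustration of that pattern, the hedged sketch below uses the OpenAI Python SDK to generate search tags for a document and file them in a rudimentary index. The model name, prompt, and document are placeholders, not AWS's actual pipeline:

```python
# Sketch: use a generative model to produce metadata, then index it for search.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_metadata(document: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Return five short search tags for this document."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

# A toy "search engine": map document IDs to generated tags.
documents = {"trial-042": "Phase II oncology trial interim results ..."}
index = {doc_id: generate_metadata(text) for doc_id, text in documents.items()}
print(index)
```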
Mahmood Majeed, ZS’s global head of the digital and technology practice, notes that his teams regularly work on connected data programs, because “connecting data to enable connected decisions across the enterprise gives you the ability to create differentiated experiences.”
Majeed points to Sanofi’s well-publicized example of connecting data with its analytics app, plai, which streamlines research and automates time-consuming data tasks. With this investment, Sanofi reports reducing research processes from weeks to hours and the potential to improve target identification in therapeutic areas like immunology, oncology, or neurology by 20% to 30%.
Achieving the payoff of personalization
Connected data also allows companies to focus on personalized last-mile experiences. This involves tailoring interactions with healthcare providers and understanding patients’ individual motivations, needs, and behaviors.
Early efforts around personalization have relied on “next best action” or “next best engagement” models to do this. These traditional machine learning (ML) models suggest the most appropriate information for field teams to share with healthcare providers, based on predetermined guidelines.
When compared with generative AI models, more traditional machine learning models can be inflexible, unable to adapt to individual provider needs, and they often struggle to connect with other data sources that could provide meaningful context. Therefore, the insights can be helpful but limited.
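A traditional next-best-action model of the kind described here is essentially a classifier over a fixed menu of content. The sketch below uses synthetic data and hypothetical features, but it shows where the rigidity comes from:

```python
# Toy "next best action" model: score a fixed list of content for a provider.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))     # provider features (specialty, history, ...)
y = rng.integers(0, 3, size=1000)  # which of 3 content pieces drew engagement

model = LogisticRegression(max_iter=1000).fit(X, y)

provider = rng.normal(size=(1, 8))
scores = model.predict_proba(provider)[0]
print("recommend content #", int(np.argmax(scores)))
# The feature set and content menu are fixed at training time, so the model
# cannot incorporate new context on the fly, the gap generative AI targets.
```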
Sheeran notes that companies have a real opportunity to use connected data to improve decision-making: “Because the technology is generative, it can create context based on signals. How does this healthcare provider like to receive information? What insights can we draw about the questions they’re asking? Can their professional history or past prescribing behavior help us provide a more contextualized answer? This is exactly what generative AI is great for.”
Beyond this, pharmaceutical companies spend millions of dollars annually to customize marketing materials. They must ensure the content is translated, tailored to the audience, and consistent with regulations for each location where they offer products and services. A process that usually takes weeks to develop individual assets has become a perfect use case for generative copy and imagery. With generative AI, the process is reduced from weeks to minutes and creates competitive advantage with lower costs per asset, Sheeran says.
Accelerating drug discovery with AI, one step at a time
Perhaps the greatest hope for AI in life sciences is its ability to generate insights and intellectual property using biology-specific foundation models. Sheeran says, “Our customers have seen the potential for very, very large models to greatly accelerate certain discrete steps in the drug discovery and development processes.” He continues, “Now we have a much broader range of models available, and an even larger set of models coming that tackle other discrete steps.”
By Sheeran’s count, there are approximately six major categories of biology-specific models, each containing five to 25 models under development or already available from universities and commercial organizations.
The intellectual property generated by biology-specific models is a significant consideration, supported by services such as Amazon Bedrock, which ensures customers retain control over their data, with transparency and safeguards to prevent unauthorized retention and misuse.
Finding differentiation in life sciences with scope, scale, and speed
Organizations can differentiate with scope, scale, and speed, while determining how AI can best augment human ingenuity and judgment. “Technology has become so easy to access. It’s omnipresent. What that means is that it’s no longer a differentiator on its own,” says Majeed. He suggests that life sciences leaders consider:
Scope: Have we zeroed in on the right problem? By clearly articulating the problem relative to the few critical things that could drive advantage, organizations can identify technology and business collaborators and set standards for measuring success and driving tangible results.
Scale: What happens when we implement a technology solution on a large scale? The highest-priority AI solutions should be the ones with the most potential for results. Scale determines whether an AI initiative will have a broader, more widespread impact on a business, which provides the window for a greater return on investment, says Majeed.
By thinking through the implications of scale from the beginning, organizations can be clear on the magnitude of change they expect and how bold they need to be to achieve it. The boldest commitment to scale is when companies go all in on AI, as Sanofi is doing, setting goals to transform the entire value chain and setting the tone from the very top.
Speed: Are we set up to quickly learn and correct course? Organizations that can rapidly learn from their data and AI experiments, adjust based on those learnings, and continuously iterate are the ones that will see the most success. Majeed emphasizes, “Don’t underestimate this component; it’s where most of the work happens. A good partner will set you up for quick wins, keeping your teams learning and maintaining momentum.”
Sheeran adds, “ZS has become a trusted partner for AWS because our customers trust that they have the right domain expertise. A company like ZS has the ability to focus on the right uses of AI because they’re in the field and on the ground with medical professionals giving them the ability to constantly stay ahead of the curve by exploring the best ways to improve their current workflows.”
Human-AI collaboration at the heart
Despite the allure of generative AI, the human element is the ultimate determinant of how it’s used. In certain cases, traditional technologies outperform it, with less risk, so understanding what it’s good for is key. By cultivating broad technology and AI fluency throughout the organization, leaders can teach their people to find the most powerful combinations of human-AI collaboration for technology solutions that work. After all, as Majeed says, “it’s all about people—whether it’s customers, patients, or our own employees’ and users’ experiences.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
Meta’s Next Llama AI Models Are Training on a GPU Cluster ‘Bigger Than Anything’ Else
Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target
Betaworks focuses on AI applications in its latest Camp
For its most recent Camp, VC and accelerator Betaworks was on the lookout for startups building native applications made possible by AI. The program was first announced in May. To explain this focus, managing partner John Borthwick wrote at the time that while things like AI chatbots and writing assistants exist, “we aren’t yet living […]
A quarter of the programming work at Google is done by AI
The post A quarter of the programming work at Google is done by AI appeared first on DutchCowboys.
OpenAI’s Transcription Tool Hallucinates. Hospitals Are Using It Anyway
What Can AI Chatbots Teach Us About How Humans Learn?
Do new AI tools like ChatGPT actually understand language the same way that humans do?
It turns out that even the inventors of these new large language models are debating that very question — and the answer will have huge implications for education and for all aspects of society if this technology can get to a point where it achieves what is known as Artificial General Intelligence, or AGI.
A new book by one of those AI pioneers digs into the origins of ChatGPT and the intersection of research on how the brain works and building new large language models for AI. It’s called “ChatGPT and the Future of AI,” and the author is Terrence Sejnowski, a professor of biology at the University of California, San Diego, where he co-directs the Institute for Neural Computation and the NSF Temporal Dynamics of Learning Center. He is also the Francis Crick Chair at the Salk Institute for Biological Studies.
Sejnowski started out as a physicist working on the origins of black holes, but early in his career he says he realized that it would be decades before new instruments could be built that could adequately measure the kinds of gravitational waves he was studying. So he switched to neuroscience, hoping to “pop the hood” on the human brain to better understand how it works.
“It seemed to me that the brain was just as mysterious as the cosmos,” he tells EdSurge. “And the advantage is you can do experiments in your own lab, and you don’t have to have a satellite.”
For decades, Sejnowski has focused on applying findings from brain science to building computer models, working closely at times with the two researchers who just won the Nobel Prize this year for their work on AI, John Hopfield and Geoffrey Hinton.
These days, computing power and algorithms have advanced to the level where neuroscience and AI are helping to inform each other, and even challenge our traditional understanding of what thinking is all about, he says.
“What has really been revealed is that we don't understand what ‘understanding’ is,” says Sejnowski. “We use the word, and we think we understand what it means, but we don't know how the brain understands something. We can record from neurons, but that doesn't really tell you how it functions and what’s really going on when you’re thinking.”
He says that new chatbots have the potential to revolutionize learning if they can deliver on the promise of being personal tutors to students. One drawback of the current approach, he says, is that LLMs focus on only one aspect of how the human brain organizes information, whereas “there are a hundred brain parts that are left out that are important for survival, autonomy for being able to maintain activity and awareness.” And it’s possible that those other parts of what makes us human may need to be simulated as well for something like tutoring to be most effective, he suggests.
The researcher warns that there are likely to be negative unintended consequences to ChatGPT and other technologies, just as social media led to the rise of misinformation and other challenges. He says there will need to be regulation, but that “we won't really know what to regulate until it really is out there and it's being used and we see what the impact is, how it's used.”
But he predicts that soon most of us will no longer use keyboards to interact with computers, instead using voice commands to have dialogues with all kinds of devices in our lives. “You’ll be able to go into your car and talk to the car and say, ‘How are you feeling today?’ [and it might say,] ‘Well, we're running low on gas.’ Oh, OK, where's the nearest gas station? Here, let me take you there.”
Listen to our conversation with Sejnowski on this week’s EdSurge Podcast, where he describes research to more fully simulate human brains. He also talks about his previous project in education, a free online course he co-teaches called “Learning How to Learn,” which is one of the most popular courses ever made, with more than 4 million students signed up over the past 10 years.
Cultivating the next generation of AI innovators in a global tech hub
A few years ago, I had to make one of the biggest decisions of my life: continue as a professor at the University of Melbourne or move to another part of the world to help build a brand new university focused entirely on artificial intelligence.
With the rapid development we have seen in AI over the past few years, I came to the realization that educating the next generation of AI innovators in an inclusive way and sharing the benefits of technology across the globe is more important than maintaining the status quo. I therefore packed my bags for the Mohammed bin Zayed University of Artificial Intelligence (MBZUAI) in Abu Dhabi.
The world in all its complexity
Today, the rewards of AI are mostly enjoyed by a few countries in what the Oxford Internet Institute dubs the “Compute North.” These countries, such as the US, the UK, France, Canada, and China, have dominated research and development, and built state-of-the-art AI infrastructure capable of training foundational models. This should come as no surprise, as these countries are home to many of the world’s top universities and large tech corporations.
But this concentration of innovation comes at a cost for the billions of people who live outside these dominant countries and have different cultural backgrounds.
Large language models (LLMs) are illustrative of this disparity. Researchers have shown that many of the most popular multilingual LLMs perform poorly with languages other than English, Chinese, and a handful of other (mostly) European languages. Yet, there are approximately 6,000 languages spoken today, many of them in communities in Africa, Asia, and South America. Arabic alone is spoken by almost 400 million people and Hindi has 575 million speakers around the world.
For example, LLaMA 2 performs up to 50% better in English than in Arabic when measured using the LM-Evaluation-Harness framework. Meanwhile, Jais, an LLM co-developed by MBZUAI, exceeds LLaMA 2 in Arabic and is comparable to Meta’s model in English.
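Comparisons like this boil down to running the same tasks through each model in each language and scoring the outputs. The sketch below is a drastically simplified stand-in for a harness such as LM-Evaluation-Harness; the models and one-question datasets are placeholders:

```python
# Toy cross-lingual evaluation: score each model on each language's QA set.
def accuracy(generate, dataset):
    hits = sum(expected.lower() in generate(prompt).lower()
               for prompt, expected in dataset)
    return hits / len(dataset)

models = {
    "model-a": lambda prompt: "42",       # placeholder "LLMs"
    "model-b": lambda prompt: "unknown",
}
datasets = {
    "en": [("What is 6 * 7?", "42")],     # placeholder QA pairs per language
    "ar": [("كم يساوي 6 * 7؟", "42")],
}

for name, generate in models.items():
    for lang, data in datasets.items():
        print(f"{name} / {lang}: {accuracy(generate, data):.0%}")
```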
Disparities like this show that the only way to develop AI applications that work for everyone is to create new institutions outside the Compute North that consistently and conscientiously invest in building tools designed for the thousands of language communities across the world.
Environments of innovation
One way to design new institutions is to study history and understand how today’s centers of gravity in AI research emerged decades ago. Before Silicon Valley earned its reputation as the center of global technological innovation, it was called Santa Clara Valley and was known for its prune farms. The main catalyst for its transformation was Stanford University, which had built a reputation as one of the best places in the world to study electrical engineering. Over the years, through a combination of government-led investment through grants and focused research, the university birthed countless inventions that advanced computing and created a culture of entrepreneurship. The results speak for themselves: Stanford alumni have founded companies such as Alphabet, NVIDIA, Netflix, and PayPal, to name a few.
Today, just as Stanford once did in Santa Clara Valley, we have an opportunity to build a new technology hub centered around a university.
And that’s why I chose to join MBZUAI, the world’s first research university focused entirely on AI. From MBZUAI’s position at the geographical crossroads of East and West, our goal is to attract the brightest minds from around the world and equip them with the tools they need to push the boundaries of AI research and development.
A community for inclusive AI
MBZUAI’s student body comes from more than 50 different countries around the globe. It has attracted top researchers such as Monojit Choudhury from Microsoft, Elizabeth Churchill from Google, Ted Briscoe from the University of Cambridge, Sami Haddadin from the Technical University of Munich, and Yoshihiko Nakamura from the University of Tokyo, just to name a few.
These scientists may be from different places but they’ve found a common purpose at MBZUAI with our interdisciplinary nature, relentless focus on making AI a force for global progress, and emphasis on collaboration across disciplines such as robotics, NLP, machine learning, and computer vision.
In addition to traditional AI disciplines, MBZUAI has built departments in sibling areas that can both contribute to and benefit from AI, including human computer interaction, statistics and data science, and computational biology.
Abu Dhabi’s commitment to MBZUAI is part of a broader vision for AI that extends beyond academia. MBZUAI’s scientists have collaborated with G42, an Abu Dhabi-based tech company, on Jais, an Arabic-centric LLM that is the highest-performing open-weight Arabic LLM; and also NANDA, an advanced Hindi LLM. MBZUAI’s Institute of Foundational Models has created LLM360, an initiative designed to level the playing field of large model research and development by publishing fully open source models and datasets that are competitive with closed source or open weights models available from tech companies in North America or China.
MBZUAI is also developing language models that specialize in Turkic languages, which have traditionally been underrepresented in NLP, yet are spoken by millions of people.
Another recent project has brought together native speakers of 26 languages from 28 different countries to compile a benchmark dataset that evaluates the performance of vision language models and their ability to understand cultural nuances in images.
These kinds of efforts to expand the capabilities of AI to broader communities are necessary if we want to maintain the world’s cultural diversity and provide everyone with AI tools that are useful to them. At MBZUAI, we have created a unique mix of students and faculty to drive globally inclusive AI innovation for the future. By building a broad community of scientists, entrepreneurs, and thinkers, the university is increasingly establishing itself as a driving force in AI innovation that extends far beyond Abu Dhabi, with the goal of developing technologies that are inclusive of the world’s diverse languages and cultures.
This content was produced by the Mohamed bin Zayed University of Artificial Intelligence. It was not written by MIT Technology Review’s editorial staff.