
Creator of fake Kamala Harris video Musk boosted sues Calif. over deepfake laws

19 September 2024 at 23:02

After California passed laws cracking down on AI-generated deepfakes of election-related content, a popular conservative influencer promptly sued, accusing California of censoring protected speech, including satire and parody.

In his complaint, Christopher Kohls—who is known as "Mr Reagan" on YouTube and X (formerly Twitter)—said that he was suing "to defend all Americans’ right to satirize politicians." He claimed that California laws, AB 2655 and AB 2839, were urgently passed after X owner Elon Musk shared a partly AI-generated parody video on the social media platform that Kohls created to "lampoon" presidential hopeful Kamala Harris.

AB 2655, known as the "Defending Democracy from Deepfake Deception Act," prohibits creating "with actual malice" any "materially deceptive audio or visual media of a candidate for elective office with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, within 60 days of the election." It requires social media platforms to block or remove any reported deceptive material and label "certain additional content" deemed "inauthentic, fake, or false" to prevent election interference.



IEEE-USA’s New Guide Helps Companies Navigate AI Risks



Organizations that develop or deploy artificial intelligence systems know that the use of AI entails a diverse array of risks, including legal and regulatory consequences, potential reputational damage, and ethical issues such as bias and lack of transparency. They also know that with good governance, they can mitigate the risks and ensure that AI systems are developed and used responsibly. The objectives include ensuring that the systems are fair, transparent, accountable, and beneficial to society.

Even organizations that are striving for responsible AI struggle to evaluate whether they are meeting their goals. That’s why the IEEE-USA AI Policy Committee published “A Flexible Maturity Model for AI Governance Based on the NIST AI Risk Management Framework,” which helps organizations assess and track their progress. The maturity model is based on guidance laid out in the U.S. National Institute of Standards and Technology’s AI Risk Management Framework (RMF) and other NIST documents.

Building on NIST’s work

NIST’s RMF, a well-respected document on AI governance, describes best practices for AI risk management. But the framework does not provide specific guidance on how organizations might evolve toward the best practices it outlines, nor does it suggest how organizations can evaluate the extent to which they’re following the guidelines. Organizations therefore can struggle with questions about how to implement the framework. What’s more, external stakeholders including investors and consumers can find it challenging to use the document to assess the practices of an AI provider.

The new IEEE-USA maturity model complements the RMF, enabling organizations to determine their stage along their responsible AI governance journey, track their progress, and create a road map for improvement. Maturity models are tools for measuring an organization’s degree of engagement or compliance with a technical standard and its ability to continuously improve in a particular discipline. Organizations have used such models since the 1980s to help them assess and develop complex capabilities.

The maturity model’s activities are built around the RMF’s four pillars, which enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems. The pillars are:

  • Map: The context is recognized, and risks relating to the context are identified.
  • Measure: Identified risks are assessed, analyzed, or tracked.
  • Manage: Risks are prioritized and acted upon based on a projected impact.
  • Govern: A culture of risk management is cultivated and present.

A flexible questionnaire

The foundation of the IEEE-USA maturity model is a flexible questionnaire based on the RMF. The questionnaire has a list of statements, each of which covers one or more of the recommended RMF activities. For example, one statement is: “We evaluate and document bias and fairness issues caused by our AI systems.” The statements focus on concrete, verifiable actions that companies can perform, while avoiding general and abstract statements such as “Our AI systems are fair.”

The statements are organized into topics that align with the RMF’s pillars. Topics, in turn, are organized into the stages of the AI development life cycle, as described in the RMF: planning and design, data collection and model building, and deployment. An evaluator who’s assessing an AI system at a particular stage can easily examine only the relevant topics.
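
As a rough illustration of that structure, the sketch below represents each statement with a topic, an RMF pillar, and a life-cycle stage, so an evaluator can filter to the stage being assessed. Apart from the bias-and-fairness statement quoted above, the statement wording, topic names, and groupings are hypothetical, not taken from the actual IEEE-USA questionnaire.

```python
# Minimal sketch of the questionnaire's structure. The first statement's text
# is quoted from the article; everything else is an illustrative placeholder.
from dataclasses import dataclass

@dataclass
class Statement:
    text: str    # a concrete, verifiable action
    topic: str   # topic aligned with an RMF pillar
    pillar: str  # "map", "measure", "manage", or "govern"
    stage: str   # AI life-cycle stage from the RMF

QUESTIONNAIRE = [
    Statement("We evaluate and document bias and fairness issues caused by our AI systems.",
              topic="fairness", pillar="measure", stage="data collection and model building"),
    Statement("We identify the intended context of use and the affected stakeholders.",
              topic="context", pillar="map", stage="planning and design"),
    Statement("We monitor deployed systems and act on reported incidents.",
              topic="monitoring", pillar="manage", stage="deployment"),
]

def statements_for_stage(stage: str) -> list[Statement]:
    """Return only the statements relevant to one life-cycle stage."""
    return [s for s in QUESTIONNAIRE if s.stage == stage]

for s in statements_for_stage("deployment"):
    print(s.pillar, "|", s.text)
```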

Scoring guidelines

The maturity model includes these scoring guidelines, which reflect the ideals set out in the RMF:

  • Robustness, extending from ad hoc to systematic implementation of the activities.
  • Coverage, ranging from engaging in none of the activities to engaging in all of them.
  • Input diversity, ranging from activities informed by input from a single team to activities informed by diverse input from internal and external stakeholders.

Evaluators can choose to assess individual statements or larger topics, thus controlling the level of granularity of the assessment. In addition, the evaluators are meant to provide documentary evidence to explain their assigned scores. The evidence can include internal company documents such as procedure manuals, as well as annual reports, news articles, and other external material.

After scoring individual statements or topics, evaluators aggregate the results to get an overall score. The maturity model allows for flexibility, depending on the evaluator’s interests. For example, scores can be aggregated by the NIST pillars, producing scores for the “map,” “measure,” “manage,” and “govern” functions.

The aggregation can expose systematic weaknesses in an organization’s approach to AI responsibility. If a company’s score is high for “govern” activities but low for the other pillars, for example, it might be creating sound policies that aren’t being implemented.

Another option for scoring is to aggregate the numbers by some of the dimensions of AI responsibility highlighted in the RMF: performance, fairness, privacy, ecology, transparency, security, explainability, safety, and third-party (intellectual property and copyright). This aggregation method can help determine if organizations are ignoring certain issues. Some organizations, for example, might boast about their AI responsibility based on their activity in a handful of risk areas while ignoring other categories.
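
A minimal sketch of how that kind of aggregation might be computed, assuming statement-level scores have already been assigned: the pillar names and responsibility dimensions below are the RMF terms used in the article, but the pairings and numeric scores are invented for illustration.

```python
# Hypothetical statement-level scores, aggregated two different ways.
from collections import defaultdict
from statistics import mean

scored_statements = [
    # (pillar, responsibility dimension, score on an agreed scale, e.g. 1-5)
    ("map",     "fairness",     3),
    ("measure", "fairness",     2),
    ("measure", "privacy",      4),
    ("manage",  "security",     2),
    ("govern",  "transparency", 5),
    ("govern",  "safety",       4),
]

def aggregate(by: int) -> dict[str, float]:
    """Average the scores grouped by one key: 0 = RMF pillar, 1 = dimension."""
    groups: dict[str, list[int]] = defaultdict(list)
    for record in scored_statements:
        groups[record[by]].append(record[2])
    return {key: round(mean(values), 2) for key, values in groups.items()}

print(aggregate(by=0))  # per-pillar scores: map / measure / manage / govern
print(aggregate(by=1))  # per-dimension scores: fairness, privacy, security, ...
```

In this toy example, a high average for “govern” alongside low averages for the other pillars would surface exactly the kind of policy-without-implementation gap described above.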

A road toward better decision-making

When used internally, the maturity model can help organizations determine where they stand on responsible AI and can identify steps to improve their governance. The model enables companies to set goals and track their progress through repeated evaluations. Investors, buyers, consumers, and other external stakeholders can employ the model to inform decisions about the company and its products.

When used by internal or external stakeholders, the new IEEE-USA maturity model can complement the NIST AI RMF and help track an organization’s progress along the path of responsible governance.

AI-generated content doesn’t seem to have swayed recent European elections 

19 September 2024 at 01:01

AI-generated falsehoods and deepfakes seem to have had no effect on election results in the UK, France, and the European Parliament this year, according to new research. 

Since the beginning of the generative-AI boom, there has been widespread fear that AI tools could boost bad actors’ ability to spread fake content with the potential to interfere with elections or even sway the results. Such worries were particularly heightened this year, when billions of people were expected to vote in over 70 countries. 

Those fears seem to have been unwarranted, says Sam Stockwell, the researcher at the Alan Turing Institute who conducted the study. He focused on three elections over a four-month period from May to August 2024, collecting data on public reports and news articles on AI misuse. Stockwell identified 16 cases of AI-enabled falsehoods or deepfakes that went viral during the UK general election and only 11 cases in the EU and French elections combined, none of which appeared to definitively sway the results. The fake AI content was created by both domestic actors and groups linked to hostile countries such as Russia. 

These findings are in line with recent warnings from experts that the focus on election interference is distracting us from deeper and longer-lasting threats to democracy.   

AI-generated content seems to have been ineffective as a disinformation tool in most European elections so far this year. This, Stockwell says, is because most of the people who were exposed to the disinformation already believed its underlying message (for example, that levels of immigration to their country are too high). Stockwell’s analysis showed that people who actively engaged with these deepfake messages by resharing and amplifying them already had affiliations or previously expressed views that aligned with the content. So the material was more likely to strengthen preexisting views than to influence undecided voters. 

Tried-and-tested election interference tactics, such as flooding comment sections with bots and exploiting influencers to spread falsehoods, remained far more effective. Bad actors mostly used generative AI to rewrite news articles with their own spin or to create more online content for disinformation purposes. 

“AI is not really providing much of an advantage for now, as existing, simpler methods of creating false or misleading information continue to be prevalent,” says Felix Simon, a researcher at the Reuters Institute for Journalism, who was not involved in the research. 

However, it’s hard to draw firm conclusions about AI’s impact upon elections at this stage, says Samuel Woolley, a disinformation expert at the University of Pittsburgh. That’s in part because we don’t have enough data.

“There are less obvious, less trackable, downstream impacts related to uses of these tools that alter civic engagement,” he adds.

Stockwell agrees: Early evidence from these elections suggests that AI-generated content could be more effective for harassing politicians and sowing confusion than changing people’s opinions on a large scale. 

Politicians in the UK, such as former prime minister Rishi Sunak, were targeted by AI deepfakes that, for example, showed them promoting scams or admitting to financial corruption. Female candidates were also targeted with nonconsensual sexual deepfake content, intended to disparage and intimidate them. 

“There is, of course, a risk that in the long run, the more that political candidates are on the receiving end of online harassment, death threats, deepfake pornographic smears—that can have a real chilling effect on their willingness to, say, participate in future elections, but also obviously harm their well-being,” says Stockwell. 

Perhaps more worrying, Stockwell says, is that his research indicates people are increasingly unable to discern the difference between authentic and AI-generated content in the election context. Politicians are also taking advantage of that. For example, political candidates in the European Parliament elections in France shared AI-generated content amplifying anti-immigration narratives without disclosing that it had been made with AI. 

“This covert engagement, combined with a lack of transparency, presents in my view a potentially greater risk to the integrity of political processes than the use of AI by the general population or so-called ‘bad actors,’” says Simon. 


How AI Can Foster Creative Thinking in the Classroom and Beyond

18 September 2024 at 18:55

For many years, educators have envisioned personalized learning as a way to tailor education to each student's unique needs. With advances in artificial intelligence, this vision is becoming a reality. AI has the potential to transform classrooms by offering personalized learning experiences that align with individual strengths, interests and learning needs.

At the same time, there is a growing emphasis on fostering creativity and authenticity in student work. AI can play a pivotal role in supporting the creative process, from generating ideas to refining projects. By making the creative process more explicit and accessible, AI empowers students to overcome obstacles and express their unique perspectives. This approach not only boosts engagement but also prepares students for a future where creative thinking and problem-solving are indispensable skills.

Recently, EdSurge spoke with Brian Johnsrud, the director of education learning and advocacy at Adobe, about using educational tools that not only harness the power of AI but also uphold the creative integrity of students and teachers. He highlights how AI can help personalize learning by allowing students to present their understanding and ideas in diverse and individualized ways. This shift from standardized assignments to personalized projects can make learning more engaging and relevant for each student.

EdSurge: How can educators safely and responsibly leverage AI for more personalized learning?

Johnsrud: The dream of learning personalization has been around for decades. The first phase really focused on getting the right content to the right student at the right time. Now, with AI, we're in the second phase, which isn't just about personalizing content but also about how students present their understanding and share their knowledge. Because a hallmark of creativity is uniqueness. So if we want students to be doing creative thinking, then 30 assignments done by 30 different students should all look different.

As for deploying AI safely and responsibly, schools are paying attention to a number of things right now. The first step is to check if the AI tool is actually designed for education specifically. If it wasn't made for the classroom, it probably wasn't made to improve learning. It won’t necessarily have those pedagogical pieces baked in or the accessibility and other edtech integrations that you need.

Part of being designed for safety and responsibility includes ensuring that the tools don't train their models on student or teacher projects because the creative work you develop as a teacher or student in the classroom should be respected and protected. If you're using a tool that benefits or takes inspiration from your creative masterpiece, it's not truly aligned with core creative values and academic integrity.

In what ways does AI help foster creativity while ensuring that student work remains authentic?

AI can support any part of the creative process. If a student is stuck in brainstorming, AI can help generate multiple ideas. If another student is good at brainstorming but needs help refining their work, AI can act as a thought partner, providing critique. This is what's exciting about AI designed for creativity! It makes the steps of the creative process explicit and helps students overcome obstacles. It removes that fear of the blank canvas.

I hope AI helps shift the focus from teachers being the content creators to students taking on that role. As an example inspired by my time as a social studies teacher, instead of asking students to write a paragraph about continuity and change in a historical era, you could have them choose an era, pick a topic that shows continuity, and design an imaginary propaganda poster from that period. The benefits of this creative assignment are clear to every educator. But with rigid standards and a packed curriculum, it's challenging to dedicate two weeks to it. The good news is, with AI, you could complete this assignment in just 30 minutes during class.

Interestingly, we crave authenticity more than ever in the age of AI. AI tools are moving beyond the basic prompt-and-result, “grab and go” approach. They're becoming integrated into our creative workflows, allowing us to bring our best ideas to life and express ourselves more genuinely. The goal isn't for AI to do the work for us but to help us create more authentic, meaningful content so we can be impactful storytellers. As a teacher, you should be able to see each student's unique voice in the work they produce.

The goal isn't for AI to do the work for us but to help us create more authentic, meaningful content so we can be impactful storytellers.

— Johnsrud

How do AI literacy and creative thinking equip students for future job market demands?

In just a few years, AI skills have become essential. The 2024 Work Trend Index Report found that 66 percent of industry leaders wouldn't hire someone without AI skills. It's amazing how quickly this has become a hiring dealbreaker. In that same report, 71 percent of leaders said they're more likely to hire a less experienced candidate with AI skills than a more experienced candidate without them. For students, this means having AI skills can level the playing field with more seasoned professionals.

At the same time, creativity and creative thinking are also in high demand. The World Economic Forum's 2023 Future of Jobs Report highlighted creative thinking as a top skill for the future. The creator economy is booming, with 200,000 new creative jobs created in the United States in 2023 alone. Students who can combine AI skills with creative problem-solving are able to seize some pretty incredible opportunities.

Research has shown that the more students are able to create, the more they thrive. And AI opens up more opportunities for student creation. A 2019 Gallup report found that educators who focus on creativity and use technology in transformative ways see significant gains — students are more engaged, demonstrate better critical thinking, retain more, make connections between subjects and achieve deeper learning. For educators, seeing students excited and proud of their work is incredibly rewarding, especially in a time of increased teacher burnout.

How can educators easily incorporate creative thinking into their lessons?

Start by identifying areas in your curriculum where students need to dive deep into a concept or fully demonstrate their understanding. These are the moments where creative activities can replace traditional methods like note-taking or multiple-choice questions and garner a much wider and deeper set of learning outcomes.

Apple Intelligence will support German, Italian, Korean, Portuguese, and Vietnamese in 2025

18 September 2024 at 14:00

Apple announced Wednesday that its generative AI offering will be available in even more languages in 2025. Additions to Apple Intelligence include English (India), English (Singapore), German, Italian, Korean, Portuguese, Vietnamese, and “others” yet to be announced. The feature will launch in American English when it arrives as part of the iOS 18.1 update. The […]


There are more than 120 AI bills in Congress right now

18 September 2024 at 11:30

More than 120 bills related to regulating artificial intelligence are currently floating around the US Congress.

They’re pretty varied. One aims to improve knowledge of AI in public schools, while another is pushing for model developers to disclose what copyrighted material they use in their training.  Three deal with mitigating AI robocalls, while two address biological risks from AI. There’s even a bill that prohibits AI from launching a nuke on its own.

The flood of bills is indicative of the desperation Congress feels to keep up with the rapid pace of technological improvements. “There is a sense of urgency. There’s a commitment to addressing this issue, because it is developing so quickly and because it is so crucial to our economy,” says Heather Vaughan, director of communications for the US House of Representatives Committee on Science, Space, and Technology.

Because of the way Congress works, the majority of these bills will never make it into law. But simply taking a look at all the different bills that are in motion can give us insight into policymakers’ current preoccupations: where they think the dangers are, what each party is focusing on, and more broadly, what vision the US is pursuing when it comes to AI and how it should be regulated.

That’s why, with help from the Brennan Center for Justice, which created a tracker with all the AI bills circulating in various committees in Congress right now, MIT Technology Review has taken a closer look to see if there’s anything we can learn from this legislative smorgasbord. 

It can seem as if Congress is trying to do everything at once when it comes to AI. To get a better sense of what may actually pass, it’s useful to look at which bills are moving along and could potentially become law. 

A bill typically needs to pass a committee (a smaller body within Congress) before it is voted on by the whole chamber. Many will fall short at this stage, while others will simply be introduced and then never spoken of again. This happens because so many bills are presented in each session that not all of them can be given equal consideration. If the leaders of a party don’t feel a bill from one of its members can pass, they may not even try to push it forward. And then, depending on the makeup of Congress, a bill’s sponsor usually needs to get some members of the opposite party to support it for it to pass. In the current polarized US political climate, that task can be herculean. 

Congress has passed legislation on artificial intelligence before. Back in 2020, it passed the National AI Initiative Act as part of the Defense Authorization Act, investing resources in AI research and providing support for public education and workforce training on AI.

And some of the current bills are making their way through the system. The Senate Commerce Committee pushed through five AI-related bills at the end of July. The bills focused on authorizing the newly formed US AI Safety Institute (AISI) to create test beds and voluntary guidelines for AI models, as well as on expanding education on AI, establishing public computing resources for AI research, and criminalizing the publication of deepfake pornography. The next step would be to put the bills on the congressional calendar to be voted on, debated, or amended.

“The US AI Safety Institute, as a place to have consortium building and easy collaboration between corporate and civil society actors, is amazing. It’s exactly what we need,” says Yacine Jernite, an AI researcher at Hugging Face.

The progress of these bills is a positive development, says Varun Krovi, executive director of the Center for AI Safety Action Fund. “We need to codify the US AI Safety Institute into law if you want to maintain our leadership on the global stage when it comes to standards development,” he says. “And we need to make sure that we pass a bill that provides computing capacity required for startups, small businesses, and academia to pursue AI.”

Following the Senate’s lead, the House Committee on Science, Space, and Technology just passed nine more bills regarding AI on September 11. Those bills focused on improving education on AI in schools, directing the National Institute of Standards and Technology (NIST) to establish guidelines for artificial-intelligence systems, and expanding the workforce of AI experts. These bills were chosen because they have a narrower focus and thus might not get bogged down in big ideological battles on AI, says Vaughan.

“It was a day that culminated from a lot of work. We’ve had a lot of time to hear from members and stakeholders. We’ve had years of hearings and fact-finding briefings on artificial intelligence,” says Representative Haley Stevens, one of the Democratic members of the House committee.

Many of the bills specify that any guidance they propose for the industry is nonbinding and that the goal is to work with companies to ensure safe development rather than curtail innovation. 

For example, one of the bills from the House, the AI Development Practices Act, directs NIST to establish “voluntary guidance for practices and guidelines relating to the development … of AI systems” and a “voluntary risk management framework.” Another bill, the AI Advancement and Reliability Act, has similar language. It supports “the development of voluntary best practices and technical standards” for evaluating AI systems. 

“Each bill contributes to advancing AI in a safe, reliable, and trustworthy manner while fostering the technology’s growth and progress through innovation and vital R&D,” committee chairman Frank Lucas, an Oklahoma Republican, said in a press release on the bills coming out of the House.

“It’s emblematic of the approach that the US has taken when it comes to tech policy. We hope that we would move on from voluntary agreements to mandating them,” says Krovi.

Avoiding mandates is a practical matter for the House committee. “Republicans don’t go in for mandates for the most part. They generally aren’t going to go for that. So we would have a hard time getting support,” says Vaughan. “We’ve heard concerns about stifling innovation, and that’s not the approach that we want to take.” When MIT Technology Review asked about the origin of these concerns, they were attributed to unidentified “third parties.” 

And fears of slowing innovation don’t just come from the Republican side. “What’s most important to me is that the United States of America is establishing aggressive rules of the road on the international stage,” says Stevens. “It’s concerning to me that actors within the Chinese Communist Party could outpace us on these technological advancements.”

But these bills come at a time when big tech companies have ramped up lobbying efforts on AI. “Industry lobbyists are in an interesting predicament—their CEOs have said that they want more AI regulation, so it’s hard for them to visibly push to kill all AI regulation,” says David Evan Harris, who teaches courses on AI ethics at the University of California, Berkeley. “On the bills that they don’t blatantly try to kill, they instead try to make them meaningless by pushing to transform the language in the bills to make compliance optional and enforcement impossible.”

“A [voluntary commitment] is something that is also only accessible to the largest companies,” says Jernite at Hugging Face, claiming that sometimes the ambiguous nature of voluntary commitments allows big companies to set definitions for themselves. “If you have a voluntary commitment—that is, ‘We’re going to develop state-of-the-art watermarking technology’—you don’t know what state-of-the-art means. It doesn’t come with any of the concrete things that make regulation work.”

“We are in a very aggressive policy conversation about how to do this right, and how this carrot and stick is actually going to work,” says Stevens, indicating that Congress may ultimately draw red lines that AI companies must not cross.

There are other interesting insights to be gleaned from looking at the bills all together. Two-thirds of the AI bills are sponsored by Democrats. This isn’t too surprising, since some House Republicans have claimed to want no AI regulations, believing that guardrails will slow down progress.

The topics of the bills (as specified by Congress) are dominated by science, tech, and communications (28%), commerce (22%), updating government operations (18%), and national security (9%). Topics that don’t receive much attention include labor and employment (2%), environmental protection (1%), and civil rights, civil liberties, and minority issues (1%).

The lack of a focus on equity and minority issues came into view during the Senate markup session at the end of July. Senator Ted Cruz, a Republican, added an amendment that explicitly prohibits any action “to ensure inclusivity and equity in the creation, design, or development of the technology.” Cruz said regulatory action might slow US progress in AI, allowing the country to fall behind China.

On the House side, there was also a hesitation to work on bills dealing with biases in AI models. “None of our bills are addressing that. That’s one of the more ideological issues that we’re not moving forward on,” says Vaughan.

The lead Democrat on the House committee, Representative Zoe Lofgren, told MIT Technology Review, “It is surprising and disappointing if any of my Republican colleagues have made that comment about bias in AI systems. We shouldn’t tolerate discrimination that’s overt and intentional any more than we should tolerate discrimination that occurs because of bias in AI systems. I’m not really sure how anyone can argue against that.”

After publication, Vaughan clarified that “[Bias] is one of the bigger, more cross-cutting issues, unlike the narrow, practical bills we considered that week. But we do care about bias as an issue,” and she expects it to be addressed within an upcoming House Task Force report.

One issue that may rise above the partisan divide is deepfakes. The Defiance Act, one of several bills addressing them, is cosponsored by a Democratic senator, Amy Klobuchar, and a Republican senator, Josh Hawley. Deepfakes have already been abused in elections; for example, someone faked Joe Biden’s voice for a robocall to tell citizens not to vote. And the technology has been weaponized to victimize people by incorporating their images into pornography without their consent. 

“I certainly think that there is more bipartisan support for action on these issues than on many others,” says Daniel Weiner, director of the Brennan Center’s Elections & Government Program. “But it remains to be seen whether that’s going to win out against some of the more traditional ideological divisions that tend to arise around these issues.” 

Although none of the current slate of bills has yet become law, the task of regulating any new technology, and specifically advanced AI systems that no one entirely understands, is difficult. The fact that Congress is making any progress at all may be surprising in itself. 

“Congress is not sleeping on this by any stretch of the means,” says Stevens. “We are evaluating and asking the right questions and also working alongside our partners in the Biden-Harris administration to get us to the best place for the harnessing of artificial intelligence.”

Update: We added further comments from the Republican spokesperson.

How and Why Gary Marcus Became AI's Leading Critic



Maybe you’ve read about Gary Marcus’s testimony before the Senate in May of 2023, when he sat next to Sam Altman and called for strict regulation of Altman’s company, OpenAI, as well as the other tech companies that were suddenly all-in on generative AI. Maybe you’ve caught some of his arguments on Twitter with Geoffrey Hinton and Yann LeCun, two of the so-called “godfathers of AI.” One way or another, most people who are paying attention to artificial intelligence today know Gary Marcus’s name, and know that he is not happy with the current state of AI.

He lays out his concerns in full in his new book, Taming Silicon Valley: How We Can Ensure That AI Works for Us, which was published today by MIT Press. Marcus goes through the immediate dangers posed by generative AI, which include things like mass-produced disinformation, the easy creation of deepfake pornography, and the theft of creative intellectual property to train new models (he doesn’t include an AI apocalypse as a danger; he’s not a doomer). He also takes issue with how Silicon Valley has manipulated public opinion and government policy, and explains his ideas for regulating AI companies.

Marcus studied cognitive science under the legendary Steven Pinker, was a professor at New York University for many years, and co-founded two AI companies, Geometric Intelligence and Robust.AI. He spoke with IEEE Spectrum about his path to this point.

What was your first introduction to AI?

Gary Marcus (Photo: Ben Wong)

Gary Marcus: Well, I started coding when I was eight years old. One of the reasons I was able to skip the last two years of high school was because I wrote a Latin-to-English translator in the programming language Logo on my Commodore 64. So I was already, by the time I was 16, in college and working on AI and cognitive science.

So you were already interested in AI, but you studied cognitive science both in undergrad and for your Ph.D. at MIT.

Marcus: Part of why I went into cognitive science is I thought maybe if I understood how people think, it might lead to new approaches to AI. I suspect we need to take a broad view of how the human mind works if we’re to build really advanced AI. As a scientist and a philosopher, I would say it’s still unknown how we will build artificial general intelligence or even just trustworthy general AI. But we have not been able to do that with these big statistical models, and we have given them a huge chance. There’s basically been $75 billion spent on generative AI, another $100 billion on driverless cars. And neither of them has really yielded stable AI that we can trust. We don’t know for sure what we need to do, but we have very good reason to think that merely scaling things up will not work. The current approach keeps coming up against the same problems over and over again.

What do you see as the main problems it keeps coming up against?

Marcus: Number one is hallucinations. These systems smear together a lot of words, and they come up with things that are true sometimes and not others. Like saying that I have a pet chicken named Henrietta is just not true. And they do this a lot. We’ve seen this play out, for example, in lawyers writing briefs with made-up cases.

Second, their reasoning is very poor. My favorite examples lately are these river-crossing word problems where you have a man and a cabbage and a wolf and a goat that have to get across. The system has a lot of memorized examples, but it doesn’t really understand what’s going on. If you give it a simpler problem, like one Doug Hofstadter sent to me, like: “A man and a woman have a boat and want to get across the river. What do they do?” It comes up with this crazy solution where the man goes across the river, leaves the boat there, swims back, something or other happens.

Sometimes he brings a cabbage along, just for fun.

Marcus: So those are boneheaded errors of reasoning where there’s something obviously amiss. Every time we point these errors out somebody says, “Yeah, but we’ll get more data. We’ll get it fixed.” Well, I’ve been hearing that for almost 30 years. And although there is some progress, the core problems have not changed.

Let’s go back to 2014 when you founded your first AI company, Geometric Intelligence. At that time, I imagine you were feeling more bullish on AI?

Marcus: Yeah, I was a lot more bullish. I was not only more bullish on the technical side. I was also more bullish about people using AI for good. AI used to feel like a small research community of people that really wanted to help the world.

So when did the disillusionment and doubt creep in?

Marcus: In 2018 I already thought deep learning was getting overhyped. That year I wrote this piece called “Deep Learning, a Critical Appraisal,” which Yann LeCun really hated at the time. I already wasn’t happy with this approach and I didn’t think it was likely to succeed. But that’s not the same as being disillusioned, right?

Then when large language models became popular [around 2019], I immediately thought they were a bad idea. I just thought this is the wrong way to pursue AI from a philosophical and technical perspective. And it became clear that the media and some people in machine learning were getting seduced by hype. That bothered me. So I was writing pieces about GPT-3 [an early version of OpenAI's large language model] being a bullshit artist in 2020. As a scientist, I was pretty disappointed in the field at that point. And then things got much worse when ChatGPT came out in 2022, and most of the world lost all perspective. I began to get more and more concerned about misinformation and how large language models were going to potentiate that.

You’ve been concerned not just about the startups, but also the big entrenched tech companies that jumped on the generative AI bandwagon, right? Like Microsoft, which has partnered with OpenAI?

Marcus: The last straw that made me move from doing research in AI to working on policy was when it became clear that Microsoft was going to race ahead no matter what. That was very different from 2016 when they released [an early chatbot named] Tay. It was bad, they took it off the market 12 hours later, and then Brad Smith wrote a book about responsible AI and what they had learned. But by the end of the month of February 2023, it was clear that Microsoft had really changed how they were thinking about this. And then they had this ridiculous “Sparks of AGI” paper, which I think was the ultimate in hype. And they didn’t take down Sydney after the crazy Kevin Roose conversation where [the chatbot] Sydney told him to get a divorce and all this stuff. It just became clear to me that the mood and the values of Silicon Valley had really changed, and not in a good way.

I also became disillusioned with the U.S. government. I think the Biden administration did a good job with its executive order. But it became clear that the Senate was not going to take the action that it needed. I spoke at the Senate in May 2023. At the time, I felt like both parties recognized that we can’t just leave all this to self-regulation. And then I became disillusioned [with Congress] over the course of the last year, and that’s what led to writing this book.

You talk a lot about the risks inherent in today’s generative AI technology. But then you also say, “It doesn’t work very well.” Are those two views coherent?

Marcus: There was a headline: “Gary Marcus Used to Call AI Stupid, Now He Calls It Dangerous.” The implication was that those two things can’t coexist. But in fact, they do coexist. I still think gen AI is stupid, and certainly cannot be trusted or counted on. And yet it is dangerous. And some of the danger actually stems from its stupidity. So for example, it’s not well-grounded in the world, so it’s easy for a bad actor to manipulate it into saying all kinds of garbage. Now, there might be a future AI that might be dangerous for a different reason, because it’s so smart and wily that it outfoxes the humans. But that’s not the current state of affairs.

You’ve said that generative AI is a bubble that will soon burst. Why do you think that?

Marcus: Let’s clarify: I don’t think generative AI is going to disappear. For some purposes, it is a fine method. You want to build autocomplete, it is the best method ever invented. But there’s a financial bubble because people are valuing AI companies as if they’re going to solve artificial general intelligence. In my view, it’s not realistic. I don’t think we’re anywhere near AGI. So then you’re left with, “Okay, what can you do with generative AI?”

Last year, because Sam Altman was such a good salesman, everybody fantasized that we were about to have AGI and that you could use this tool in every aspect of every corporation. And a whole bunch of companies spent a bunch of money testing generative AI out on all kinds of different things. So they spent 2023 doing that. And then what you’ve seen in 2024 are reports where researchers go to the users of Microsoft’s Copilot—not the coding tool, but the more general AI tool—and they’re like, “Yeah, it doesn’t really work that well.” There’s been a lot of reviews like that this last year.

The reality is, right now, the gen AI companies are actually losing money. OpenAI had an operating loss of something like $5 billion last year. Maybe you can sell $2 billion worth of gen AI to people who are experimenting. But unless they adopt it on a permanent basis and pay you a lot more money, it’s not going to work. I started calling OpenAI the possible WeWork of AI after it was valued at $86 billion. The math just didn’t make sense to me.

What would it take to convince you that you’re wrong? What would be the head-spinning moment?

Marcus: Well, I’ve made a lot of different claims, and all of them could be wrong. On the technical side, if someone could get a pure large language model to not hallucinate and to reason reliably all the time, I would be wrong about that very core claim that I have made about how these things work. So that would be one way of refuting me. It hasn’t happened yet, but it’s at least logically possible.

On the financial side, I could easily be wrong. But the thing about bubbles is that they’re mostly a function of psychology. Do I think the market is rational? No. So even if the stuff doesn’t make money for the next five years, people could keep pouring money into it.

The place that I’d like to prove me wrong is the U.S. Senate. They could get their act together, right? I’m running around saying, “They’re not moving fast enough,” but I would love to be proven wrong on that. In the book, I have a list of the 12 biggest risks of generative AI. If the Senate passed something that actually addressed all 12, then my cynicism would have been mislaid. I would feel like I’d wasted a year writing the book, and I would be very, very happy.

Why OpenAI’s new model is such a big deal

17 September 2024 at 10:59

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

Last weekend, I got married at a summer camp, and during the day our guests competed in a series of games inspired by the show Survivor that my now-wife and I orchestrated. When we were planning the games in August, we wanted one station to be a memory challenge, where our friends and family would have to memorize part of a poem and then relay it to their teammates so they could re-create it with a set of wooden tiles. 

I thought OpenAI’s GPT-4o, its leading model at the time, would be perfectly suited to help. I asked it to create a short wedding-themed poem, with the constraint that each letter could only appear a certain number of times so we could make sure teams would be able to reproduce it with the provided set of tiles. GPT-4o failed miserably. The model repeatedly insisted that its poem worked within the constraints, even though it didn’t. It would correctly count the letters only after the fact, while continuing to deliver poems that didn’t fit the prompt. Without the time to meticulously craft the verses by hand, we ditched the poem idea and instead challenged guests to memorize a series of shapes made from colored tiles. (That ended up being a total hit with our friends and family, who also competed in dodgeball, egg tosses, and capture the flag.)    
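
Incidentally, the tile constraint itself is simple to verify mechanically. Here is a minimal sketch that checks whether a candidate poem can be spelled from a fixed set of letter tiles; the tile inventory and sample lines are invented for illustration, not the actual wedding set.

```python
# Check whether a poem can be spelled with a fixed set of letter tiles.
# The tile inventory and sample lines are made up, not the real wedding set.
from collections import Counter

TILES = Counter({"e": 6, "o": 5, "t": 4, "l": 4, "a": 3, "r": 3, "s": 3,
                 "n": 2, "h": 2, "w": 2, "d": 2, "i": 2, "v": 1, "g": 1})

def fits_tiles(poem: str, tiles: Counter) -> bool:
    """True if no letter is needed more often than it appears in the tile set."""
    needed = Counter(ch for ch in poem.lower() if ch.isalpha())
    return all(count <= tiles[letter] for letter, count in needed.items())

print(fits_tiles("two hearts, one road", TILES))        # True with this inventory
print(fits_tiles("love letters last forever", TILES))   # False: needs an 'f' and a second 'v'
```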

However, last week OpenAI released a new model called o1 (previously referred to under the code name “Strawberry” and, before that, Q*) that blows GPT-4o out of the water for this type of task.

Unlike previous models that are well suited for language tasks like writing and editing, OpenAI o1 is focused on multistep “reasoning,” the type of process required for advanced mathematics, coding, or other STEM-based questions. It uses a “chain of thought” technique, according to OpenAI. “It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working,” the company wrote in a blog post on its website.
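
OpenAI hasn’t published exactly how o1’s internal reasoning works, but the general chain-of-thought idea can be sketched at the prompting level. In the sketch below, call_model is a hypothetical stand-in for any text-generation call, not a real OpenAI API; the point is only the contrast between asking for an answer directly and asking for intermediate steps.

```python
# Illustrative contrast between a direct prompt and a chain-of-thought prompt.
# call_model is a hypothetical placeholder for any LLM text-generation call.
def call_model(prompt: str) -> str:
    raise NotImplementedError("swap in a real model client here")

question = "A tile set has 4 't' tiles. Can the word 'tattletale' be spelled with it?"

direct_prompt = question + "\nAnswer yes or no."

chain_of_thought_prompt = (
    question + "\n"
    "Think step by step: count how many of each letter the word needs, "
    "compare each count with the tiles available, check the arithmetic, "
    "and only then give a final yes-or-no answer."
)

# A chain-of-thought response exposes intermediate steps the model can inspect
# and correct; o1 is trained to carry out this kind of reasoning internally
# rather than relying on the prompt to request it.
```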

OpenAI’s tests point to resounding success. The model ranks in the 89th percentile on questions from the competitive coding organization Codeforces and would be among the top 500 high school students in the USA Math Olympiad, which covers geometry, number theory, and other math topics. The model is also trained to answer PhD-level questions in subjects ranging from astrophysics to organic chemistry. 

In math olympiad questions, the new model is 83.3% accurate, versus 13.4% for GPT-4o. In the PhD-level questions, it averaged 78% accuracy, compared with 69.7% from human experts and 56.1% from GPT-4o. (In light of these accomplishments, it’s unsurprising the new model was pretty good at writing a poem for our nuptial games, though still not perfect; it used more Ts and Ss than instructed to.)

So why does this matter? The bulk of LLM progress until now has been language-driven, resulting in chatbots or voice assistants that can interpret, analyze, and generate words. But in addition to getting lots of facts wrong, such LLMs have failed to demonstrate the types of skills required to solve important problems in fields like drug discovery, materials science, coding, or physics. OpenAI’s o1 is one of the first signs that LLMs might soon become genuinely helpful companions to human researchers in these fields. 

It’s a big deal because it brings “chain-of-thought” reasoning in an AI model to a mass audience, says Matt Welsh, an AI researcher and founder of the LLM startup Fixie. 

“The reasoning abilities are directly in the model, rather than one having to use separate tools to achieve similar results. My expectation is that it will raise the bar for what people expect AI models to be able to do,” Welsh says.

That said, it’s best to take OpenAI’s comparisons to “human-level skills” with a grain of salt, says Yves-Alexandre de Montjoye, an associate professor in math and computer science at Imperial College London. It’s very hard to meaningfully compare how LLMs and people go about tasks such as solving math problems from scratch.

Also, AI researchers say that measuring how well a model like o1 can “reason” is harder than it sounds. If it answers a given question correctly, is that because it successfully reasoned its way to the logical answer? Or was it aided by a sufficient starting point of knowledge built into the model? The model “still falls short when it comes to open-ended reasoning,” Google AI researcher François Chollet wrote on X.

Finally, there’s the price. This reasoning-heavy model doesn’t come cheap. Though access to some versions of the model is included in premium OpenAI subscriptions, developers using o1 through the API will pay three times as much as they pay for GPT-4o—$15 per 1 million input tokens in o1, versus $5 for GPT-4o. The new model also won’t be most users’ first pick for more language-heavy tasks, where GPT-4o continues to be the better option, according to OpenAI’s user surveys. 
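
As a quick worked example of that price gap, using only the input-token rates quoted above (output-token pricing isn’t given here) and a purely hypothetical workload of 10 million input tokens:

```python
# Input-token cost comparison using the per-million-token rates quoted above.
# The 10-million-token workload is purely illustrative, and output-token
# pricing (not given here) is ignored.
RATE_PER_MILLION_INPUT_TOKENS = {"o1": 15.00, "gpt-4o": 5.00}  # US dollars

def input_cost(model: str, tokens: int) -> float:
    return RATE_PER_MILLION_INPUT_TOKENS[model] * tokens / 1_000_000

workload = 10_000_000  # 10 million input tokens
for model in RATE_PER_MILLION_INPUT_TOKENS:
    print(f"{model}: ${input_cost(model, workload):,.2f}")
# o1: $150.00 versus gpt-4o: $50.00 -- the threefold difference described above
```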

What will it unlock? We won’t know until researchers and labs have the access, time, and budget to tinker with the new model and find its limits. But it’s surely a sign that the race for models that can outreason humans has begun. 

Now read the rest of The Algorithm


Deeper learning

Chatbots can persuade people to stop believing in conspiracy theories

Researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity. 

Why this matters: The findings could represent an important step forward in how we engage with and educate people who espouse such baseless theories, says Yunhao (Jerry) Zhang, a postdoc fellow affiliated with the Psychology of Technology Institute who studies AI’s impacts on society. “They show that with the help of large language models, we can—I wouldn’t say solve it, but we can at least mitigate this problem,” he says. “It points out a way to make society better.” Read more from Rhiannon Williams here.

Bits and bytes

Google’s new tool lets large language models fact-check their responses

Called DataGemma, it uses two methods to help LLMs check their responses against reliable data and cite their sources more transparently to users. (MIT Technology Review)

Meet the radio-obsessed civilian shaping Ukraine’s drone defense 

Since Russia’s invasion, Serhii “Flash” Beskrestnov has become an influential, if sometimes controversial, force—sharing expert advice and intel on the ever-evolving technology that’s taken over the skies. His work may determine the future of Ukraine, and wars far beyond it. (MIT Technology Review)

Tech companies have joined a White House commitment to prevent AI-generated sexual abuse imagery

The pledges, signed by firms like OpenAI, Anthropic, and Microsoft, aim to “curb the creation of image-based sexual abuse.” The companies promise to set limits on what models will generate and to remove nude images from training data sets where possible.  (Fortune)

OpenAI is now valued at $150 billion

The valuation arose out of talks it’s currently engaged in to raise $6.5 billion. Given that OpenAI is becoming increasingly costly to operate, and could lose as much as $5 billion this year, it’s tricky to see how it all adds up. (The Information)

macOS 15 Sequoia: The Ars Technica review

18 September 2024 at 13:40

The macOS 15 Sequoia update will inevitably be known as "the AI one" in retrospect, introducing, as it does, the first wave of "Apple Intelligence" features.

That's funny because none of that stuff is actually ready for the 15.0 release that's coming out today. A lot of it is coming "later this fall" in the 15.1 update, which Apple has been testing entirely separately from the 15.0 betas for weeks now. Some of it won't be ready until after that—rumors say image generation won't be ready until the end of the year—but in any case, none of it is ready for public consumption yet.

But the AI-free 15.0 release does give us a chance to evaluate all of the non-AI additions to macOS this year. Apple Intelligence is sucking up a lot of the media oxygen, but in most other ways, this is a typical 2020s-era macOS release, with one or two headliners, several quality-of-life tweaks, and some sparsely documented under-the-hood stuff that will subtly change how you experience the operating system.


Why we need an AI safety hotline

16 September 2024 at 11:00

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of ChatGPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

Testing AI models before they’re released is a common approach to mitigating certain risks, and it may help regulators weigh up the costs and benefits—and potentially block models from being released if they’re deemed too dangerous. But the accuracy and comprehensiveness of these tests leave a lot to be desired. AI models may “sandbag” the evaluation—hiding some of their capabilities to avoid raising any safety concerns. And the evaluations have limited scope: current tests are unlikely to reliably uncover the full set of risks posed by any one model, or every risk that warrants further investigation. There’s also the question of who conducts the evaluations and how their biases may influence testing efforts. For those reasons, evaluations need to be used alongside other governance tools. 

One such tool could be internal reporting mechanisms within the labs. Ideally, employees should feel empowered to regularly and fully share their AI safety concerns with their colleagues, and they should feel those colleagues can then be counted on to act on the concerns. However, there’s growing evidence that, far from being promoted, open criticism is becoming rarer in AI labs. Just three months ago, 13 former and current workers from OpenAI and other labs penned an open letter expressing fear of retaliation if they attempt to disclose questionable corporate behaviors that fall short of breaking the law. 

How to sound the alarm

In theory, external whistleblower protections could play a valuable role in the detection of AI risks. These could protect employees fired for disclosing corporate actions, and they could help make up for inadequate internal reporting mechanisms. Nearly every state has a public policy exception to at-will employment termination—in other words, terminated employees can seek recourse against their employers if they were retaliated against for calling out unsafe or illegal corporate practices. However, in practice this exception offers employees few assurances. Judges tend to favor employers in whistleblower cases. The likelihood of AI labs’ surviving such suits seems particularly high given that society has yet to reach any sort of consensus as to what qualifies as unsafe AI development and deployment. 

These and other shortcomings explain why the aforementioned 13 AI workers, including ex-OpenAI employee William Saunders, called for a novel “right to warn.” Companies would have to offer employees an anonymous process for disclosing risk-related concerns to the lab’s board, a regulatory authority, and an independent third body made up of subject-matter experts. The ins and outs of this process have yet to be figured out, but it would presumably be a formal, bureaucratic mechanism. The board, regulator, and third party would all need to make a record of the disclosure. It’s likely that each body would then initiate some sort of investigation. Subsequent meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, what AI workers really want is something different. 

When Saunders went on the Big Technology Podcast to outline his ideal process for sharing safety concerns, his focus was not on formal avenues for reporting established risks. Instead, he indicated a desire for some intermediate, informal step. He wants a chance to receive neutral, expert feedback on whether a safety concern is substantial enough to go through a “high stakes” process such as a right-to-warn system. Current government regulators, as Saunders says, could not serve that role. 

For one thing, they likely lack the expertise to help an AI worker think through safety concerns. What’s more, few workers will pick up the phone if they know it’s a government official on the other end—that sort of call may be “very intimidating,” as Saunders himself said on the podcast. Instead, he envisages being able to call an expert to discuss his concerns. In an ideal scenario, he’d be told that the risk in question does not seem that severe or likely to materialize, freeing him up to return to whatever he was doing with more peace of mind. 

Lowering the stakes

What Saunders is asking for in this podcast isn’t a right to warn, then, as that suggests the employee is already convinced there’s unsafe or illegal activity afoot. What he’s really calling for is a gut check—an opportunity to verify whether a suspicion of unsafe or illegal behavior seems warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing up these gut checks could be a much more informal one. For example, AI PhD students, retired AI industry workers, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees via a confidential and anonymous phone conversation. Hotline volunteers would have familiarity with leading safety practices, as well as extensive knowledge of what options, such as right-to-warn mechanisms, may be available to the employee. 

As Saunders indicated, few employees will likely want to go from 0 to 100 with their safety concerns—straight from colleagues to the board or even a government body. They are much more likely to raise their issues if an intermediary, informal step is available.

Studying examples elsewhere

The details of how precisely an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for instance, it may need some way to escalate the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is another matter that needs thorough investigation. How to recruit and retain volunteers is another key question. Given leading experts’ broad concern about AI risk, some may be willing to participate simply out of a desire to lend a hand. Should too few folks step forward, other incentives may be necessary. The essential first step, though, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models to emulate in building out the first AI hotline. 

One place to start is with ombudspersons. Other industries have recognized the value of identifying these neutral, independent individuals as resources for evaluating the seriousness of employee concerns. Ombudspersons exist in academia, nonprofits, and the private sector. The distinguishing attribute of these individuals and their staffers is neutrality—they have no incentive to favor one side or the other, and thus they’re more likely to be trusted by all. A glance at the use of ombudspersons in the federal government shows that when they are available, issues may be raised and resolved sooner than they would be otherwise.

This concept is relatively new to the federal government. The US Department of Commerce established the first federal ombudsman in 1971. The office was tasked with helping citizens resolve disputes with the agency and investigating agency actions. Other agencies, including the Social Security Administration and the Internal Revenue Service, soon followed suit. A retrospective review of these early efforts concluded that effective ombudspersons can meaningfully improve citizen-government relations. On the whole, ombudspersons were associated with an uptick in voluntary compliance with regulations and cooperation with the government.

An AI ombudsperson or safety hotline would surely have different tasks and staff from an ombudsperson in a federal agency. Nevertheless, the general concept is worthy of study by those advocating safeguards in the AI industry. 

A right to warn may play a role in getting AI safety concerns aired, but we need to set up more intermediate, informal steps as well. An AI safety hotline is low-hanging regulatory fruit. A pilot made up of volunteers could be organized in relatively short order and provide an immediate outlet for those, like Saunders, who merely want a sounding board.

Kevin Frazier is an assistant professor at St. Thomas University College of Law and senior research fellow in the Constitutional Studies Program at the University of Texas at Austin.

Amazon's Secret Weapon in Chip Design Is Amazon



Big-name makers of processors, especially those geared toward cloud-based AI, such as AMD and Nvidia, have been showing signs of wanting to own more of the business of computing, purchasing makers of software, interconnects, and servers. The hope is that control of the “full stack” will give them an edge in designing what their customers want.

Amazon Web Services (AWS) got there ahead of most of the competition when it purchased chip designer Annapurna Labs in 2015 and proceeded to design CPUs, AI accelerators, servers, and data centers as a vertically integrated operation. Ali Saidi, the technical lead for the Graviton series of CPUs, and Rami Sinno, director of engineering at Annapurna Labs, explained the advantages of vertically integrated design and Amazon’s scale, and showed IEEE Spectrum around the company’s hardware testing labs in Austin, Tex., on 27 August.


What brought you to Amazon Web Services, Rami?

Rami Sinno [Photo: AWS]

Rami Sinno: Amazon is my first vertically integrated company. And that was on purpose. I was working at Arm, and I was looking for the next adventure, looking at where the industry is heading and what I want my legacy to be. I looked at two things:

One is vertically integrated companies, because this is where most of the innovation is—the interesting stuff is happening when you control the full hardware and software stack and deliver directly to customers.

And the second thing is, I realized that machine learning, AI in general, is going to be very, very big. I didn’t know exactly which direction it was going to take, but I knew that there is something that is going to be generational, and I wanted to be part of that. I already had that experience prior when I was part of the group that was building the chips that go into the Blackberries; that was a fundamental shift in the industry. That feeling was incredible, to be part of something so big, so fundamental. And I thought, “Okay, I have another chance to be part of something fundamental.”

Does working at a vertically-integrated company require a different kind of chip design engineer?

Sinno: Absolutely. When I hire people, the interview process is going after people that have that mindset. Let me give you a specific example: Say I need a signal integrity engineer. (Signal integrity makes sure a signal going from point A to point B, wherever it is in the system, makes it there correctly.) Typically, you hire signal integrity engineers that have a lot of experience in analysis for signal integrity, that understand layout impacts, can do measurements in the lab. Well, this is not sufficient for our group, because we want our signal integrity engineers also to be coders. We want them to be able to take a workload or a test that will run at the system level and be able to modify it or build a new one from scratch in order to look at the signal integrity impact at the system level under workload. This is where being trained to be flexible, to think outside of the little box has paid off huge dividends in the way that we do development and the way we serve our customers.
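To make the idea concrete, here is a minimal sketch, in Python, of the kind of system-level test Sinno describes: it drives a traffic pattern across a link while sampling error counters, so an engineer can see whether signal-integrity margins hold up under a real workload. The traffic generator and counter reads are hypothetical stand-ins (simulated below), not AWS tooling.

```python
import random
import time

_simulated_errors = 0  # backing state for the simulated counter


def run_traffic_burst(pattern: str, seconds: float) -> None:
    """Stand-in for a traffic generator that hammers a link
    (e.g., DDR or PCIe) with the given access pattern."""
    time.sleep(seconds)  # a real test would launch the workload here


def read_crc_error_counter(link: str) -> int:
    """Stand-in for reading a cumulative CRC/retry counter over a
    management interface; simulated with small random increments."""
    global _simulated_errors
    _simulated_errors += random.randint(0, 3)
    return _simulated_errors


def sweep_signal_integrity(link: str, patterns: list[str], burst_s: float = 0.1) -> dict:
    """Run each traffic pattern and record how many link errors
    accumulate while it is active."""
    results = {}
    for pattern in patterns:
        before = read_crc_error_counter(link)
        run_traffic_burst(pattern, burst_s)
        results[pattern] = read_crc_error_counter(link) - before
    return results


if __name__ == "__main__":
    report = sweep_signal_integrity(
        link="pcie0",
        patterns=["sequential_read", "random_write", "mixed_rw"],
    )
    for pattern, errors in report.items():
        print(f"{pattern:>16}: {errors} link errors during burst")
```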


At the end of the day, our responsibility is to deliver complete servers in the data center directly for our customers. And if you think from that perspective, you’ll be able to optimize and innovate across the full stack. A design engineer or a test engineer should be able to look at the full picture, because that’s his or her job: deliver the complete server to the data center and look where best to do optimization. It might not be at the transistor level or at the substrate level or at the board level. It could be something completely different. It could be purely software. And having that knowledge, having that visibility, will allow the engineers to be significantly more productive and deliver to the customer significantly faster. We’re not going to bang our head against the wall to optimize the transistor where three lines of code downstream will solve these problems, right?

Do you feel like people are trained in that way these days?

Sinno: We’ve had very good luck with recent college grads. Recent college grads, especially the past couple of years, have been absolutely phenomenal. I’m very, very pleased with the way that the education system is graduating the engineers and the computer scientists that are interested in the type of jobs that we have for them.

The other place that we have been super successful in finding the right people is at startups. They know what it takes, because at a startup, by definition, you have to do so many different things. People who’ve done startups before completely understand the culture and the mindset that we have at Amazon.


What brought you to AWS, Ali?

Ali Saidi [Photo: AWS]

Ali Saidi: I’ve been here about seven and a half years. When I joined AWS, I joined a secret project at the time. I was told: “We’re going to build some Arm servers. Tell no one.”

We started with Graviton 1. Graviton 1 was really the vehicle for us to prove that we could offer the same experience in AWS with a different architecture.

The cloud gave us the ability for a customer to try it in a very low-cost, low-barrier-of-entry way and say, “Does it work for my workload?” So Graviton 1 was really just the vehicle to demonstrate that we could do this, and to start signaling to the world that we want software around Arm servers to grow and that they’re going to be more relevant.

Graviton 2—announced in 2019—was kind of our first… what we think is a market-leading device that’s targeting general-purpose workloads, web servers, and those types of things.

It’s done very well. We have people running databases, web servers, key-value stores, lots of applications... When customers adopt Graviton, they bring one workload, and they see the benefits of bringing that one workload. And then the next question they ask is, “Well, I want to bring some more workloads. What should I bring?” There were some where it effectively wasn’t powerful enough, particularly around things like media encoding: taking videos and encoding them, re-encoding them, or encoding them to multiple streams. It’s a very math-heavy operation that required more [single-instruction multiple data] bandwidth. We needed cores that could do more math.

We also wanted to enable the [high-performance computing] market. So we have an instance type called HPC 7G where we’ve got customers like Formula One. They do computational fluid dynamics of how this car is going to disturb the air and how that affects following cars. It’s really just expanding the portfolio of applications. We did the same thing when we went to Graviton 4, which has 96 cores versus Graviton 3’s 64.


How do you know what to improve from one generation to the next?

Saidi: Far and wide, most customers find great success when they adopt Graviton. Occasionally, they see performance that isn’t the same level as their other migrations. They might say “I moved these three apps, and I got 20 percent higher performance; that’s great. But I moved this app over here, and I didn’t get any performance improvement. Why?” It’s really great to see the 20 percent. But for me, in the kind of weird way I am, the 0 percent is actually more interesting, because it gives us something to go and explore with them.

Most of our customers are very open to those kinds of engagements. So we can understand what their application is and build some kind of proxy for it. Or if it’s an internal workload, then we could just use the original software. And then we can use that to kind of close the loop and work on what the next generation of Graviton will have and how we’re going to enable better performance there.

What’s different about designing chips at AWS?

Saidi: In chip design, there are many different competing optimization points. You have all of these conflicting requirements, you have cost, you have scheduling, you’ve got power consumption, you’ve got size, what DRAM technologies are available and when you’re going to intersect them… It ends up being this fun, multifaceted optimization problem to figure out what’s the best thing that you can build in a timeframe. And you need to get it right.
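One toy way to picture that multifaceted trade-off is as a constrained selection problem: reject any candidate configuration that blows the power, cost, or schedule budget, then take the best performer that remains. The sketch below is purely illustrative; the candidate configurations and budgets are invented, not Graviton data.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    perf: float            # relative performance score
    power_w: float         # package power
    cost: float            # relative unit cost
    schedule_months: int   # time to production


# Invented candidates, for illustration only.
CANDIDATES = [
    Candidate("wide-core/HBM", perf=1.9, power_w=350, cost=1.8, schedule_months=30),
    Candidate("mid-core/DDR5", perf=1.4, power_w=250, cost=1.2, schedule_months=24),
    Candidate("small-core/DDR5", perf=1.1, power_w=180, cost=1.0, schedule_months=20),
]


def pick(candidates, max_power_w=300, max_cost=1.5, max_months=26):
    """Drop anything over budget, then take the highest-performing survivor."""
    feasible = [c for c in candidates
                if c.power_w <= max_power_w
                and c.cost <= max_cost
                and c.schedule_months <= max_months]
    return max(feasible, key=lambda c: c.perf) if feasible else None


if __name__ == "__main__":
    best = pick(CANDIDATES)
    print("chosen configuration:", best.name if best else "none feasible")
```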

One thing that we’ve done very well is taken our initial silicon to production.

How?

Saidi: This might sound weird, but I’ve seen other places where the software and the hardware people effectively don’t talk. The hardware and software people in Annapurna and AWS work together from day one. The software people are writing the software that will ultimately be the production software and firmware while the hardware is being developed in cooperation with the hardware engineers. By working together, we’re closing that iteration loop. When you are carrying the piece of hardware over to the software engineer’s desk, your iteration loop is years and years. Here, we are iterating constantly. We’re running virtual machines in our emulators before we have the silicon ready. We are taking an emulation of [a complete system] and running most of the software we’re going to run.

So by the time that we get the silicon back [from the foundry], the software’s done. And we’ve seen most of the software work at this point. So we have very high confidence that it’s going to work.
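A minimal sketch of the co-development pattern Saidi describes: the production software is written against a narrow device interface, and the same code runs whether the backend is a pre-silicon emulation model or, later, the real chip. The class and register names here are invented for illustration and are not Annapurna’s actual firmware interface.

```python
from abc import ABC, abstractmethod


class Device(ABC):
    """Narrow interface the production driver is written against."""

    @abstractmethod
    def write_reg(self, addr: int, value: int) -> None: ...

    @abstractmethod
    def read_reg(self, addr: int) -> int: ...


class EmulatedDevice(Device):
    """Pre-silicon backend: a software model of the chip's registers."""

    def __init__(self) -> None:
        self.regs: dict[int, int] = {}

    def write_reg(self, addr: int, value: int) -> None:
        self.regs[addr] = value

    def read_reg(self, addr: int) -> int:
        return self.regs.get(addr, 0)


# class SiliconDevice(Device): ...  # same interface, backed by real hardware later

CTRL_REG, STATUS_REG = 0x00, 0x04


def bring_up(dev: Device) -> bool:
    """Production bring-up code; unchanged when silicon arrives."""
    dev.write_reg(CTRL_REG, 0x1)          # enable the block
    return dev.read_reg(STATUS_REG) == 0  # no error bits set


if __name__ == "__main__":
    print("bring-up on emulator:", bring_up(EmulatedDevice()))
```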

The other piece of it, I think, is just being absolutely laser-focused on what we are going to deliver. You get a lot of ideas, but your design resources are approximately fixed. No matter how many ideas I put in the bucket, I’m not going to be able to hire that many more people, and my budget’s probably fixed. So every idea I throw in the bucket is going to use some resources. And if that feature isn’t really important to the success of the project, I’m risking the rest of the project. And I think that’s a mistake that people frequently make.

Are those decisions easier in a vertically integrated situation?

Saidi: Certainly. We know we’re going to build a motherboard and a server and put it in a rack, and we know what that looks like… So we know the features we need. We’re not trying to build a superset product that could allow us to go into multiple markets. We’re laser-focused into one.

What else is unique about the AWS chip design environment?

Saidi: One thing that’s very interesting for AWS is that we’re the cloud and we’re also developing these chips in the cloud. We were the first company to really push on running [electronic design automation (EDA)] in the cloud. We changed the model from “I’ve got 80 servers and this is what I use for EDA” to “Today, I have 80 servers. If I want, tomorrow I can have 300. The next day, I can have 1,000.”

We can compress some of the time by varying the resources that we use. At the beginning of the project, we don’t need as many resources. We can turn a lot of stuff off and not pay for it, effectively. As we get to the end of the project, now we need many more resources. And instead of saying, “Well, I can’t iterate this fast, because I’ve got this one machine, and it’s busy,” I can change that and instead say, “Well, I don’t want one machine; I’ll have 10 machines today.”

Instead of my iteration cycle for a big design like this being two days, or even one day, with these 10 machines I can bring it down to three or four hours. That’s huge.
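As a rough illustration of the arithmetic behind elastic EDA capacity, the sketch below models a regression run with a large parallel portion and a small serial tail (an Amdahl's-law-style assumption; the specific hours are invented and will not exactly match the figures Saidi quotes) and shows how wall-clock time falls as machines are added.

```python
def regression_hours(machines: int, parallel_hours: float = 46.0,
                     serial_hours: float = 2.0) -> float:
    """Wall-clock time for a run whose parallel portion divides evenly
    across machines and whose serial tail does not shrink."""
    return serial_hours + parallel_hours / machines


if __name__ == "__main__":
    for n in (1, 10, 100):
        print(f"{n:>3} machines: {regression_hours(n):5.1f} hours")
    # 1 machine: ~48 h (two days); 10 machines: ~6.6 h; 100 machines: ~2.5 h
```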

How important is Amazon.com as a customer?

Saidi: They have a wealth of workloads, and we obviously are the same company, so we have access to some of those workloads in ways that with third parties, we don’t. But we also have very close relationships with other external customers.

So last Prime Day, we said that 2,600 Amazon.com services were running on Graviton processors. This Prime Day, that number more than doubled to 5,800 services running on Graviton. And the retail side of Amazon used over 250,000 Graviton CPUs in support of the retail website and the services around that for Prime Day.


The AI accelerator team is colocated with the labs that test everything from chips through racks of servers. Why?

Sinno: So Annapurna Labs has multiple labs in multiple locations as well. This location here in Austin is one of the smaller labs. But what’s so interesting about the lab here in Austin is that you have all of the hardware and many software development engineers for machine learning servers and for Trainium and Inferentia [AWS’s AI chips] effectively co-located on this floor. For hardware developers and engineers, having the labs co-located on the same floor has been very, very effective. It speeds execution and iteration for delivery to the customers. This lab is set up to be self-sufficient with anything that we need to do, at the chip level, at the server level, at the board level. Because again, as I convey to our teams, our job is not the chip; our job is not the board; our job is the full server to the customer.

How does vertical integration help you design and test chips for data-center-scale deployment?

Sinno: It’s relatively easy to create a bar-raising server, something that’s very high-performance, very low-power. If we create 10 of them, 100 of them, maybe 1,000 of them, it’s easy. You can cherry-pick this, you can fix this, you can fix that. But the scale that AWS is at is significantly higher. We need to train models that require 100,000 of these chips. 100,000! And for training, it’s not run in five minutes. It’s run in hours or days or even weeks. Those 100,000 chips have to be up for the duration. Everything that we do here is to get to that point.

We start from a “what are all the things that can go wrong?” mindset. And we implement all the things that we know. But when you’re talking about cloud scale, there are always things that you have not thought of that come up. These are the 0.001-percent type issues.
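To see why “0.001-percent” issues dominate at this scale, here is a back-of-the-envelope sketch: with an assumed per-chip rare-fault probability per day (illustrative, not a measured AWS figure), a 100,000-chip training job running for weeks should still expect a steady stream of incidents.

```python
def expected_incidents(chips: int, days: float, daily_fault_prob: float) -> float:
    """Expected number of rare-fault incidents across a fleet,
    assuming independent faults at a fixed per-chip daily rate."""
    return chips * days * daily_fault_prob


if __name__ == "__main__":
    rate = 0.001 / 100  # illustrative: a fault hitting 0.001 percent of chips per day
    for days in (1, 7, 21):
        n = expected_incidents(100_000, days, rate)
        print(f"{days:>2} days: ~{n:.0f} expected incidents")
```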

In this case, we do the debug first in the fleet. And in certain cases, we have to do debugs in the lab to find the root cause. And if we can fix it immediately, we fix it immediately. Being vertically integrated, in many cases we can do a software fix for it. We use our agility to rush a fix while at the same time making sure that the next generation has it already figured out from the get go.

