SAN FRANCISCO – On Tuesday, TED AI 2024 kicked off its first day at San Francisco's Herbst Theater with a lineup of speakers that tackled AI's impact on science, art, and society. The two-day event brought a mix of researchers, entrepreneurs, lawyers, and other experts who painted a complex picture of AI with fairly minimal hype.
The second annual conference, organized by Walter and Sam De Brouwer, marked a notable shift from last year's broad existential debates and proclamations of AI as being "the new electricity." Rather than sweeping predictions about, say, looming artificial general intelligence (although there was still some of that, too), speakers mostly focused on immediate challenges: battles over training data rights, proposals for hardware-based regulation, debates about human-AI relationships, and the complex dynamics of workplace adoption.
The day's sessions covered a wide breadth of AI topics: physicist Carlo Rovelli explored consciousness and time, Project CETI researcher Patricia Sharma demonstrated attempts to use AI to decode whale communication, Recording Academy CEO Harvey Mason Jr. outlined music industry adaptation strategies, and even a few robots made appearances.
The annual IEEE Conference on Digital Platforms and Societal Harms focuses on how social media and similar platforms amplify hate speech, extremism, exploitation, misinformation, and disinformation, as well as what measures are being taken to protect people.
With the popularity of social media and the rise of artificial intelligence, content can be more easily created and shared online by individuals and bots, says Andre Oboler, the general chair of IEEE DPSH. The IEEE senior member is CEO of the Online Hate Prevention Institute, based in Sydney. Oboler cautions that a great deal of online content is fabricated, so some people are making economic, political, social, and health care decisions based on inaccurate information.
Misinformation (false information shared without intent to deceive) and disinformation (deliberately false information) can also propagate hate speech, discrimination, violent extremism, and child sexual abuse, he says, and can create hostile online environments, damaging people's confidence in information and endangering their lives.
To help prevent harm, he says, cutting-edge technical solutions and changes in public policy are needed. At the conference, academic researchers and leaders from industry, government, and not-for-profit organizations are gathering to discuss steps being taken to protect individuals online.
"Addressing the creation, propagation, and engagement of harmful digital information is a complex problem," Oboler says. "It requires broad collaboration among various stakeholders including technologists; lawmakers and policymakers; nonprofit organizations; private sectors; and end users.
"There is an emerging need for these stakeholders and researchers from multiple disciplines to have a joint forum to understand the challenges, exchange ideas, and explore possible solutions."
To register for in-person or online conference attendance, visit the event's website. Those who want to attend only the keynote panels can register for free access to the discussions. Attendees who register by 22 September using the code 25off2we receive a 25 percent discount.
Check out highlights from the 2023 IEEE Conference on Digital Platforms and Societal Harms.