
A new era of technology coverage on Vox

6 March 2023 at 16:22

For something that’s defined by change, the world of technology feels extra disruptive lately. Artificial intelligence is making headlines on a regular basis. Electric vehicles are taking over the roads. Microchips are made in America again. For the techno-optimists out there, we’re finally living in a version of the science fiction-inspired future we were promised.

But our present is more complicated than that. The tech industry is facing a series of crossroads. The businesses that once seemed like unstoppable profit machines are starting to sputter, slowing the meteoric growth of tech giants as leaders in Washington target them for being too big. A changing global economy is bringing high-tech manufacturing jobs back to the United States, as office workers find themselves torn between returning to the office and striking out on their own. Our roads aren’t actually ready for all those electric vehicles, and the AI technology that’s taking Silicon Valley by storm comes with unexpected consequences we’re discovering in real time as it rolls out to the public. Some sci-fi future we’ve built for ourselves, the skeptics may say.

It’s long been Recode’s mission to help you, our readers, understand technological change so that you can understand how it’s affecting your life. When Recode joined forces with Vox in 2019, we set out to join our expertise in technology and media with Vox’s command of explanatory journalism. And we’re immensely proud of what we’ve accomplished. Looking ahead, however, we think we can serve you even better behind a more united front.

That’s why, starting today, we’re retiring the Recode branding and continuing our mission under the Vox banner. Over time, we’ve heard some feedback from readers who found Vox’s sub-brands confusing — the exact opposite of what Vox strives for — so this change will help us more clearly communicate to our audience what Vox covers. We’re also excited for our reporters to collaborate more with other teams at Vox — everyone from the politics wonks to the science nerds — as technology’s role in our lives continues to expand.

Vox will continue to explain how technology is changing the world and how it’s changing us. We’ll have the same reporters and continue to cover many of the same topics you’re used to seeing on Recode: the vibe shift in Silicon Valley, the power struggle between Big Tech and Washington, the future of work, all things media. You’ll also notice a new focus on covering innovation and transformation: technology’s role in fighting climate change, the reinvention of American cities, artificial intelligence’s creep into the mainstream.

Of course, our distinctive approach wouldn’t exist without the influence of the indomitable innovators Kara Swisher and Walt Mossberg, who launched Recode nearly a decade ago. Walt has since retired, and after stepping down as Recode’s editor-at-large in 2019, Kara has been focused on building out her podcasts with Vox Media: On with Kara Swisher and Pivot. We’re immensely grateful to Walt and Kara for their pioneering work in tech journalism, and their vision will continue to guide the work we do in this new era.

Expect some exciting things in the months to come. We’ll soon relaunch Peter Kafka’s popular podcast under a new name and with a new look. Vox Media will also continue to host the Code Conference, where you will find Vox writers on stage alongside some of the most important leaders in the industry.

We have a tremendous future to look forward to, one filled with paradigm shifts, progress, and probably a good dose of uncertainty about what it all means. At Vox, we’re excited to keep explaining the news and helping you understand how it’s relevant to you.

The exciting new AI transforming search — and maybe everything — explained

4 March 2023 at 12:00
A drawing of a man holding a smartphone with a picture of a brain on it over his eyes. | Malte Mueller/Getty Images

Generative AI is here. Let’s hope we’re ready.

The world’s first generative AI-powered search engine is here, and it’s in love with you. Or it thinks you’re kind of like Hitler. Or it’s gaslighting you into thinking it’s still 2022, a more innocent time when generative AI seemed more like a cool party trick than a powerful technology about to be unleashed on a world that might not be ready for it.

If you feel like you’ve been hearing a lot about generative AI, you’re not wrong. After a generative AI tool called ChatGPT went viral a few months ago, it seems everyone in Silicon Valley is trying to find a use for this new technology. Generative AI is essentially a more advanced and useful version of the conventional artificial intelligence that already helps power everything from autocomplete to Siri. The big difference is that generative AI can create new content, such as images, text, audio, video, and even code — usually from a prompt or command. It can write news articles, movie scripts, and poetry. It can make images from weirdly specific prompts. And if you listen to some experts and developers, generative AI will eventually be able to make almost anything, including entire apps, from scratch. For now, the killer app for generative AI appears to be search.

One of the first major generative AI products for the consumer market is Microsoft’s new AI-infused Bing, which debuted in early February to great fanfare. The new Bing uses generative AI in its web search function to return results that appear as longer, written answers culled from various internet sources instead of a list of links to relevant websites. There’s also a new accompanying chat feature that lets users have human-seeming conversations with an AI chatbot. Google, the undisputed king of search for decades now, announced just a day before Microsoft’s Bing event that it plans to release its own version of AI-powered search as well as a chatbot called Bard in the coming weeks.

In other words, the AI wars have begun. And the battles may not just be over search engines. Generative AI is already starting to find its way into mainstream applications for everything from food shopping to social media.

Microsoft and Google are the biggest companies with public-facing generative AI products, but they aren’t the only ones working on it. Apple, Meta, and Amazon have their own AI initiatives, and there are plenty of startups and smaller companies developing generative AI or working it into their existing products. TikTok has a generative AI text-to-image system. Design platform Canva has one, too. An app called Lensa creates stylized selfies and portraits (sometimes with ample bosoms). And the open-source model Stable Diffusion can generate detailed and specific images in all kinds of styles from text prompts.

There’s a good chance we’re about to see a lot more generative AI showing up in a lot more applications, too. OpenAI, the AI developer behind ChatGPT, recently announced the release of APIs, or application programming interfaces, for ChatGPT and for Whisper, its speech recognition model. Companies like Instacart and Shopify are already building this tech into their products, using generative AI to write shopping lists and offer recommendations. There’s no telling how many more apps might come up with novel ways to take advantage of what generative AI can do.
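For a sense of what those integrations look like under the hood, here’s a minimal sketch in Python using the openai library as it shipped in early 2023 (the ChatCompletion interface). The grocery-list prompt is hypothetical, invented for illustration; real products like Instacart’s layer their own data and guardrails on top:

    import openai  # the 2023-era library: pip install openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

    # Ask gpt-3.5-turbo, the model behind ChatGPT, to draft a shopping list.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful grocery-planning assistant."},
            {"role": "user", "content": "Write a shopping list for a week of vegetarian dinners."},
        ],
    )

    # The generated text comes back as a chat message in the first choice.
    print(response["choices"][0]["message"]["content"])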

Generative AI has the potential to be a revolutionary technology, and it’s certainly being hyped as such. Venture capitalists, who are always looking for the next big tech thing, believe that generative AI can replace or automate a lot of creative processes, freeing up humans to do more complex tasks and making people more productive overall. But it’s not just creative work that generative AI can produce. It can help developers make software. It could improve education. It may be able to discover new drugs or become your therapist. It just might make our lives easier and better.

Or it could make things a lot worse. There are reasons to be concerned about the damage generative AI can do if it’s released to a society that isn’t ready for it — or if we ask the AI program to do something it isn’t ready for. How ethical or responsible generative AI technologies are is largely in the hands of the companies developing them, as there are few if any regulations or laws in place governing AI. This powerful technology could put millions of people out of work if it’s able to automate entire industries. It could spawn a destructive new era of misinformation. There are also concerns of bias due to a lack of diversity in the material and data that generative AI is trained on, or the people who are overseeing that training.

Nevertheless, powerful generative AI tools are making their way to the masses. If 2022 was the “year of generative AI,” 2023 may be the year that generative AI is actually put to use, ready or not.

The slow, then sudden, rise of generative AI

Conventional artificial intelligence is already integrated into a ton of products we use all the time, like autocomplete, voice assistants like Amazon’s Alexa, and even the recommendations for music or movies we might enjoy on streaming services. But generative AI is more sophisticated. It uses deep learning: algorithms that build artificial neural networks meant to mimic how human brains process information and learn. Those models are then fed enormous amounts of data to train on. Large language models, for example, power tools like ChatGPT; they train on text collected from around the internet until they learn to generate and mimic those kinds of texts and conversations on request. Image models are fed tons of images, along with captions that describe them, in order to learn how to create new content based on prompts.
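To make the “train on text, then generate text” loop concrete, here’s a toy sketch in Python. It builds a crude bigram model, which is nothing like the massive transformer networks behind ChatGPT, but it shows the same basic pattern: tally how words follow one another in training text, then sample new text from those tallies:

    import random
    from collections import defaultdict

    # A tiny "training corpus." Real language models train on huge swaths of the internet.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # "Training": record which words follow each word (a bigram table).
    following = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word].append(next_word)

    # "Generation": starting from a prompt word, repeatedly sample a plausible next word.
    word = "the"
    output = [word]
    for _ in range(8):
        candidates = following.get(word)
        if not candidates:
            break  # the word never appeared mid-text during training; dead end
        word = random.choice(candidates)
        output.append(word)

    print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"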

After years of development, most of it outside of public view, generative AI hit the mainstream in 2022 with the widespread releases of art and text models. Models like Stable Diffusion and DALL-E, which was released by OpenAI, were first to go viral, and they let anyone create new images from text prompts. Then came OpenAI’s ChatGPT (GPT stands for “generative pre-trained transformer”), which got everyone’s attention. This tool could create large, entirely new chunks of text from simple prompts. For the most part, ChatGPT worked really well, too — better than anything the world had seen before.

Though it’s one of many AI startups out there, OpenAI seems to have the most advanced or powerful products right now. Or at least, it’s the startup that has given the general public access to its services, thereby providing the most evidence of its progress in the generative AI field. This is a demonstration of its abilities as well as a source of even more data for OpenAI’s models to learn from.

OpenAI is also backed by some of the biggest names in Silicon Valley. It was founded in 2015 as a nonprofit research lab with $1 billion in support from the likes of Elon Musk, Reid Hoffman, Peter Thiel, Amazon, and former Y Combinator president Sam Altman, who is now the company’s CEO. OpenAI has since changed its structure to become a for-profit company but has yet to turn a profit or bring in much revenue. That’s not a problem yet, as OpenAI has gotten a considerable amount of funding from Microsoft, which began investing in OpenAI in 2019. And OpenAI is seizing on the wave of excitement for ChatGPT to promote its API services, which are not free. Neither is the company’s upcoming ChatGPT Plus service.

A drawing of a human hand reaching out to shake a robot hand. | Malte Mueller/Getty Images

Other big tech companies have for years been working on their own generative AI initiatives. There’s Apple’s Gaudi, Meta’s LLaMA and Make-a-Scene, Amazon’s collaboration with Hugging Face, and Google’s LaMDA (which is good enough that one Google engineer thought it was sentient). But thanks to its early investment in OpenAI, Microsoft had access to the AI project everyone knew about and was trying out.

In January 2023, Microsoft announced it was giving $10 billion to OpenAI, bringing its total investment in the company to $13 billion. From that partnership, Microsoft has gotten what it hopes will be a real challenge to Google’s longtime dominance in web search: a new Bing powered by generative AI.

AI search will give us the first glimpse of how generative AI can be used in our everyday lives ... if it works

Tech companies and investors are willing to pour resources into generative AI because they hope that, eventually, it will be able to create or generate just about any kind of content humans ask for. Some of those aspirations may be a long way from becoming reality, but right now, it’s possible that generative AI will power the next evolution of the humble internet search.

After months of rumors that both Microsoft and Google were working on generative AI versions of their web search engines, Microsoft debuted its AI-integrated Bing in early February in a splashy media event that showed off all the cool things it could do, thanks to OpenAI’s custom-built technology that powered it. Instead of entering a prompt for Bing to look up and return a list of relevant links, you could ask Bing a question and get a “complete answer” composed by Bing’s generative AI and culled from various sources on the web that you didn’t have to take the time to visit yourself. You could also use Bing’s chatbot to ask follow-up questions to better refine your search results.

Microsoft wants you to think the possibilities of these new tools are just about endless. And notably, Bing AI appeared to be ready for the general public when the company announced it last month. It’s now being rolled out to people on an ever-growing wait list and incorporated into other Microsoft products, like its Windows 11 operating system and Skype.

This poses a major threat to Google, which has had the search market sewn up for decades and makes most of its revenue from the ads placed alongside its search results. The new Bing could chip away at Google’s search dominance and its main moneymaker. And while Google has been working on its own generative AI models for years, its AI-powered search engine and corresponding chatbot, which it calls Bard, appear to be months away from debut. All of this suggests that, so far, Microsoft is winning the AI-powered search engine battle.

Or is it?

Once the new Bing made it to the masses, it quickly became apparent that the technology might not be ready for primetime after all. Right out of the gate, Bing made basic factual errors or made up stuff entirely, also known as “hallucinating.” What was perhaps more problematic, however, was that its chatbot was also saying some disturbing and weird things. One person asked Bing for movie showtimes, only to be told the movie hadn’t come out yet (it had) because the date was February 2022 (it wasn’t). The user insisted that it was, at that time, February 2023. Bing AI responded by telling the user they were being rude, had “bad intentions,” and had lost Bing’s “trust and respect.” A New York Times reporter pronounced Bing “not ready for human contact” after its chatbot — with a considerable amount of prodding from the reporter — began expressing its “desires,” one of which was the reporter himself. Bing also told an AP reporter that he was acting like Hitler.

In response to the bad press, Microsoft has tried to put some limits and guardrails on Bing, like limiting the number of interactions one person can have with its chatbot. But the question remains: How thoroughly could Microsoft have tested Bing’s chatbot before releasing it if it took only a matter of days for users to get it to give such wild responses?

Google, on the other hand, may have been watching this all unfold with a certain sense of glee. Its limited Bard rollout hasn’t exactly gone perfectly, but Bard hasn’t compared any of its users to one of the most reviled people in human history, either. At least, not that we know of. Not yet.

Again, Microsoft and Google aren’t the only companies working on generative AI, but their public releases have put more pressure on others to roll out their offerings as soon as possible, too. ChatGPT’s release and OpenAI’s partnership with Microsoft likely accelerated Google’s plans. Meanwhile, Meta is working to get its generative AI into as many of its own products as possible and just released a large language model of its own, called Large Language Model Meta AI, or LLaMA.

With the rollout of APIs that help developers add ChatGPT and Whisper to their applications, OpenAI seems eager to expand quickly. Some of these integrations seem pretty useful, too. Snapchat now has a chatbot called “My AI” for its paid subscribers, with plans to offer it to everyone soon. Initial reports say it’s just ChatGPT in Snapchat, but with even more restrictions about what it will talk about (no swearing, sex, or violence). Instacart will use ChatGPT in a feature called “Ask Instacart” that can answer customers’ questions about food. And Shopify’s Shop app has a ChatGPT-powered assistant to make personalized recommendations from the brands and stores that use the platform.

Generative AI is here to stay, but we don’t yet know if that’s for the best

Bing AI’s problems were just a glimpse of how generative AI can go wrong and have potentially disastrous consequences. That’s why pretty much every company in the field of AI goes out of its way to reassure the public that it’s being very responsible with its products and taking great care before unleashing them on the world. Yet for all of their stated commitment to “building AI systems and products that are trustworthy and safe,” Microsoft and OpenAI either didn’t or couldn’t ensure that Bing’s chatbot would live up to those principles, but they released it anyway. Google and Meta, by contrast, were very conservative about releasing their products — until Microsoft and OpenAI gave them a push.

Error-prone generative AI is being put out there by many other companies that have promised to be careful. Some text-to-image models are infamous for producing images with missing or extra limbs. There are chatbots that confidently declare the winner of a Super Bowl that has yet to be played. These mistakes are funny as isolated incidents, but we’ve already seen one publication rely on generative AI to write authoritative articles with significant factual errors.

A drawing of hands with 12 fingers using a laptop. | Malte Mueller/Getty Images

These screw-ups have been happening for years. Microsoft had one high-profile AI chatbot flop with its 2016 release of Tay, which Twitter users almost immediately trained to say some really offensive things. Microsoft quickly took it offline. Meta’s BlenderBot is based on a large language model and was released in August 2022. It didn’t go well. The bot seemed to hate Facebook, got racist and antisemitic, and wasn’t very accurate. It’s still available to try out, but after seeing what ChatGPT can do, it feels like a clunky, slow, and weird step backward.

There are even more serious concerns. Generative AI threatens to put a lot of people out of work if it’s good enough to replace them. It could have a profound impact on education. There are also legal questions over the material AI developers are using to train their models, which is typically scraped from millions of sources that the developers don’t have the rights to. And there are questions of bias both in the material that AI models are training on and the people who are training them.

On the other side, some conservative bomb-throwers have accused generative AI developers of moderating their platforms’ outputs too much and making them “woke” and biased against the right wing. To that end, Musk, the self-proclaimed free-speech absolutist and OpenAI critic as well as an early investor, is reportedly considering developing a ChatGPT rival that won’t have content restrictions or be trained on supposedly “woke” material.

And then there’s the fear not of generative AI but of the technology it could lead to: artificial general intelligence. AGI would be able to learn and think and solve problems like a human, if not better. This has given rise to science fiction-based fears that AGI will lead to an army of super-robots that quickly realize they have no need for humans and either turn us into slaves or wipe us out entirely.

There are plenty of reasons to be optimistic about generative AI’s future, too. It’s a powerful technology with a ton of potential, and we’ve still seen relatively little of what it can do and who it can help. Silicon Valley clearly sees this potential, and venture capitalists like Andreessen Horowitz and Sequoia seem to be all-in. OpenAI is valued at nearly $30 billion, despite not having yet proved itself as a revenue generator.

Generative AI has the power to upend a lot of things, but that doesn’t necessarily mean it’ll make them worse. Its ability to automate tasks may give humans more time to focus on the stuff that can’t be done by increasingly sophisticated machines, as has been true for technological advances before it. And in the near future — once the bugs are worked out — it could make searching the web better. In the years and decades to come, it might even make everything else better, too.

Oh, and in case you were wondering: No, generative AI did not write this explainer.

9 questions about the threats to ban TikTok, answered

8 March 2023 at 17:59
A teen sitting on the floor in a hallway holding a phone.
A TikTok ban would surely upset many of the nation’s teens. | iStockphoto/Getty Images

So you heard TikTok’s being banned. Here’s what’s actually happening.

Since its introduction to the US in 2018, TikTok has been fighting for its right to exist. First, the company struggled to convince the public that it wasn’t just for pre-teens making cringey memes; then it had to make the case that it wasn’t responsible for the platform’s rampant misinformation (or cultural appropriation … or pro-anorexia content … or potentially deadly trends … or general creepiness, etc). But mostly, and especially over the past three years, TikTok has been fighting against increased scrutiny from US lawmakers about its ties to the Chinese government via its China-based parent company, ByteDance.

Some of the scrutiny has resulted in partial TikTok bans on government-owned devices at the federal level and in a majority of states. Several bills have now been introduced that would ban TikTok outright. On March 7, a bipartisan group of 12 senators unveiled what might be the biggest threat to TikTok yet: a bill that would lay the groundwork for the president to ban the app.

But banning TikTok isn’t as simple as flipping a switch and deleting the app from every American’s phone, even if this new bill does pass. It’s a complex knot of technical and political decisions that could have consequences for US-China relations, for the cottage industry of influencers that has blossomed over the past five years, and for culture at large. The whole thing could also be overblown.

The thing is, nobody really knows if a TikTok ban, however broad or all-encompassing, will even happen at all or how it would work if it did. It’s been three years since the US government first began seriously considering the possibility, but the future remains just as murky as ever. Here’s what we know so far.

1. Do politicians even use TikTok? Do they know how it works or what they’re trying to ban?

Among the challenges lawmakers face in trying to ban TikTok outright is a public relations problem. Americans already think their government leaders are too old, ill-equipped to deal with modern tech, and generally out of touch. A kind of tradition has even emerged whenever Congress tries to do oversight of Big Tech: A committee will convene a hearing, tech CEOs will show up, and then lawmakers make fools of themselves by asking questions that reveal how little they know about the platforms they’re trying to rein in.

Congress has never heard from TikTok’s CEO, Shou Zi Chew, in a public committee hearing before, but representatives will get their chance this month. Unlike with many of the American social media companies they’ve scrutinized before, few members of Congress have extensive experience with TikTok. Few use it for campaign purposes, and even fewer use it for official purposes. Though at least a few dozen members have some kind of account, most don’t have big followings. There are some notable exceptions: Sen. Bernie Sanders and Reps. Katie Porter of California, Jeff Jackson of North Carolina, and Ilhan Omar of Minnesota use it frequently for official and campaign reasons and have big followings, while Sens. Jon Ossoff of Georgia and Ed Markey of Massachusetts are inactive on it after using it extensively during their campaigns in 2020 and 2021. —Christian Paz

2. Who is behind these efforts? Who is trying to ban TikTok or trying to impose restrictions?

While TikTok doesn’t have vocal defenders in Congress, it does have a long list of vocal antagonists from across the country, who span party and ideological lines in both the Senate and the House.

The leading Republicans hoping to ban TikTok are Sens. Marco Rubio of Florida and Josh Hawley of Missouri, and Rep. Mike Gallagher of Wisconsin, who is the new chairman of the House select committee on competition with China. All three have introduced some kind of legislation attempting to ban the app or force its parent company ByteDance to sell the platform to an American company. Many more Republicans in both chambers who are critics of China, like Sens. Tom Cotton of Arkansas and Ted Cruz of Texas, endorse some kind of tougher restriction on the app.

Independent Sen. Angus King of Maine has also joined Rubio in introducing legislation that would ban the app.

Most, but not all, Democrats have been reluctant to support a ban, saying they would prefer a broader approach. In the House, Gallagher’s Democratic counterpart, Rep. Raja Krishnamoorthi of Illinois, has also called for a ban or tougher restrictions, though he doesn’t think a ban will happen this year.

Meanwhile, a bipartisan group of senators is offering a different option with the recently introduced Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act. Led by Sens. Mark Warner of Virginia, a Democrat, and John Thune of South Dakota, a Republican, it isn’t an outright TikTok ban. Instead, it gives the government the authority to mitigate national security threats posed by technologies from hostile countries, up to a ban. TikTok would be subject to this bill. Warner, who runs the Senate Intelligence Committee, is perhaps the most vocal Democrat on the perceived dangers of TikTok, but had held off on signing onto a bill that would ban it specifically. —Christian Paz

3. What is the relationship between TikTok and the Chinese government? Do they have users’ info?

If you ask TikTok, the company will tell you there is no relationship and that it has not and would not give US user data to the Chinese government.

But TikTok is owned by ByteDance, a company based in Beijing that is subject to Chinese laws. Those laws compel businesses to assist the government whenever it asks, which many believe would force ByteDance to hand over any user data it has access to. The company could also be ordered to push certain kinds of content, like propaganda or disinformation, on American users.

We don’t know if this has actually happened at this point. We only know that it could, assuming ByteDance even has access to TikTok’s US user data and algorithms. TikTok has been working hard to convince everyone that it has protections in place that wall off US user data from ByteDance and, by extension, the Chinese government. —Sara Morrison

4. What happens to people whose income comes from TikTok? If there is a ban, is it even possible for creators to find similar success on Reels or Shorts or other platforms?

Most people who’ve counted on TikTok as their main source of revenue have long been prepared for a possible ban. Fifteen years into the influencer industry, it’s old hat that, eventually, social media platforms will betray their most loyal users in one way or another. Plus, after President Trump attempted a ban in the summer of 2020, many established TikTokers diversified their online presence by focusing more of their efforts on other platforms like Instagram Reels or YouTube Shorts.

That doesn’t mean that losing TikTok won’t hurt influencers. No other social platform is quite as good as TikTok at turning a completely unknown person or brand into a global superstar, thanks to its emphasis on discovery versus keeping people up to date on the users they already follow. Which means that without TikTok, it’ll be far more difficult for aspiring influencers to see the kind of overnight success enjoyed by OG TikTokers.

The good news is that there’s likely more money to be made on other platforms, specifically Instagram Reels. Creators can sometimes make tens of thousands of dollars per month from Instagram’s creator fund, which rewards users with money based on the number of views their videos get. Instagram is also viewed as a safer, more predictable platform for influencers in their dealings with brands, which can use an influencer’s previous metrics to set a fair rate for the work. (It’s a different story on TikTok, where even a post by someone with millions of followers could get buried by the algorithm, and it’s less evident that past success will continue in the future.) —Rebecca Jennings

5. What does the TikTok ban look like to me, the user? Am I going to get arrested for using TikTok?

Almost certainly not. The most likely way a ban would happen would be through an executive order that cites national security grounds to forbid business transactions with TikTok. Those transactions would likely be defined as services that facilitate the app’s operations and distribution. Which means you might have a much harder time finding and using TikTok, but you won’t go to jail if you do. —Sara Morrison

6. How is it enforced? What does the TikTok ban look like to the App Store and other businesses?

The most viable path as of now is using the International Emergency Economic Powers Act, which gives the president broader powers than he otherwise has. President Trump used this when he tried to ban TikTok in 2020, and lawmakers have since introduced TikTok-banning bills that essentially call for the current president to try again, but this time with additional measures in place that might avoid the court battles that stalled Trump’s attempt.

Trump’s ban attempt does give us some guidance on what such a ban would look like, however. The Trump administration spelled out some examples of banned transactions, including app stores not being allowed to carry it and internet hosting services not being allowed to host it. If you have an iPhone, it’s exceedingly difficult to get a native app on your phone that isn’t allowed in Apple’s App Store — or to get updates for that app if you downloaded it before this hypothetical ban came down. It’s also conceivable that companies would be prohibited from advertising on the app and content creators wouldn’t be able to use TikTok’s monetization tools.

There are considerable civil and criminal penalties for violating the IEEPA. Don’t expect Apple or Google or MrBeast to do so.

The RESTRICT Act would give the president another way to ban TikTok, as it gives the Commerce Department the authority to review and investigate information and communication technology from countries deemed to be adversaries, which would include TikTok and China. The commerce secretary could then recommend to the president which actions should be taken to mitigate any national security threat these technologies pose, up to banning them. The White House supports this bill. But a lot of things would have to happen before it’s a viable option to ban TikTok. First and foremost, the bill would have to actually pass. —Sara Morrison

7. On what grounds would TikTok be reinstated? Are there any changes big enough that would make it “safe” in the eyes of the US government?

TikTok is already trying to make those changes to convince a multi-agency government panel that it can operate in the US without being a national security risk. If that panel, called the Committee on Foreign Investment in the United States (CFIUS), can’t reach an agreement with TikTok, then it’s doubtful there’s anything more TikTok can do.

Well, there is one thing: If ByteDance sold TikTok off to an American company — something that was considered back in the Trump administration — most of its issues would go away. But even if ByteDance wanted to sell TikTok, it may not be allowed to. The Chinese government would have to approve such a sale, and it’s made it pretty clear that it won’t. —Sara Morrison

8. Is there any kind of precedent for banning apps?

China and other countries do ban US apps. The TikTok app doesn’t even exist in China. It has a domestic version, called Douyin, instead. TikTok also isn’t in India, which banned it in 2020. So there is precedent for other countries banning apps, including TikTok. But these are different countries with different laws. That kind of censorship doesn’t really fly here. President Trump’s attempt to ban TikTok in 2020 wasn’t going well in the courts, but we never got an ultimate decision because Trump lost the election and the Biden administration rescinded the order.

The closest thing we have to the TikTok debacle is probably Grindr. A Chinese company bought the gay dating app in 2018, only to be forced by CFIUS to sell it off the next year. It did, thus avoiding a ban. So we don’t know how a TikTok ban would play out if it came down to it. —Sara Morrison

9. How overblown is this?

At the moment, there’s no indication that the Chinese government has asked for private data of American citizens from ByteDance, or that the parent company has provided that information to Chinese government officials. But American user data has reportedly been accessed by China-based employees of ByteDance, according to a BuzzFeed News investigation last year. The company has also set up protocols under which employees abroad could remotely access American data. The company stresses that this is no different from how other “global companies” operate and that it is moving to funnel all US data through American servers. But the possibility of the Chinese government having access to this data at some point is fueling the national security concerns in the US.

This doesn’t speak to the other reasons driving government scrutiny of the app: data privacy and mental health. Some elected officials, like Markey, the senator from Massachusetts, would like to see stricter rules and regulations limiting the kind of information that younger Americans have to give up when using TikTok and other platforms, while others would like a closer look at limits on when children can use the app as part of broader regulations on Big Tech. Democratic members of Congress have also cited concerns about how much time children are spending online, the potentially detrimental effects of social media, including TikTok, on children, and the greater mental health challenges younger Americans are facing today. TikTok is already making efforts to fend off this criticism: At the start of March, the company announced new screen time limits for users under the age of 18. But even those measures are more like suggestions. —Christian Paz

Update, March 8, 12 pm ET: This story, originally published on March 2, has been updated with news of the RESTRICT Act.

TikTok isn’t really limiting kids’ time on its app

2 March 2023 at 13:00
A girl looking at her phone while hiding under the covers in bed.
TikTok’s younger users will now be told when they’ve been watching for a while. | Westend61/Getty Images

Teens can still click right on through the new screen time limit.

Amid growing concerns (and lawsuits) about social media’s impact on the mental health of children, TikTok announced on Wednesday that it’s setting a 60-minute time limit on screen time for users under 18 and adding some new parental controls. Those “limits,” however, are really more like suggestions. There are ways young users can continue to use the app even after the screen time limits have passed.

The news comes amid a larger discussion about the harms of social media on younger people, as well as an enormous amount of scrutiny on TikTok itself over its ties to China. And while the updates make TikTok look like it’s taking the lead on mitigating those harms, they likely won’t be enough to assuage the national security concerns many lawmakers have (or say they have) about TikTok. They might not even be enough to assuage lawmakers’ concerns about social media’s harms to children.

In the coming weeks, minor users will have a 60-minute screen time limit applied by default, at which point a prompt will pop up in the app notifying them and giving them the option to continue.

For users under 13, a parent or guardian will have to enter a passcode every 30 minutes to give their kid additional screen time. No parent code, no TikTok.

But users aged 13 to 17 can enter their own passcode and continue to use the app. They can also opt out of the 60-minute default screen time limit, but if they spend more than 100 minutes on TikTok a day they will be forced to set their own limits — which they can then bypass with their code. They’ll also get a weekly recap of how much time they’ve spent on the app. TikTok believes these measures will make teens more aware of the time they spend on the app, as they’re forced to be more active in choosing to do so.

Finally, parents who link their TikTok accounts to their children’s will have some additional controls and information, like knowing how much time their kids spend on the app and how often it’s been opened, setting times to mute notifications, and being able to set custom time limits for different days.

TikTok’s new screen time controls.
New controls for your (or your kid’s) TikTok experience. | TikTok

The Tech Oversight Project, a Big Tech accountability group, was not impressed by TikTok’s announcement, calling it “a fake ploy to make parents feel safe without actually making their product safe.”

“Companies like YouTube, Instagram, and TikTok centered their business models on getting kids addicted to the platforms and increasing their screen time to sell them ads,” Kyle Morse, Tech Oversight Project’s deputy executive director, said in a statement. “By design, tech platforms do not care about the well-being of children and teens.”

TikTok has long been criticized for its addictive nature, which causes some users to spend hours mindlessly scrolling through the app. It has implemented various screen time management tools throughout the years, and currently allows users to set their own time limits and put up reminders to take breaks or go to sleep. These new controls will let them customize those settings even more. TikTok says those controls will soon be available to adult users, too, but adults won’t be getting that time limit notice by default like the kids will.

TikTok is one of several social media apps that have introduced options for minor users. Meta allows parents to limit how much time their kids spend on Instagram, for instance. And the devices kids use these apps on also have various options for parents. But these aren’t enabled by default like TikTok’s 60-minute notice will be.

This all comes as lawmakers appear to be getting serious about laws that would regulate if and how children use social media. President Biden has said in both of his State of the Union addresses that social media platforms are profiting from “experimenting” on children and must be held accountable. Sen. Josh Hawley (R-MO) wants to ban children under 16 from using social media at all. On the less extreme side, Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) will be reintroducing a bipartisan bill called the Kids Online Safety Act, which would force social media platforms to have controls over kids’ usage and give parents the ability to set them.

TikTok specifically is also facing the possibility that it will be banned in the US, as lawmakers who are concerned over its China-based parent company have been increasingly vocal about the app and are introducing bills to ban it, believing China could use it to access US user data or push propaganda or misinformation onto US users. TikTok is already banned on federal government devices as well as government-owned devices in the majority of states. The company is currently in talks with the government on an agreement that would alleviate national security concerns and let it continue to operate in the country, but that process has dragged on for several years.

In the meantime, TikTok can say it’s taken the lead on controlling kids’ screen time with its default setting, even if its mostly voluntary measures don’t really do all that much. That might — but probably won’t — win it some points with lawmakers who want to ban it entirely. And that would be the biggest screen time control of them all.

This story was first published in the Recode newsletter.

Section 230, the internet law the Supreme Court could change, explained

23 February 2023 at 21:07
The US Supreme Court building exterior, seen from behind barricades.
The Supreme Court is considering two cases that could change the internet as we know it. | Eric Lee/Bloomberg via Getty Images

The pillar of internet free speech seems to be everyone’s target.

You may have never heard of it, but Section 230 of the Communications Decency Act is the legal backbone of the internet. The law was created almost 30 years ago to protect internet platforms from liability for many of the things third parties say or do on them.

Decades later, it’s never been more controversial. People from both political parties and all three branches of government have threatened to reform or even repeal it. The debate centers around whether we should reconsider a law from the internet’s infancy that was meant to help struggling websites and internet-based companies grow. After all, these internet-based businesses are now some of the biggest and most powerful in the world, and users’ ability to speak freely on them bears much bigger consequences.

While President Biden pushes Congress to pass laws to reform Section 230, its fate may lie in the hands of the judicial branch, as the Supreme Court is considering two cases — one involving YouTube and Google, another targeting Twitter — that could significantly change the law and, therefore, the internet it helped create.

Section 230 says that internet platforms hosting third-party content are not liable for what those third parties post (with a few exceptions). That third-party content could include things like a news outlet’s reader comments, tweets on Twitter, posts on Facebook, photos on Instagram, or reviews on Yelp. If a Yelp reviewer were to post something defamatory about a business, for example, the business could sue the reviewer for libel, but thanks to Section 230, it couldn’t sue Yelp.

Without Section 230’s protections, the internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely go dark. A repeal of Section 230 wouldn’t just affect the big platforms that seem to get all the negative attention, either. It could affect websites of all sizes and online discourse.

Section 230’s salacious origins

In the early ’90s, the internet was still in its relatively unregulated infancy. There was a lot of porn floating around, and anyone, including impressionable children, could easily find and see it. This alarmed some lawmakers. In an attempt to regulate this situation, in 1995 lawmakers introduced a bipartisan bill called the Communications Decency Act, which would extend laws governing obscene and indecent use of telephone services to the internet. This would also make websites and platforms responsible for any indecent or obscene things their users posted.

In the midst of this was a lawsuit between two companies you might recognize: Stratton Oakmont and Prodigy. The former is featured in The Wolf of Wall Street, and the latter was a pioneer of the early internet. But in 1994, Stratton Oakmont sued Prodigy for defamation after an anonymous user claimed on a Prodigy bulletin board that the financial company’s president engaged in fraudulent acts. The court ruled in Stratton Oakmont’s favor, saying that because Prodigy moderated posts on its forums, it exercised editorial control that made it just as liable for the speech on its platform as the people who actually made that speech. Meanwhile, Prodigy’s rival online service, CompuServe, was found not liable for a user’s speech in an earlier case because CompuServe didn’t moderate content.

Fearing that the Communications Decency Act would stop the burgeoning internet in its tracks, and mindful of the Prodigy decision, then-Rep. (now Sen.) Ron Wyden and Rep. Chris Cox authored an amendment to the CDA that said “interactive computer services” were not responsible for what their users posted, even if those services engaged in some moderation of that third-party content.

“What I was struck by then is that if somebody owned a website or a blog, they could be held personally liable for something posted on their site,” Wyden told Vox’s Emily Stewart in 2019. “And I said then — and it’s the heart of my concern now — if that’s the case, it will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.”

As the beginning of Section 230 says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” These are considered by some to be the 26 words that created the internet, but the law says more than that.

Section 230 also allows those services to “restrict access” to any content they deem objectionable. In other words, the platforms themselves get to choose what is and what is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument frequently employed by people who are suspended or banned from these platforms — that their Constitutional right to free speech has been violated — doesn’t apply. Wyden likens the dual nature of Section 230 to a sword and a shield for platforms: They’re shielded from liability for user content, and they have a sword to moderate it as they see fit.

The Communications Decency Act was signed into law in 1996. The indecency and obscenity provisions about transmitting porn to minors were immediately challenged by civil liberty groups and struck down by the Supreme Court, which said they were too restrictive of free speech. Section 230 stayed, and so a law that was initially meant to restrict free speech on the internet instead became the law that protected it.

This protection has allowed the internet to thrive. Think about it: Websites like Facebook, Reddit, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. On the other hand, a website that didn’t moderate anything at all would quickly become a spam-filled cesspool that few people would want to swim in.

That doesn’t mean Section 230 is perfect. Some argue that it gives platforms too little accountability, allowing some of the worst parts of the internet to flourish. Others say it allows platforms that have become hugely influential and important to suppress and censor speech based on their own whims or supposed political biases. Depending on who you talk to, internet platforms are either using the sword too much or not enough. Either way, they’re hiding behind the shield to protect themselves from lawsuits while they do it. Though it has been a law for nearly three decades, Section 230’s existence may have never been as precarious as it is now.

The Supreme Court might determine Section 230’s fate

Justice Clarence Thomas has made no secret of his desire for the court to consider Section 230, saying in multiple opinions that he believes lower courts have interpreted it to give too-broad protections to what have become very powerful companies. He got his wish in February 2023, when the court heard two similar cases that involve it. In both, the plaintiffs’ family members were killed in terrorist attacks, and the plaintiffs argued that the platforms bore some responsibility for hosting terrorist content. In the first, Gonzalez v. Google, the family of a woman killed in a 2015 terrorist attack in France said YouTube promoted ISIS videos and sold advertising on them, thereby materially supporting ISIS. In Twitter v. Taamneh, the family of a man killed in a 2017 ISIS attack in Turkey said the platform didn’t go far enough to identify and remove ISIS content, which they argued violated the Justice Against Sponsors of Terrorism Act — and could then mean that Section 230 doesn’t apply to such content.

These cases give the Supreme Court the chance to reshape, redefine, or even repeal the foundational law of the internet, which could fundamentally change it. And while the Supreme Court chose to take these cases on, it’s not certain that the justices will rule in favor of the plaintiffs. In oral arguments in late February, several justices seemed unconvinced during the Gonzalez v. Google arguments that they could or should rule for the plaintiffs, especially considering the monumental possible consequences and impact of such a decision. In Twitter v. Taamneh, the justices focused more on whether and how the Sponsors of Terrorism law applied to tweets than they did on Section 230. The rulings are expected in June.

In the meantime, don’t expect the original authors of Section 230 to go away quietly. Wyden and Cox submitted an amicus brief to the Supreme Court for the Gonzalez case, where they said: “The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Given the enormous volume of content created by Internet users today, Section 230’s protection is even more important now than when the statute was enacted.”

Congress and presidents are getting sick of Section 230, too

In 2018, two bills — the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) — were signed into law, which changed parts of Section 230. The updates mean that platforms can now be deemed responsible for prostitution ads posted by third parties. These changes were ostensibly meant to make it easier for authorities to go after websites that were used for sex trafficking, but they did so by carving out an exception to Section 230. That could open the door to even more exceptions in the future.

Amid all of this was a growing public sentiment that social media platforms like Twitter and Facebook were becoming too powerful. In the minds of many, Facebook even influenced the outcome of the 2016 presidential election by offering up its user data to shady outfits like Cambridge Analytica. There were also allegations of anti-conservative bias. Right-wing figures who once rode the internet’s relative lack of moderation to fame and fortune were being held accountable for various infringements of hateful content rules and kicked off the very platforms that helped create them. Alex Jones and his expulsion from Facebook and other social media platforms — even Twitter under Elon Musk won’t let him back — is perhaps the best example of this.

In a 2018 op-ed, Sen. Ted Cruz (R-TX) claimed that Section 230 required the internet platforms it was designed to protect to be “neutral public forums.” The law doesn’t actually say that, but many Republican lawmakers have introduced legislation that would fulfill that promise. On the other side, Democrats have introduced bills that would hold social media platforms accountable if they didn’t do more to prevent harmful content or if their algorithms promoted it.

There are some bipartisan efforts to change Section 230, too. The EARN IT Act from Sens. Lindsey Graham (R-SC) and Richard Blumenthal (D-CT), for example, would remove Section 230 immunity from platforms that didn’t follow a set of best practices to detect and remove child sexual abuse material. The partisan bills haven’t really gotten anywhere in Congress. But EARN IT, which was introduced in the last two sessions, passed out of committee in the Senate and was ready for a Senate floor vote. That vote never came, but Blumenthal and Graham have already signaled that they plan to reintroduce EARN IT this session for a third try.

In the executive branch, former President Trump became a very vocal critic of Section 230 in 2020 after Twitter and Facebook started deleting and tagging his posts that contained inaccuracies about Covid-19 and mail-in voting. He issued an executive order that said Section 230 protections should only apply to platforms that have “good faith” moderation, and then called on the FCC to make rules about what constituted good faith. This didn’t happen, and President Biden revoked the executive order months after taking office.

But Biden isn’t a fan of Section 230, either. During his presidential campaign, he said he wanted it repealed. As president, Biden has said he wants it to be reformed by Congress. Until Congress can agree on what’s wrong with Section 230, however, it doesn’t look likely that they’ll pass a law that significantly changes it.

However, some Republican states have been making their own anti-Section 230 moves. In 2021, Florida passed the Stop Social Media Censorship Act, which prohibits certain social media platforms from banning politicians or media outlets. That same year, Texas passed HB 20, which forbids large platforms from removing or moderating content based on a user’s viewpoint.

Neither law is currently in effect. A federal judge blocked the Florida law in 2022 due to the possibility of it violating free speech laws as well as Section 230. The state has appealed to the Supreme Court. The Texas law has made a little more progress. A district court blocked the law last year, and then the Fifth Circuit controversially reversed that decision before deciding to stay the law in order to give the Supreme Court the chance to take the case. We’re still waiting to see if it does.

If Section 230 were to be repealed — or even significantly reformed — it really could change the internet as we know it. It remains to be seen if that’s for better or for worse.

Update, February 23, 2023, 3 pm ET: This story, originally published on May 28, 2020, has been updated several times, most recently with the latest news from the Supreme Court cases related to Section 230.

The new Congress is enlisting kids in its ongoing fight with Big Tech

15 February 2023 at 12:10
Senators Lindsey Graham and Richard Blumenthal.
Sens. Lindsey Graham, left, and Richard Blumenthal are behind EARN IT, one of several recent online safety bills for children. | Drew Angerer/Getty Images

The latest salvo in reining in tech platforms: Laws to protect children from them.

It looks like the big bipartisan push against Big Tech in the new Congress will be about protecting kids. While antitrust and privacy efforts seem to be languishing for now, several child-focused online safety bills are being introduced this session. Senate Majority Leader Chuck Schumer has reportedly signaled that passing them is a priority for him. President Joe Biden recently said the same.

And they just might pass, if this week’s Senate Judiciary Committee hearing about protecting children online is any indication. Witnesses testified to a largely friendly audience of senators, some of whom authored prominent child online safety bills in previous sessions, about how children are harmed by online content and by the platforms that help push it. None of those bills have become law, but the new Congress seems intent on making it happen.

For several years now, there’s been a bicameral and bipartisan consensus in Congress that something has to be done about Big Tech’s power, but no agreement on what to do or how to do it. Democrats and Republicans can’t even agree on whether Big Tech platforms moderate content too much or not enough. Now, it looks like they’ve found their cause and their victims: children.

The desire to protect kids from internet harms and abuses is stronger than ever in the 118th Congress, making it increasingly likely that at least one law that purports to do so actually gets passed. But critics say that, in practice, those bills may not help children and may come at the expense of free speech and privacy.

In the Tuesday hearing, Sen. Richard Blumenthal (D-CT) indicated that he is working with Sen. Lindsey Graham (R-SC) to reintroduce EARN IT, a bill from the last Congress that would remove Section 230 protections from online services that don’t follow a list of best practices. Sen. Marsha Blackburn (R-TN) said that she and Sen. Blumenthal will also reintroduce the Kids Online Safety Act, or KOSA, which would give children under 16 tools to prevent the amplification of harmful content on social media platforms, and would give their parents the ability to limit their kids’ usage of those platforms.

“New Congress, a new start on this,” Blackburn said.

Sen. Blumenthal, along with Judiciary Committee chair Sen. Dick Durbin (D-IL) and Sen. Mazie Hirono (D-HI), also just reintroduced the Clean Slate for Kids Online Act, which would require websites to delete data collected from children under 13 upon request.

This week’s hearing wasn’t the only indication that children’s safety online is a priority for the new Congress. Schumer reportedly wants a vote on children’s online protection bills this summer. And President Biden, whose administration is also pushing for such a law, had a few things to say about kids and the internet in his recent State of the Union address.

“We must finally hold social media companies accountable for the experiment they are running on our children for profit,” he said. “And it’s time to pass bipartisan legislation to stop Big Tech from collecting personal data on kids and teenagers online.”

“Ban targeted advertising to children!” Biden shouted over the applause.

Sen. Ted Cruz (R-TX) is talking about this, too. On a call with reporters on Monday, the new ranking member of the Senate Commerce Committee said that while his main focus for Big Tech legislation is stopping content moderation that he believes harms free speech, he is talking to committee chair Sen. Maria Cantwell (D-WA) about a privacy law. There is bipartisan support for privacy laws, Cruz said, and the ones that target children are the most likely to actually get anywhere this session.

“That’s the easiest place to get bipartisan agreement,” Cruz said. “A comprehensive privacy bill is going to be a lot harder to bring together Democrats and Republicans.”

As Cruz said, there’s reason to be optimistic that bills framed as protecting children online will actually pass. We have precedent: The only federal consumer internet privacy law we have is the Children’s Online Privacy Protection Act. Then there’s Section 230, which gives online platforms immunity for content posted by their users. That pivotal protection was originally part of the Communications Decency Act, which was meant to stop kids from seeing porn online. Other parts of that law were later struck down, but Section 230 remains (as does online porn).

But all this apparent support still doesn’t mean the bills are slam dunks to become law. Biden’s State of the Union comments were very close to what he said in the address a year ago, and that didn’t seem to help EARN IT, KOSA, or Clean Slate pass.

So there’s no guarantee that those bills will fare any better this session, but there is some new pressure for them to do so: In lieu of federal action, states are now passing their own online protection laws for children. California’s Age-Appropriate Design Code Act will take effect in 2024. The law forces online services that are likely to be accessed by people under 18 to get permission before collecting their data, and it bans them from using that data in certain ways. Basically, websites have to be designed to give users under 18 the most privacy possible. California’s legislation is modeled on a United Kingdom law of the same name, and several states are considering similar laws.

Not everyone is on board with protecting children this way, however. Internet privacy and free speech advocates have criticized KOSA and EARN IT, saying that the laws may actually do the opposite of what their supporters claim. EARN IT, opponents say, could force services to drop encryption, exposing users’ communications to law enforcement (or anyone else who can get access to them), or could push platforms to monitor their own users’ public and private speech. They also say it won’t be an effective tool to fight child sexual abuse material, which is its stated purpose.

Critics of KOSA believe that the legislation would make censorship on platforms worse, and that it’s sure to be overbroad, because platforms won’t want to risk allowing anything that might get them in trouble. Also, they believe KOSA gives parents too much power over what their children (specifically, teenagers) can see and do, and might force platforms to create age verification systems that would hurt everyone’s privacy, as all users would have to submit personal information to a third party to prove their age just to use a service.

The other danger in child-targeted laws like these is that Congress will just stop there. History shows that once children are legally protected, lawmakers tend to punt on extending those protections to adults. They may even punt on additional laws for children. The Communications Decency Act and the Children’s Online Privacy Protection Act passed more than 25 years ago. Technology has changed a lot since then. The laws haven’t.

A bill that restricts some of the biggest companies in the world is a hard sell for some politicians, as evidenced by last year’s foot-dragging on bipartisan, bicameral antitrust and privacy bills. A bill that is said to protect children, on the other hand, is hard to vote against. But those bills may do more harm than good. They also give lawmakers a way to look like they’re doing something about online harm for some people without having to do the harder work of figuring out how to give those protections to everyone.

Leaked memo: Meta executive warns employees they’re still “at the whim of Apple”

15 February 2023 at 12:00
People posing for a group photo outside of Meta’s headquarters in Menlo Park, California. | Liu Guanguan/China News Service via Getty Images

Though the company’s stock jumped, it’s still facing big challenges from Apple, TikTok, and younger users.

Meta had an abysmal 2022. The value of its stock fell by 65 percent year over year, it laid off 11,000 people, and employee morale suffered.

There are signs things are turning around, though: Earlier this month, the company reported stronger-than-expected Q4 earnings and saw its stock price jump by more than 20 percent in a single day. While almost every other major tech company is continuing to struggle and has also laid off thousands of workers, none has seen a stock market rebound anywhere close to Meta’s.

That progress could be overstated, and the company isn’t out of the woods just yet, according to an internal memo from one of the company’s top executives that Recode obtained. Meta still faces major business challenges, including Apple limiting its advertising business, TikTok’s rising popularity, and weak sentiment toward its brands among users in the US.

Meta declined to comment.

In the note, which Meta Chief Marketing Officer Alex Schultz posted on Meta’s internal employee message board, Workplace, in early February, he cautioned employees to contain their excitement. “We have to keep our eyes on the horizon and not focus on the reaction of the street and our stock price,” he wrote. “I believe in this company ... but we’re still early in this turnaround, not everything will pan out.”

Schultz wrote that Meta is still “at the whim of Apple,” referencing the privacy feature the iPhone maker introduced in 2021 that limited the amount of data Meta can collect about many mobile users, making it harder for the company to target ads — a key part of its business model. Last February, Meta said the change would cost the company $10 billion in lost revenue a year — around as much as the company is spending annually on its metaverse ambitions. Since Apple made the change, Facebook has been using AI to recoup those losses and better target ads without Apple’s help. One approach, according to the Wall Street Journal, has been “bargaining with users” to get them to agree to tracking in exchange for seeing fewer ads. These efforts are still early, though, and Schultz’s memo reflects the power that Apple, as the gatekeeper of the iPhone App Store, still holds over Facebook and Instagram.

The executive also tempered expectations around Reels, Meta’s TikTok clone, saying that its “monetization efficiency” — how much money the company makes from ads on Reels — has grown “but is still very low.” Overall, Reels is “still smaller than TikTok,” Schultz wrote. Meta CEO Mark Zuckerberg said in November that, in countries outside of China, the amount of time users spend on Reels is about half of what they spend on TikTok.

In a post following this month’s earnings call, Zuckerberg also said that there are more than 140 billion Reels plays across Facebook and Instagram each day, up more than 50 percent from six months ago. But advertising within Reels still doesn’t make nearly as much money as advertising within the Facebook and Instagram feeds.

In terms of the overall popularity of Meta’s apps, Schultz was similarly blunt.

“We are seeing better numbers on young adults and teens in the US but we’re not satisfied, sentiment trends are better for our brands but that doesn’t mean they are good in the US and similar countries and I could go on and on,” Schultz wrote.

The memo is in line with Zuckerberg’s drumbeat of messaging in recent months: Employees need to work harder to make sure Meta is “winning” again. The company is reportedly planning another round of layoffs. In particular, Zuckerberg wants to cut layers of middle management as part of his drive for increased efficiency.

For Meta, a company whose nearly two decades of almost unstoppable growth suddenly halted in the past year, the note is also a demonstration of how tenuous the company’s trajectory remains. It’s too early to call Meta’s recent stock market gains a comeback.

As Meta and the rest of the tech industry face unprecedented economic uncertainty, Meta’s leaders aren’t planning to let the company rest on its laurels. Schultz’s note makes it clear: There’s still a lot more work to do before Meta can return to its glory days.

Read the full memo below:

Hey, team, just like when I talked in our Q&A after our stock price dropped precipitously last year there’s been another big street reaction to our earnings call (and the run up to it), this time up. It’s nice to see people thinking we’ve improved our discipline and we’re not as bad they thought. I’ve been in a few groups though where I’ve seen folks get quite excited. So I want to remind you what I said last year. We’re never as bad as they think we are at times like last year’s stock crash but we’re probably never as good as they think at times like this. We’re still early in this turnaround. We still have efficiency we need to find to run this company better in the new reality, we’re still at the whim of Apple, relative Monetization Efficiency has grown on reels but it is still very low, reels have grown a lot but they are still smaller than TikTok, we are seeing better numbers on young adults and teens in the US but we’re not satisfied, sentiment trends are better for our brands but that doesn’t mean they are good in the US and similar countries and I could go on and on. We have to keep our eyes on the horizon and not focus on the reaction of the street and our stock price. I believe in this company, I am really bullish in the long term future, all the things I felt positive about last year, I feel positive about, BUT we’re still early in this turnaround, not everything will pan out, we will have a lot of highs and lows yet and we have to keep a long term focus and level head no matter what the outside noise is, positive or negative.

Stay Focused and Keep Shipping

Where will all the laid-off tech workers go?

31 January 2023 at 19:40
A Google worker sits alone inside Google’s Bay View campus in Mountain View, California, in May 2022.
The path forward for tech workers won’t be easy, but there are options. | David Paul Morris/Bloomberg via Getty Images

The bright side to all these terrible tech layoffs.

Tech layoffs have become a fact of life over the last year, and especially so in the last few months, as tech firms big and small cut staff to reckon with slowing growth after seeing record profits during the pandemic. What’s less certain is just where these tens of thousands of tech workers will go next.

The good news is that there are still many open jobs for these workers, not only within the tech industry but also, increasingly, outside of it. There’s also increased interest in starting new businesses. And while the layoffs will certainly contribute to some people’s decisions to leave the tech industry or go out on their own, it’s worth looking at the problems with the industry itself, from burnout to bad layoff practices, that are making it a little easier for people to choose a life after tech.

“What drew everybody to Big Tech is because they got crazy with the perks, and it was so sexy — and everybody got so intrigued by that,” said Kate Duchene, CEO of professional staffing firm RGP. “The downside is you’re laid off with an email at 3 am. Or the reason you found out you’re laid off is because your badge doesn’t work anymore.”

This is all part of a cultural about-face happening at big tech companies, which for years gobbled up high-skilled workers by wooing them with big paychecks and lavish perks. Now these companies are preaching austerity and asking their giant workforces to act like startups again. At the same time, tech giants have gone from being exciting places to work to being not much different from the rest of corporate America, leading some workers to question just what they saw in the industry in the first place.

Though they’ve made a lot of headlines, the recent layoffs seem more like a course correction than a bubble bursting. That doesn’t mean they aren’t painful. Already this year, 78,000 tech industry workers have lost their jobs, following 160,000 last year, according to Layoffs.fyi. But while the layoffs are hugely destructive to those involved, their numbers aren’t yet enough to put a real dent in the massive tech job market.

As a whole, the US tech industry, which includes companies like Google and Apple, added employees for the 25th consecutive month in December, according to data from industry association CompTIA. The number of people working in tech occupations — the association defines these as computer-related technical roles, like software developer, network engineer, data analyst — was at a record high of about 6.5 million that month, and their unemployment rate was near a record low of 1.8 percent, compared to 3.5 percent for all jobs. It’s certainly possible those numbers shifted in January, but tens of thousands of layoffs won’t move the needle much in an industry of millions.

Most people with tech occupations — 59 percent — don’t actually work in the tech industry, according to CompTIA. That figure has remained remarkably stable for the last decade. That’s because even as finance, health care, and retail companies started requiring more tech talent to help them digitize and automate their businesses, the tech industry — especially software development — grew, too. But the balance could tilt even further to non-tech industry companies in the months and years to come.

“Heading into 2023, if we see some of these shifts that are occurring right now, it would not surprise me to see that we do see a larger representation [of tech workers] outside of tech,” Tim Herbert, chief research officer at CompTIA, told Recode. He added that he doesn’t expect some huge exodus of workers from the tech industry, but given the size of tech employment, even a 1 percentage point change would be notable.

It’s important to remember that the tech industry employs all kinds of workers. While we don’t have a breakdown of what types of jobs tech companies have been getting rid of, it’s safe to say many of them don’t require a computer science degree, like roles in human resources or sales. For example, while Google’s layoffs in California certainly hit people in engineering roles, they also included nearly 30 in-house massage therapists. For the employees who were laid off in recent weeks, the decision to find a new tech job, leave the tech industry, or start their own business might depend on what exactly they did in tech.

Workers with in-demand tech skill sets, namely engineers, will likely have the easiest time finding more work, wherever they decide to go. There were about 300,000 job postings for tech professionals in December, lower than their peak but roughly consistent with the past four years, according to a December 2022 report from tech hiring platform Dice. The biggest and fastest-growing industries for tech professionals are finance, manufacturing, and health care. Meanwhile, the list of biggest employers of tech talent includes big tech companies like Google and Amazon alongside corporate giants like Wells Fargo, General Motors, and Anthem Blue Cross.

“Given the scope of the downsizing in tech and the well-publicized reasons those decisions were made, we are likely to see many tech professionals think twice about taking their next role at either a tech giant or startup,” Nick Kolakowski, senior editor at Dice, told Recode.

Michael Skaff made the decision to leave the tech industry well before the current layoffs. He spent the first half of his 30-year career in a variety of IT jobs within the tech sector and the second half outside of it. He’s currently in the top tech role, CIO, at Jewish Senior Living Group, a health care management company. While he admits that the pace of technological change is much slower outside tech, he doesn’t think tech’s ethos of “move fast and break things” would be suitable in industries like health care, despite their need to modernize.

“There are ways to change within the existing flows of operations that allow for progress without disrupting or breaking something,” Skaff said. “You don’t want to break health care.”

For companies outside tech that couldn’t offer such high salaries or the cultural draw of the Googles of the world, the present moment is a chance to hire the tech workers they’ve long wanted, if they can make themselves attractive enough. Those new hires still won’t come cheap, though. While compensation is still the most important thing drawing tech workers to a job (it always has been), the No. 2 item on that list is a newer addition, according to a recent Gartner survey shared with Recode: work-life harmonization.

“It certainly presents an opportunity for traditional employers — banks, retailers, health care companies — to tap into and maybe win back some of the employees that left them,” said Graham Waller, a VP analyst at Gartner Research.

These layoffs also present an opportunity for workers to strike out on their own. Applications to form startups last year were the second-highest on record, and tech workers are adding to that trend.

To Joe Cardillo, starting their own business was a way to make work better for themself and others. Cardillo, who had been managing marketing teams at tech startups and was over the “grind culture,” started their own management coaching firm, The Early Manager, after going through a series of “very stressful” layoffs since the start of the pandemic. Cardillo took what they felt they did well at their former jobs, managing and teaching others to do so, and combined it with their ideas about how to build a good workplace, like giving employees more say in the conditions of their labor.

“I’m very interested in the idea of democracy at work,” Cardillo said.

That certainly feels like a far cry from the seeming brutality of recent tech layoffs, which have left many with hard feelings. Whether people will actually get better conditions or kinder treatment elsewhere remains to be seen.

We won’t know for years exactly where the workers affected by recent tech layoffs will end up. It’s possible that this is only a brief aberration in what’s otherwise a growing tech sector, or that people will eschew Big Tech to found startups that prove to be the next big thing — what many say happens during financial downturns but what might be more myth than truth. Or perhaps, every company truly is a tech company, and these layoffs put the rest of corporate America on more equal footing with tech.

David Jacobowitz had been working for tech companies pretty much his entire career, most recently in sales and marketing at TikTok, when he decided to leave voluntarily to pursue his passion: his own sugar-free chocolate business, Nebula Snacks. He’d been through his share of layoffs and knew that “loyalty is not necessarily rewarded.”

Beyond that, though, he realized that perhaps the tech industry just wasn’t for him.

“I looked at the trajectory and the lifestyle that I would have to live for the next 10 to 15 years if I wanted to climb the corporate ladder within tech and, when I really got down to it, I kind of answered the question: I don’t want to do that.”

Your favorite tech giant wants you to know it’s a startup again

30 January 2023 at 12:00
Larry Page and Sergey Brin stand back to back with arms crossed in front of yellow Google-branded servers.
Tech companies are doing their best to conjure up the good old days when they were startups. Google has even invited its founders back. | Kim Kulish/Corbis via Getty Images

Facebook, Google, and Amazon are trying to get their groove back.

When Meta’s head of people, Lori Goler, posted a memo to the company’s internal employee message board last summer asking employees to work with “increased intensity,” many workers pushed back.

In internal comments Recode reviewed, some employees took issue with the idea that they weren’t working hard enough already. Others felt the problems weren’t with the rank and file, but with management and the company’s massive size and bureaucratic structure, which some said made it hard to move quickly on daily work or to give feedback to leadership. Another complaint was simply that some Meta employees didn’t want to do more work for the same amount of money. Because many Meta employees are paid in company stock, which has declined precipitously in the past year, the workers would actually be doing more for less.

The real topic at hand was whether a tech giant can or should try to behave like a startup.

Massive technology companies like Meta used to be startups, of course. But that was decades ago, when they were much smaller and more agile, and when they were making products that had infinite possibilities for profit. Now these companies are asking their employees to work with “increased intensity” without any near-term payoff — in other words, to act like eager and ambitious startup workers — but in a vastly different scenario. Meta, Alphabet, and Amazon are huge and highly profitable companies contending with antitrust regulators for being too big and powerful, rather than too small and scrappy. Their employees are being asked to work harder or face layoffs not because their companies aren’t making any money, but because they’re not making it fast enough.

This kind of messaging is emerging as America’s biggest tech companies are starting to show their age. Meta, formerly known as Facebook, is old enough to vote. Alphabet, formerly Google, is in its mid-20s, and Amazon will soon enter its fourth decade of operations. At the same time, the rapid growth that has historically defined these companies has slowed. Wall Street has taken notice: The combined market caps of Meta, Google, and Amazon have declined $1.5 trillion in the last year.

As one Googler put it in an interview, “There was a time when Google was young and hungry. But we haven’t been young or hungry for quite some time.”

Leadership at these three companies is now doing its best to conjure the good old days — the scrappy days. Sundar Pichai, CEO of both Alphabet and Google, is trying to remind people that Google was once “small and scrappy,” telling workers that working hard and having fun “shouldn’t always equate to money.” The company laid off 12,000 people at the end of January. At Meta, which let 11,000 employees go in November, CEO Mark Zuckerberg has said he wants workers to “return to a scrappier culture.” Meanwhile, Amazon CEO Andy Jassy told Amazon employees this month to be “inventive, resourceful, and scrappy in this time when we’re not hiring expansively and eliminating some roles,” following massive corporate layoffs at the end of last year, with more to come.

“Any company that wants to have a lasting impact must practice disciplined prioritization and work with a high level of intensity to reach goals,” Meta told Recode in a response to requests for comment for this article. “The reports about these efforts are consistent with this focus and what we’ve already shared publicly about our operating style.”

Google and Amazon did not respond to requests for comment for this story.

The survival of these companies isn’t in question. What’s unclear is which changes they’ll need to make in order to grow and create world-changing products, as they have done in years past. Inevitably, the moves these companies make as they try to shift their businesses and culture will have huge ramifications that extend far beyond the technology industry, as tech companies tend to influence the behavior of corporate America in general.

For now, layoffs look like the biggest course correction in Silicon Valley. On one hand, getting rid of thousands of employees is a form of “right-sizing” for these companies, a correction for overhiring during the pandemic. On the other, asking remaining employees to get more done with fewer resources can be demoralizing and could drive away some of the best employees.

“I don’t think remaining a very large company and then saying, ‘We’re going into startup mode,’ is going to work,” tech historian and University of Washington professor Margaret O’Mara said. “You’re just going to have unhappy workers because they’re working really hard and they’re not seeing the upside.”

It probably doesn’t help that many tech companies are also scaling back on their most over-the-top perks. Google is cutting down on travel and recently laid off nearly 30 in-house massage therapists. Meta axed its complimentary laundry service. Across the board, there’s less free food to go around.

But Drew Pascarella, a senior finance lecturer at Cornell Business School, thinks the startup messaging could ultimately help break the negative news cycle around layoffs and create a more positive atmosphere for remaining employees.

“They’re using this to positively evoke the yesteryear of when it was fun and cool to work for tech in the Valley,” Pascarella said. He added that the message isn’t without merit, in that these companies still are innovative to an extent. They also have subdivisions that are still designed to behave like startups.

That said, tech giants are cutting back on moonshots, those ambitious R&D projects that typically don’t make much money. Google axed a neural network effort that modeled the brains of flies, made cuts to its innovation unit, and even laid off some workers in AI, which the company has said is still a “key” investment area. Amazon appears to be scaling back development of Alexa, which captured our collective imagination by making talking to machines mainstream but was also losing gobs of cash. Amazon told Recode it laid off “under 2,000” employees in its devices division, which includes Alexa, but claims it is still investing heavily in Alexa. Meta is perhaps the odd one out since it’s doubling down on its biggest moonshot, the metaverse, but the company has axed other major projects, like its Portal video chat hardware.

A sign outside the corporate headquarters of Meta, which changed its name from Facebook in 2021. | Josh Edelson/AFP via Getty Images

All these cuts and layoffs allow companies to save money in the short term, and the stock markets have responded positively. But too many cuts could jeopardize future growth: The companies don’t know whether a money-losing line item today might be the next Google Ads or Instagram. The cuts also mark a distinct departure from the companies’ startup roots, when potential growth was prioritized over profitability.

We talked to half a dozen employees at Google, Meta, and Amazon, whom we granted anonymity so as not to jeopardize their employment, as well as tech industry experts about how these companies are trying to right their ships and whether it can work. What happens next depends on how the companies execute these changes as well as how employees and investors respond — not to mention how innovative these companies can be when this is all over.

Growing pains

To some extent, tech workers have accepted certain kinds of cuts as reasonable. Opulent holiday celebrations, rampant swag, and omnipresent food were always considered a bit over the top, even compared to some of the more indulgent startups. (As one Google employee put it, “Coming in from smaller shops, I thought, ‘Man, these Google people are really spoiled.’”) So it was no surprise when Google restricted employee travel, including to social events or in-person events with virtual options. Few were shocked when Meta limited the number of free restaurants it offers at its main campus in Menlo Park.

There’s also no doubt that the rampant hiring during the pandemic left a bit of headcount bloat that these companies could afford to lose. Amazon nearly doubled its employee numbers to 1.5 million in the third quarter of 2022, up from 800,000 in 2019. Meta also nearly doubled its employees from 45,000 in 2019 to 87,000 in that time. Google had grown its headcount more than 50 percent since the end of 2019 to 187,000 in September 2022.

The problem, though, is that layoffs don’t necessarily save money. In conjunction with asking workers to work harder, they can also have unintended negative consequences.

“I think people are afraid in a way that I have not experienced in the tech industry in a very long time,” another Google employee said. While that can motivate people to work harder and to prove their projects are worthwhile to the company’s bottom line, the employee said it can also drive unwanted behaviors, like workers fighting “turf wars” over high-priority projects. The employee added that, in the past, teams might share code or combine feature requests when they found overlap in their work. That’s no longer the case. Instead, one team won’t wait for another or share code. They might, however, start talking about the deficiencies of the other team.

There’s also the distinct possibility that asking remaining workers to work harder and be more efficient won’t work, and will instead just demoralize them.

That’s how things have panned out at Google so far. For a while, the fact that the company had avoided major layoffs was a point of pride for its workers, one that suggested they were valued employees at a well-run company. Over the holidays, workers posted memes on the company’s internal communications channels thanking Pichai for not laying off workers and, by extension, not being like seemingly every other tech company.

Last week’s layoffs changed things. Google employees struggled to find a consistent rationale for the cuts, which seemed to span teams and tenures and hit even high performers.

“No one knows what’s stable now,” a Google software engineer told Recode after the layoffs. “Morale is low.” While layoffs might cause some people to work harder, he speculated that many others might feel demotivated and look for other work, given the breadth of the layoffs. “Their view of it is, ‘I don’t know if working hard means I keep my job. I don’t understand why the layoffs happened the way they did. My colleague over here was amazing. And they’re gone.’”

Layoffs at Meta also seem to have had a negative impact on employees, some of whom resent the idea that they are now expected to work harder.

“There’s no way I’m staying at Meta if I’m told to work startup hours,” one Meta employee told Recode.

David Yoffie, Harvard Business School professor and longtime tech board member at companies including Intel and HTC, says that the language around working harder partly stems from Elon Musk’s high-profile push for his Twitter employees to be “extremely hardcore” and a general feeling in Silicon Valley that the “intensity which characterized the early days is gone.” It amounts to little more than rhetoric, he said.

“These companies are too big for these kinds of short-term rants to have a big impact,” Yoffie explained. “Preaching you need to work harder to 70,000 people does not work.” Even worse, such cuts can cause some of the best talent to leave, ultimately harming the company’s prospects. “Whenever companies start to go down this route, the very best employees, who are going to get hired even in a bad environment, end up moving, and that weakens the company as a whole,” he added.

Former US Secretary of State John Kerry stepping out of an early Google self-driving car in 2016. | Smith Collection/Gado/Getty Images

But some Silicon Valley executives are energized by the cuts. For too long during tech’s boom cycle, the thinking goes, big companies hired endlessly. Now that the tech economy has tightened, it’s a good time for executives to “cut that fat,” as one former Meta manager told Recode in September. That feeling might be shared by leaders at Google, too.

“Google — like any large company — has parts where people work incredibly hard, but there’s large parts of the company where it’s just a very comfortable place to be,” said Laszlo Bock, Google’s former head of HR and co-founder of the HR tech company Humu. With the economic downturn, Bock said, there’s an opportunity for management to get rid of longtime employees who are highly paid and perceived to be a little too comfortable.

Employees and experts are more ambivalent about how these companies are now cutting moonshots. That’s largely because it can be hard to tell in the early stages of development what will be the next big thing and what’s just a waste of time and money. A former Amazon employee told Recode that there has been less discipline lately around cutting products that don’t actually meet customer needs, pointing to how quickly the company had once ceased production of its Fire Phone. Another said that since Jassy became CEO in 2021, the company has been reluctant to invest in or even consider moonshot ideas.

Several Google employees said that the company has long kept unprofitable projects going beyond their usefulness, and that getting rid of some of them might be for the best. Google is famous for trying unexpected new things. Some of these efforts have turned into profitable products, like Gmail, while others have helped prop up Google’s reputation for innovation. The fear is that by getting rid of these risky side projects, the company might miss the next big thing. There is also a fear that something has changed at the company, since few of these projects have panned out in recent years.

“Why isn’t it working? What is the special sauce that we used to have when we were doing Maps, and Google Docs, and Sheets and Cloud even?” one Google employee asked.

The path forward

It’s tough to figure out what’s next for Big Tech companies, since their scale makes it difficult to draw historical comparisons. Do they become Microsoft and go into something like cloud computing? Or do they fade from glory like Xerox or RCA, companies that made some of the biggest technological innovations of their time but failed to shepherd that innovative spirit into the next era?

To stay on the cutting edge, tech giants are leaning into their own visions of the future. Meta is going all in on the metaverse. Google is focusing its efforts on AI, even calling in Google’s founders to help with the mission. And Amazon’s Jassy says he’s doubling down on Amazon’s ethos of “Invent and Simplify,” but he’s also moved the goalposts on what it means to innovate to include more basic improvements.

So far, Wall Street has been receptive to these approaches, but that reception has been muted: Daniel Keum, an associate professor of management at Columbia Business School, called the reaction “not crazy but significant.” Still, Meta, Alphabet, and Amazon have a long way to go, with their stock prices roughly 50 percent down from their peak in 2021.

The experts Recode spoke to offered a variety of suggestions for how these companies could solve their problems, though many of those ideas seem abstract and hard to actually accomplish. Yoffie, for example, said that these tech giants should focus on “reinvigorating small teams that have the flexibility to do creative and new innovations.” But that would require allowing more autonomy within these giant, bureaucratic institutions, not to mention more funding.

“You can help them get back to growth, if and only if they are able to maintain a level of innovation that would enable them to grow new businesses and to expand,” he said. Deciding where to put that money while making necessary cuts comes down to good leadership — something not easily defined.

The advice from Pascarella, the Cornell lecturer, is more quotidian. He says it’s important for companies to “stay true to core products and successes and to not relinquish market position” — something it seems they’re already doing.

Workers at an Amazon fulfillment center on Cyber Monday 2021. | Michael Nagle/Bloomberg via Getty Images

University of Washington’s O’Mara emphasized the need for visionary leadership at these companies. “That isn’t necessarily being like, ‘We’re gonna go back to startup days,’” she said. “It’s more executive leadership that is providing a clear, exciting vision that is mobilizing and motivating people.”

Keum offered a slightly different perspective. He said that regulatory headwinds and slowing growth mean that these companies should invest in new startups — but not acquire them in their early stages — with the hope that they might lead to big growth. Microsoft’s latest investment in ChatGPT is a good example of how this could work for tech giants, he said.

That’s not exactly the same thing as Meta, Alphabet, and Amazon trying to be more like startups, of course. It might be impossible for these tech companies, which are now massive corporations, to reignite that spirit, according to Bock, the former Google HR head.

“Even with free food, even with the beanbags and lava lamps, we still felt like things could fall apart at any minute,” said Bock, who started at the company in 2006. That existential crisis, and the drive that comes with it, just doesn’t exist anymore, as the company rakes in huge profits despite the latest downturn.

In Bock’s words: “It’s hard to recreate that fear now.”

Jason Del Rey contributed reporting.

Clarification, January 31, 7:15 pm ET: This story has been updated to include Amazon’s confirmation that, despite massive layoffs in the Alexa division, it is continuing to invest in Alexa development.

Why Teslas keep catching on fire

27 January 2023 at 12:30
Glenn Harvey for Vox

EVs catch fire far less often than gas-powered cars, but firefighters still need to adapt.

When Thayer Smith, a firefighter in Austin, Texas, received the call that a Tesla was on fire, he knew that he’d need to bring backup.

It was in the early morning hours of August 12, 2021, and a driver had slammed a Model X into a traffic light on a quiet residential street in Austin before crashing into a gas pump at a nearby Shell station. The driver, a teenager who was later arrested for driving while intoxicated, managed to escape the car, but the Tesla burst into flames. As emergency responders battled the fire in the dark of night, bursts of sparks shot out of the totaled car, sending plumes of smoke up into the sky. It took tens of thousands of gallons of water, multiple fire engines, and more than 45 minutes to finally extinguish the blaze.

“People have probably seen vehicles burning on the side of the road at one point or another,” Smith, the division chief at the Austin Fire Department, recalled. “Just imagine that magnified a couple times because of all the fuel load from the battery pack itself. The fact that it won’t go out immediately just makes it a little more spectacular to watch.”

Like other Tesla fires, the fiery scene in Austin can be tied to the Model X’s high-voltage battery. The electric vehicle ignited after sliding across the base of the traffic pole the driver had knocked down, which ruptured the battery on the bottom of the car. The impact likely damaged one or several of the tiny cells that make up the car’s battery, triggering a chain of chemical reactions that continued to light new flames. Though firefighters were able to put out the fire at the gas station, what remained of the car — little more than a burnt metal frame — reignited at a junkyard just a few hours later.

The Austin crash led to a lot of headlines, but EV fires are relatively rare. Smith said his department has seen just a handful of them. While the US government doesn’t specifically track the number of EV fires, Tesla’s reported fire rate is far lower than the rate for highway vehicle fires overall, the National Fire Protection Association (NFPA) told Vox. The overwhelming majority of car fires involve traditional internal combustion vehicles. (This makes sense, in part because these vehicles carry highly flammable liquids like gasoline in their tanks, and, as their name implies, their engines work by igniting that fuel.)

Still, people have started associating EVs with dramatic fires for a few reasons. Videos of EV fires like the one in Austin tend to go viral, often attracting comments that condemn President Joe Biden and the electrification movement. At the same time, misleading posts about EVs spontaneously exploding, or starting fires that can’t be put out with water, have helped promote the narrative that electric vehicles are far less safe than conventional cars. The research doesn’t bear this out. Two recent Highway Loss Data Institute reports found that EVs posed no additional risk for non-crash fires, and the NFPA told Vox that from a fire safety perspective, EVs are no more dangerous than internal combustion cars.

This narrative has another nefarious side effect: It stands to distract from a more complicated EV fire problem. Although they’re relatively rare, electric car fires present a new technical and safety challenge for fire departments. These fires burn at much higher temperatures and require a lot more water to fight than conventional car fires. There also isn’t an established consensus on the best firefighting strategies for EVs, experts told Vox. Instead, there’s a hodgepodge of guidance shared among fire departments, associations that advise firefighters, and automakers. As many as half of the 1.2 million firefighters in the US may not currently be trained to combat EV fires, according to the NFPA.

“The Fire Service has had 100 years to train and to understand how to deal with internal combustion engine fires,” remarked Andrew Klock of the NFPA, which offers EV classes for firefighters. “With electric vehicles, they don’t have as much training and knowledge. They really need to be trained.”

The stakes are incredibly high. If the White House has its way, electric vehicles will go mainstream over the coming decade. An executive order signed by President Biden calls for 50 percent of new car sales to be electric by 2030, and the administration is pouring billions into building EV infrastructure and battery factories across the country on the assumption that people will buy these cars. EV fires — and misinformation about them — could stand in the way of that goal.

How an EV fire starts

An electric vehicle battery pack is made up of thousands of smaller lithium-ion cells. A single cell might look like a pouch or cylinder, and is filled with the chemical components that enable the battery to store energy: an anode, a cathode, and a liquid electrolyte. The cells are assembled into a battery pack that’s encased in extremely strong material, like titanium, and that battery pack is normally bolted to the vehicle’s undercarriage. The idea is to make the battery almost impossible to access and, ideally, to protect it during even the nastiest of collisions.

Things don’t always go as planned. When an EV battery is defective or damaged — or just internally fails — one or more lithium-ion cells can short-circuit, heating up the battery. At that point, the tiny membranes that separate the cathode and the anode melt, exposing the highly flammable liquid electrolyte. Once a fire ignites, heat can spread to even more cells, triggering a phenomenon called thermal runaway, firefighters told Vox. When this happens, flames continue igniting throughout the battery, fueling a fire that can last for hours.

The first moments of an EV fire might appear relatively calm, with only smoke emanating from underneath the vehicle. But as thermal runaway takes hold, bright orange flames can quickly engulf an entire car. And because EV batteries are packed with an incredible amount of stored energy, one of these fires can get as hot as nearly 5,000 degrees Fahrenheit. Even when the fire appears to be over, latent heat may still be spreading within the cells of the battery, creating the risk that the vehicle could ignite days later. One firefighter compared the challenge to a trick birthday candle that reignites after being blown out.

Because EV fires are different, EV firefighting presents new problems. Firefighters often try to suppress car fires by, essentially, suffocating them. They might use foam extinguishers filled with substances like carbon dioxide that can draw away oxygen, or use a fire blanket that’s designed to smother flames. But because EV fires aren’t fueled by oxygen from the air, this approach doesn’t work. Instead, firefighters have to use lots and lots of water to cool down the battery. This is particularly complex when EV fires occur far from a hydrant, or if a local fire department only has a limited number of engines. Saltwater, which is extremely efficient at conducting electricity, can make the situation even worse.

Michael O’Brian, a firefighter in Michigan who serves on the stored-energy committee for the International Association of Fire Chiefs, suggested that sometimes the best strategy is to simply monitor the fire and let it burn. As with all car fires, he says his priority isn’t to salvage the vehicle.

“Our fire service in general across the United States [and] in North America is understaffed and overtaxed,” O’Brian explained. “If you’re going to commit a unit to a vehicle fire for two hours, that’s complicating.”

Some EV batteries can make this problem worse. In 2021, the National Highway Traffic Safety Administration and General Motors announced an expanded recall of all the Chevy Bolts the car company had manufactured because tiny components inside some of the Bolt batteries’ cells were folded or torn. Chrysler issued a recall in 2022 after an internal investigation found that the recalled vehicles had been involved in a dozen fires. Chrysler has yet to reveal the root cause of its battery issue and told Vox it’s still investigating. The company’s temporary solution was a software update that uses the car’s internal sensors to monitor whether the battery might be at risk of igniting.

Tesla’s vehicles have their own set of problems. The cars have retractable exterior door handles that extend only electronically, and only when the car has power. An emergency response guide for the 2016 Model S says that if exterior door handles aren’t working, there’s a button on the inside of the vehicle that drivers can use to open the car manually. Yet some allege that this design makes things more difficult for emergency responders dealing with a Tesla fire. A lawsuit filed by the family of Omar Awan, a Florida doctor who died in 2019 after his Model S crashed and burst into flames, said that a police officer who arrived on the scene couldn’t open the doors from the outside.

Similarly, in a YouTube video that captured a recent Tesla battery fire in Vancouver, an owner recounts having to smash open the car’s windows because the electronics stopped working and the doors wouldn’t open. “I could feel it in my lungs, man,” he says on the recording. Tesla has also faced several other lawsuits alleging that its battery systems are dangerous. The company, which does not have a PR department, did not respond to a request for comment.

Experts Vox spoke to, including firefighters as well as fire safety officials, say that while Teslas are the most common electric cars on the road right now, EV firefighting goes far beyond any one carmaker. Perhaps the biggest challenge of all is that even as EVs go mainstream, EV fires aren’t being studied as much as experts and government officials say they should be. “The unfortunate part is that we’re not really moving this as quickly as we should and updating it,” Lori Moore-Merrell, the US fire administrator at the Federal Emergency Management Agency (FEMA), told Vox.

The national fire incident tracking system currently used by FEMA was created in 1976 and last updated in 2002, so it doesn’t specifically track electric vehicle fires. The agency does plan to update the system with a new cloud platform, but FEMA said it will only start building the technology later this spring and will then transition from the legacy system sometime in the late fall.

Firefighting in the electric era

Amid a barrage of news reports about the Model X fire in Austin in 2021, Tesla reached out to the city’s fire department. Michael McConnell, an emergency response technical lead at Tesla, first spoke with Smith, the division chief, on the phone and later sent him an email, which Vox obtained through a public records request, with advice on how the fire department might approach the same situation in the future.

“First of all, let’s debunk the myth of getting electrocuted. Lots of things have to go wrong in order for that to happen,” Smith said. “If the battery pack has not been compromised, then just leave it alone.”

In the long, wide-ranging message, McConnell also explained what assistance Tesla could and could not provide. He offered online training sessions but could not arrange in-person training because, McConnell explained, he had “just too many requests.” A diagram for the Model X implied there was magnesium in a part of the car that did not, in fact, contain magnesium. There was no extrication video guide for the company’s Model Y car (extrication is the firefighter term for removing someone from a totaled vehicle). It would be difficult to get a training vehicle for the Austin firefighters to practice with, McConnell added, since Tesla is a “build to order manufacturer.” Most of Tesla’s scrap vehicles are recycled at the company’s Fremont plant, he said, though a car could become available if one of Tesla’s engineering or fleet vehicles crashed.

McConnell’s long email reflects the current approach to fighting EV fires and the fact that fire departments across the country are still learning best practices. Even now, there isn’t consensus on the best approach. Some firefighters have considered using cranes to lift flaming EVs into giant tanks of water, although some automakers discourage submerging entire vehicles. Rosenbauer, a major fire engine and firefighting equipment manufacturer, has designed a new nozzle that pierces through the battery casing and squirts water directly onto the damaged cells, despite some official automaker guides that say firefighters shouldn’t try rupturing the battery. Another factor to consider, added Alfie Green, the chief of training at the Detroit Fire Department, is that new car models are released every year, and each model has its own guidance on how to disconnect its battery.

While some standards have been released, others are still being developed, and fire departments are still catching up with National Transportation Safety Board recommendations. There’s also the matter of just getting the vast number of firefighters up to speed on EVs. O’Brian, the fire chief from Michigan, told Vox that the federal government needs to take a much more active role in funding research and helping buy EVs that fire departments can practice on.

Another complication is that EV fires present different risks in different places. The New York City Fire Department (FDNY) hasn’t had to fight any electric car fires yet, but it is facing e-scooter and e-bike fires, which are on track to double compared to last year and disproportionately endanger delivery workers in the city. Batteries that lack safety certifications or are charged improperly are more likely to ignite, explains John Esposito, the FDNY’s chief of operations. In November, 43 people were injured in a Manhattan building fire that the department ultimately linked to a battery-powered micromobility device — possibly a scooter — that had been kept inside an apartment.

Small towns face unique hurdles. In Irmo, South Carolina, which is home to fewer than 12,000 people, there’s concern about getting the right equipment to deal with EV fires. While there haven’t been any high-voltage battery fires yet, Sloane Valentino, the assistant chief of Irmo’s fire department, told Vox he’s not sure whether the town has enough engines to fight a Tesla fire while also responding to other fires in the area.

“We don’t have the capacity to deal with 30,000 gallons worth of toxic runoff. Some of it’s going to turn to steam,” Valentino told Vox. “We’re kind of back to, ‘Let it burn.’ When you see the big, violent flames shooting out of the car, just kind of protect what you can — try to cool the roadway — but let the car burn.”

Engineering a safer future

While internal combustion vehicles have been around for over a century, EVs are still relatively new, which means they could become even safer as more money and research pour into the technology. Remember the melting separator in the battery that creates thermal runaway? General Motors is studying how its battery separator could contribute to improved battery safety. The Department of Energy is working on technology that could incorporate flame retardants directly into the batteries’ design. Engineers are also investigating new battery chemistries, like less-flammable electrolytes. Though research is still early, solid-state batteries, which would replace a liquid electrolyte with a solid that’s far less likely to ignite, also show promise.

“Batteries are hopefully going to be getting better over time,” said Michael Brooks, from the Center for Auto Safety. New regulation could push battery safety even further, he added.

In the meantime, fire departments are working on adjusting to this new category of fire — just another reminder that the rise of electric vehicles involves far more than simply replacing gas tanks with batteries. And firefighters will be the ones driving some of these new EVs. In May, the Los Angeles Fire Department debuted the first electric fire truck to hit the road in the US. The bright red engine is made by Rosenbauer, and it comes with a front touchscreen, a remote control tablet, two onboard batteries, and a backup diesel range extender. Other departments are now waiting for their own EV fire trucks to arrive.

Meanwhile, back at the Austin Fire Department, Smith says he has encountered at least one EV fire since the Model X accident a year and a half ago. That one didn’t involve the battery, so it was like fighting any other car fire. But in the months following the 2021 crash, the fire department did go ahead and jury-rig a new firefighting nozzle to deal specifically with EV fires. The department hasn’t heard anything more from Tesla.

Rebecca Heilweil is a reporter at Vox covering emerging technology, artificial intelligence, and the supply chain.

Are we too worried about misinformation?

16 January 2023 at 13:30
A graphic representation of a newspaper page with the words true and false acting as headlines.
Getty Images/iStockphoto

“Resist trying to make things better”: A conversation with internet security expert Alex Stamos.

I’m old enough to remember when the internet was going to be great news for everyone. Things have gotten more complex since then: We all still agree that there are lots of good things we can get from a broadband connection. But we’re also likely to blame the internet — and specifically the big tech companies that dominate it — for all kinds of problems.

And that blame-casting gets intense in the wake of major, calamitous news events, like the spectacle of the January 6 riot or its rerun in Brazil this month, both of which were seeded and organized, at least in part, on platforms like Twitter, Facebook, and Telegram. But how much culpability and power should we really assign to tech?

I think about this question all the time but am more interested in what people who actually study it think. So I called up Alex Stamos, who does this for a living: Stamos is the former head of security at Facebook who now heads up the Stanford Internet Observatory, which does deep dives into the ways people abuse the internet.

The last time I talked to Stamos, in 2019, we focused on the perils of political ads on platforms and the tricky calculus of regulating and restraining those ads. This time, we went broader, but also more nuanced: On the one hand, Stamos argues, we have overestimated the power that the likes of Russian hackers have to, say, influence elections in the US. On the other hand, he says, we’re likely overlooking the power state actors have to influence our opinions on stuff we don’t know much about.

You can hear our entire conversation on the Recode Media podcast. The following are edited excerpts from our chat.

Peter Kafka

I want to ask you about two very different but related stories in the news: Last Sunday, people stormed government buildings in Brazil in what looked like their version of the January 6 riot. And there was an immediate discussion about what role internet platforms like Twitter and Telegram played in that incident. The next day, there was a study published in Nature that looked at the effect of Russian interference on the 2016 election, specifically on Twitter, which concluded that all the misinformation and disinformation the Russians tried to sow had essentially no impact on that election or on anyone’s views or actions. So are we collectively overestimating or underestimating the impact of misinformation and disinformation on the internet?

Alex Stamos

I think what has happened is there was a massive overestimation of the capability of mis- and disinformation to change people’s minds — of its actual persuasive power. That doesn’t mean it’s not a problem, but we have to reframe how we look at it — as less of something that is done to us and more of a supply and demand problem. We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions, that reinforces the things they want to believe about themselves and about others. And in doing so, they can participate in their own radicalization. They can participate in fooling themselves, but that is not something that’s necessarily being done to them.

Peter Kafka

But now we have a playbook for whenever something awful happens, whether it’s January 6 or what we saw in Brazil or things like the Christchurch shooting in New Zealand: We ask, “What role did the internet play in this?” And in the case of January 6 and in Brazil, it seems pretty evident that the people who were organizing those events were using internet platforms to actually put that stuff together. And before that, they were seeding the ground for this disaffection and promulgating the idea that elections were stolen. So can we hold both things in our head at the same time — that we’ve overestimated the effect of Russians reinforcing our filter bubbles, even as state and non-state actors really are using the internet to make bad things happen?

Alex Stamos

I think so. What’s going on in Brazil is a lot like January 6 in that the interaction of platforms with what’s happening there is that you have kind of the broad disaffection of people who are angry about the election, which is really being driven by political actors. So for all of these things, almost all of it we’re doing to ourselves. The Brazilians are doing [it] to themselves. We have political actors who don’t really believe in democracy anymore, who believe that they can’t actually lose elections. And yes, they are using platforms to get around the traditional media and communicate with people directly. But it’s not foreign interference. And especially in the United States, direct communication with your political supporters via these platforms is First Amendment-protected.

Separately from that, in a much smaller timescale, you have the actual kind of organizational stuff that’s going on. On January 6, we have all this evidence coming out from all these people who have been arrested and their phones have been grabbed. And so you can see Telegram chats, WhatsApp chats, iMessage chats, Signal, all of these real-time communications. You see the same thing in Brazil.

And for that, I think the discussion is complicated because that is where you end up with a straight trade-off on privacy — that the fact that people can now create groups where they can privately communicate, where nobody can monitor that communication, means that they have the ability to put together what are effectively conspiracies to try to overthrow elections.

Peter Kafka

The throughline here is that after one of these events happens, we collectively say, “Hey, Twitter or Facebook or maybe Apple, you let this happen, what are you going to do to prevent it from happening again?” And sometimes the platforms say, “Well, this wasn’t our fault.” Mark Zuckerberg famously said that idea was crazy after the 2016 election.

Alex Stamos

And then [former Facebook COO Sheryl Sandberg] did that again, after January 6.

“Resist trying to make things better”

Peter Kafka

And then you see the platforms do whack-a-mole to solve the last problem.

I’m going to further complicate it because I wanted to bring the pandemic into this — where at the beginning, we asked the platforms, “What are you going to do to help make sure that people get good information about how to handle this novel disease?” And they said, “We’re not going to make these decisions. We’re not epidemiologists. We’re going to follow the advice of the CDC and governments around the world.” And in some cases, that information was contradictory or wrong, and they’ve had to backtrack. And now we’re seeing some of that play out with the release of the Twitter Files, where people are saying, “I can’t believe the government asked Twitter to take down so-and-so’s tweet or account because they were telling people to go use ivermectin.”

I think the most generous way of viewing the platforms in that case — which is a view I happen to agree with — is that they were trying to do the right thing. But they’re not really built to handle a pandemic, or to sort good information from bad on the internet. And there are a lot of folks who believe — I think quite sincerely — that the platforms really shouldn’t have any role moderating this at all. That if people want to say, “go ahead and try this horse dewormer, what’s the worst that could happen?” they should be allowed to do it.

So you have this whole stew of stuff where it’s unclear what role the government should have in working with the platforms, what role the platforms should have at all. So should platforms be involved in trying to stop mis- or disinformation? Or should we just say, “this is like climate change and it’s a fact of life and we’re all going to have to sort of adapt to this reality”?

Alex Stamos

The fundamental problem is that there’s a disagreement inside people’s heads — people are inconsistent about what responsibility they believe information intermediaries should have for making society better. People generally believe that if something is against their side, the platforms have a huge responsibility. And if something is on their side, [the platforms] should have no responsibility. It’s extremely rare to find people who are consistent in this.

As a society, we have gone through these information revolutions — the creation of the printing press created hundreds of years of religious war in Europe. Nobody’s going to say we should not have invented the printing press. But we also have to recognize that allowing people to print books created lots of conflict.

I think that the responsibility of platforms is to try to not make things worse actively — but also to resist trying to make things better. If that makes sense.

Peter Kafka

No. What does “resist trying to make things better” mean?

Alex Stamos

I think the legitimate complaint behind a bunch of the Twitter Files is that Twitter was trying too hard to make American society and world society better, to make humans better. That what Twitter and Facebook and YouTube and other companies should focus on is, “are we building products that are specifically making some of these problems worse?” That the focus should be on the active decisions they make, not on the passive carrying of other people’s speech. And so if you’re Facebook, your responsibility is — if somebody is into QAnon, you do not recommend to them, “Oh, you might want to also storm the Capitol. Here’s a recommended group or here’s a recommended event where people are storming the Capitol.”

That is an active decision by Facebook — to make a recommendation to somebody to do something. That is very different than going and hunting down every closed group where people are talking about ivermectin and other kinds of folk cures incorrectly. That if people are wrong, going and trying to make them better by hunting them down and hunting down their speech and then changing it or pushing information on them is the kind of impulse that probably makes things worse. I think that is a hard balance to get to.

Where I try to come down on this is: Be careful about your recommendation algorithms, your ranking algorithms, about product features that make things intentionally worse. But also draw the line at going out and trying to make things better.

The great example that everyone is spun up about is the Hunter Biden laptop story. Twitter and Facebook, in doing anything about that, I think overstepped, because whether the New York Post does not have journalistic ethics or whether the New York Post is being used as part of a hacking leak campaign is the New York Post’s problem. It is not Facebook’s or Twitter’s problem.

“The reality is that we have to have these kinds of trade-offs”

Peter Kafka

Something that people used to say in tech out loud, prior to 2016, was that when you make a new thing in the world, ideally you’re trying to make it so it’s good. It’s to the benefit of the world. But there are going to be trade-offs, pros and cons. You make cars, and cars do lots of great things, and we need them — and they also cause lots of deaths. And we live with that trade-off and we try to make cars safer. But we live with the idea that there’s going to be downsides to this stuff. Are you comfortable with that framework?

Alex Stamos

It’s not whether I’m comfortable or not. That’s just the reality. With any technological innovation, you’re going to have some kind of balancing act. The problem is, our political discussion of these things never takes those balances into account. If you are super into privacy, then you have to also recognize that when you provide people private communication, some subset of people will use it in ways that you disagree with, in ways that are illegal, and in some cases in ways that are extremely harmful. The reality is that we have to have these kinds of trade-offs.

These trade-offs have been obvious in other areas of public policy: You lower taxes, you have less revenue. You have to spend less.

Those are the kinds of trade-offs that in the tech policy world, people don’t understand as well. And certainly policymakers don’t understand as well.

Peter Kafka

Are there practical things that government can impose in the US and other places?

Alex Stamos

The government in the United States is very restricted by the First Amendment [from] pushing the platforms to change speech. Europe is where the rubber’s really hitting the road. The Digital Services Act creates a bunch of new responsibilities for platforms. It’s not incredibly specific on this area, but that is where, from a democratic perspective, there will be the most conflict over responsibility. And then in Brazil and India and other democracies that are backsliding toward authoritarianism, you see much more aggressive censorship of political enemies. That is going to continue to be a real problem around the world.

Peter Kafka

Over the years, the big platforms built pretty significant apparatuses to try to moderate themselves. You were part of that work at Facebook. And we now seem to be going through a real-time experiment at Twitter, where Elon Musk has said that, ideologically, he doesn’t think Twitter should be moderating anything beyond actual criminal activity. And beyond that, it costs a lot of money to employ these people and Twitter can’t afford it, so he’s getting rid of basically everyone who was involved in disinformation and moderation. What effect do you imagine that will have?

Alex Stamos

It is open season. If you are the Russians, if you’re Iran, if you’re the People’s Republic of China, if you are a contractor working for the US Department of Defense, it is open season on Twitter. Twitter’s absolutely your best target.

Again, the quantitative evidence is that we don’t have a lot of great examples where people have made massive changes to public beliefs [because of disinformation]. I do believe there are some exceptions, though, where this is going to be really impactful on Twitter. One is on areas of discussion that are “thinly traded.”

The battle between Hillary Clinton and Donald Trump was the most discussed topic on the entire planet Earth in 2016. So whatever [the Russians] did with ads and content was nothing, absolutely nothing, compared to the amount of content that was on social media about the election. It’s just a tiny, tiny, tiny drop in the ocean. One article about Donald Trump is not going to change your mind about Donald Trump. But one article about Saudi Arabia’s war [against Yemen] might be the only thing you consume on it.

The other area where I think it’s going to be really effective is in attacking individuals and trying to harass individuals. This is what we’ve seen a lot out of China. Especially if you’re a Chinese national and you leave China and you’re critical of the Chinese government, there will be massive campaigns lying about you. And I think that is what’s going to happen on Twitter — if you disagree, if you take a certain political position, you’re going to end up with hundreds or thousands of people saying you should be arrested, that you’re scum, that you should die. They’ll do things like send photos of your family without any context. They’ll do it over and over again. And this is the kind of harassment we’ve seen out of QAnon and such. And I think that Twitter is going to continue down that direction — if you take a certain political position, massive troll farms have the ability to try to drive you offline.

“Gamergate every single day”

Peter Kafka

Every time I see a story pointing out that such-and-such disinformation exists on YouTube or Twitter, I think that you could write these stories in perpetuity. Twitter or YouTube or Facebook may crack down on a particular issue, but it’s never going to get out of this cycle. And I wonder if our efforts aren’t misplaced here and that we shouldn’t be spending so much time trying to point out this thing is wrong on the internet and instead doing something else. But I don’t know what the other thing is. I don’t know what we should be doing. What should we be thinking about?

Alex Stamos

I’d like to see more stories about the specific attacks against individuals. I think we’re moving into a world where effectively it is Gamergate every single day — that there are politically motivated actors who feel like it is their job to try to make people feel horrible about themselves, to drive them off the internet, to suppress their speech. And so that is less about broad persuasion and more about the use of the internet as a pitched battlefield to personally destroy people you disagree with. And so I’d like to see more discussion and profiles of the people who are under those kinds of attacks. We’re seeing this right now. [Former FDA head] Scott Gottlieb, who is on the Pfizer board, is showing up in the [Twitter Files] and he’s getting dozens and dozens of death threats.

Peter Kafka

What can someone listening to this conversation do about any of this? They’re concerned about the state of the internet, the state of the world. They don’t run anything. They don’t run Facebook. They’re not in government. Beyond checking on their own personal privacy to make sure their accounts haven’t been hacked, what can and should someone do?

Alex Stamos

A key thing everybody needs to do is to be careful with their own social media use. I have made the mistake of retweeting the thing that tickled my fancy, that fit my preconceived notions and then turned out not to be true. So I think we all have an individual responsibility — if you see something amazing or radical that makes you feel something strongly, that you ask yourself, “Is this actually true?”

And then the hard part is, if you see members of your family doing that, having a hard conversation with them about it. Because there’s good social science evidence that a lot of this is a boomer problem. Both on the left and the right, a lot of this stuff is being spread by folks who are our parents’ generation.

Peter Kafka

I wish I could say it was just a boomer problem. But I’ve got a teen and a pre-teen, and I don’t think they’re necessarily more savvy about what they’re consuming on the internet than their grandparents.

Alex Stamos

Interesting.

Peter Kafka

I’m working on it.
