Will AI turn the internet into a mush of fakery?

View in browser | Your newsletter preferences

By Will Knight | 09.07.23

Hello, and welcome to the era of “great AI competition.” This week, two timely reports point to how difficult it may be to prevent the frenzy of interest in generative artificial intelligence from spiraling into an arms race that turns the internet into a mush of fakery.

Generative AI and the Future of Information Warfare 💻 🧹

Governments around the world are rushing to embrace the algorithms that breathed some semblance of intelligence into ChatGPT, apparently enthralled by the enormous economic payoff expected from the technology.

Two new reports out this week show that nation-states are also likely rushing to adapt the same technology into weapons of misinformation, in what could become a troubling AI arms race between great powers.

Researchers at RAND, a nonprofit think tank that advises the United States government, point to evidence that a Chinese military researcher with experience in information campaigns has publicly discussed how generative AI could aid such work. One research article, from January 2023, suggests using large language models such as a fine-tuned version of Google’s BERT, a precursor to the more powerful and capable models that drive chatbots like ChatGPT.

“There’s no evidence of it being done right now,” says William Marcellino, an AI expert and senior behavioral and social scientist at RAND, who contributed to the report. “Rather someone saying, ‘Here’s a path forward.’” He and others at RAND are alarmed at the prospect of influence campaigns getting new scale and power thanks to generative AI. “Coming up with a system to create millions of fake accounts that purport to be Taiwanese, or Americans, or Germans, that are pushing a state narrative—I think that it’s qualitatively and quantitatively different,” Marcellino says.

Online information campaigns, like the one that Russia’s Internet Research Agency waged to undermine the 2016 US election, have been around for years. They have mostly depended on manual labor—human workers toiling at keyboards. But AI algorithms developed in recent years could potentially mass-produce text, imagery, and video designed to deceive or persuade, or even carry out convincing interactions with people on social media platforms. A recent project suggests that launching such a campaign could cost just a few hundred dollars.

Marcellino and his coauthors note that many countries—the US included—are almost certainly exploring the use of generative AI for their own information campaigns. And the wide accessibility of generative AI tools, including numerous open source language models that anyone can obtain and modify, lowers the bar for launching an information campaign. “A variety of actors could use generative AI for social media manipulation, including technically sophisticated non-state actors,” they write.

A second report issued this week, by another tech-focused think tank, the Special Competitive Studies Project, also warns that generative AI could soon become a way for nations to flex on one another. It urges the US government to invest heavily in generative AI because the technology promises to boost many different industries and provide “new military capabilities, economic prosperity, and cultural influence” for whichever nation masters it first.

Like the RAND report, the SCSP’s analysis draws some gloomy conclusions. It warns that generative AI’s potential is likely to trigger an arms race to adapt the technology for military use or cyberattacks. If both reports are right, we are headed for an information-space arms race that may prove particularly difficult to contain.

How to avoid the nightmare scenario of the internet becoming overrun with AI bots programmed for information warfare? It requires humans to talk with one another.

The SCSP report recommends that the US “should lead global engagement to promote transparency, foster trust, and encourage collaboration.” The RAND researchers recommend that US and Chinese diplomats discuss generative AI and the risks around the technology. “It may be in all of our interests not to have an internet that’s totally polluted and unbelievable,” Marcellino says. I think that’s something we can all agree on.

Will Knight, Senior Writer (@willknight)

Need to Know

The End of Airbnb in New York

Thousands of Airbnbs and other short-term rentals are expected to disappear from rental platforms as New York City begins enforcing tight restrictions.

Autonomous Driving Goes Into High Gear

Self-driving-car pioneer and Aurora CEO Chris Urmson insists driverless trucks won’t immediately put people out of jobs—even as he moves full speed ahead with his company’s self-driving software.

India’s Elite Tech Schools Are a Golden Ticket With a Dark Side

The Indian Institutes of Technology are a production line for global tech CEOs, but critics say they promote a toxic, discriminatory work culture.

Ozempic and Wegovy Can Also Protect Your Heart

A new study shows that semaglutide reduces heart failure symptoms like fatigue and swelling by bringing down body weight.

For all our future-gazing tech coverage, visit WIRED Business.

GET WIRED

Get WIRED for just $5 (regularly $29.99). That includes subscriber-only content like Steven Levy’s Plaintext column, plus free stickers! Subscribe now.

ADVERTISEMENT

So, This Happened

BYD, the company that bested Tesla in China, is racing to conquer the rest of the world. (Rest of World)

Just a year after ChatGPT kicked off a frenzy of investment in generative AI, some startups that hoped to ride the wave are struggling. (The Wall Street Journal)

Robin Li, the CEO of Baidu, said at a recent event that 70 different large language models are being developed in China, highlighting a frenzied generative AI boom. (Reuters)

Here’s an interesting look at how US colleges are struggling to make sense of AI’s newfound ability to either help students or do their homework for them. (The New York Times)

The moon is at the center of an all-new space race, but it’s China, not Russia, that the US is racing with this time. (The Wall Street Journal)

Until Next Time

Ilya Sutskever, Sam Altman, Mira Murati, and Greg Brockman, of OpenAI

If you haven’t already, take a moment to read WIRED’s latest cover story, What OpenAI Really Wants, which chronicles the creation of the AI behind the world’s most famous program, ChatGPT.

The article captures the Beatlemania-like frenzy around OpenAI and its founders this year, with CEO Sam Altman racing between meetings with world leaders, business tycoons, and adoring fans. But what also struck me is the contradiction between the company’s stated mission and where things seem to be headed. OpenAI’s leaders are clearly committed to building AI in a way that protects and enhances humans. But one wonders if any single company should have that power—and responsibility.

See you next week!

ADVERTISEMENT

Was this newsletter forwarded to you? Sign up here.

Have questions or comments? Reply to this email.

This email was sent to you by WIRED. To ensure delivery to your inbox (not bulk or junk folders), please add our email address, wired@newsletters.wired.com, to your address book.

View our Privacy Policy

Unsubscribe or manage your newsletter preferences

Copyright © Condé Nast 2023. One World Trade Center, New York, NY 10007. All rights reserved.
