Will AI turn the internet into a mush of fakery?
View in browser | Your newsletter preferences
By Will Knight | 09.07.23
Hello, and welcome to the era of “great AI competition.” This week, two timely reports point to how difficult it may be to prevent the frenzy of interest in generative artificial intelligence from spiraling into an arms race that turns the internet into a mush of fakery.
Generative AI and the Future of Information Warfare
Governments around the world are rushing to embrace the algorithms that breathed some semblance of intelligence into ChatGPT, apparently enthralled by the enormous economic payoff expected from the technology.
Two new reports out this week show that nation-states are also likely rushing to adapt the same technology into weapons of misinformation, in what could become a troubling AI arms race between great powers.
Researchers at RAND, a nonprofit think tank that advises the United States government, point to evidence of a Chinese military researcher with experience in information campaigns publicly discussing how generative AI could aid such work. One research article, from January 2023, suggests using large language models such as a fine-tuned version of Google’s BERT, a precursor to the more powerful and capable language models that power chatbots like ChatGPT.
“There’s no evidence of it being done right now,” says William Marcellino, an AI expert and senior behavioral and social scientist at RAND, who contributed to the report. “Rather, someone saying, ‘Here’s a path forward.’” He and others at RAND are alarmed at the prospect of influence campaigns gaining new scale and power thanks to generative AI. “Coming up with a system to create millions of fake accounts that purport to be Taiwanese, or Americans, or Germans, that are pushing a state narrative: I think that it’s qualitatively and quantitatively different,” Marcellino says.
Online information campaigns, like the one that Russia’s Internet Research Agency waged to undermine the 2016 US election, have been around for years. They have mostly depended on manual labor: human workers toiling at keyboards. But AI algorithms developed in recent years could mass-produce text, imagery, and video designed to deceive or persuade, or even carry out convincing interactions with people on social media platforms. A recent project suggests that launching such a campaign could cost just a few hundred dollars.
Marcellino and his coauthors note that many countries, the US included, are almost certainly exploring the use of generative AI for their own information campaigns. And the wide accessibility of generative AI tools, including numerous open source language models anyone can obtain and modify, lowers the bar for anyone looking to launch one. “A variety of actors could use generative AI for social media manipulation, including technically sophisticated non-state actors,” they write.
A second report issued this week, by another tech-focused think tank, the Special Competitive Studies Project, also warns that generative AI could soon become a way for nations to flex on one another. It urges the US government to invest heavily in generative AI because the technology promises to boost many different industries and provide “new military capabilities, economic prosperity, and cultural influence” for whichever nation masters it first.
Like the RAND report, the SCSP’s analysis draws some gloomy conclusions. It warns that generative AI’s potential is likely to trigger an arms race to adapt the technology for use by militaries or in cyberattacks. If both reports are right, we are headed for an information-space arms race that may prove particularly difficult to contain.
How to avoid the nightmare scenario of an internet overrun with AI bots programmed for information warfare? It requires humans to talk with one another.
The SCSP report recommends that the US “should lead global engagement to promote transparency, foster trust, and encourage collaboration.” The RAND researchers recommend that US and Chinese diplomats discuss generative AI and the risks around the technology. “It may be in all of our interests not to have an internet that’s totally polluted and unbelievable,” Marcellino says. I think that’s something we can all agree on.
Will Knight, Senior Writer (@willknight)
Need to Know
The End of Airbnb in New York
Thousands of Airbnbs and other short-term rentals are expected to disappear from rental platforms as New York City begins enforcing tight restrictions.
Autonomous Driving Goes Into High Gear
Self-driving-car pioneer and Aurora CEO Chris Urmson insists driverless trucks won’t immediately put people out of jobs, even as he moves full speed ahead with his company’s self-driving software.
India’s Elite Tech Schools Are a Golden Ticket With a Dark Side
The Indian Institutes of Technology are a production line for global tech CEOs, but critics say they promote a toxic, discriminatory work culture.
Ozempic and Wegovy Can Also Protect Your Heart
A new study shows that semaglutide reduces heart failure symptoms like fatigue and swelling by bringing down body weight.
For all our future-gazing tech coverage, visit WIRED Business.
GET WIRED
Get WIRED for just $5 (regularly $29.99). That includes subscriber-only content like Steven Levy’s Plaintext column, plus free stickers! Subscribe now.
So, This Happened
BYD, the company that bested Tesla in China, is racing to conquer the rest of the world. (Rest of World)
Just a year after ChatGPT kicked off a frenzy of investment in generative AI, some startups that hoped to ride the wave are struggling. (The Wall Street Journal)
Robin Li, the CEO of Baidu, said at a recent event that 70 different large language models are being developed in China, highlighting a frenzied generative AI boom. (Reuters)
Here’s an interesting look at how US colleges are struggling to make sense of AI’s newfound ability to either help students or do their homework for them. (The New York Times)
The moon is at the center of an all-new space race, but it’s China, not Russia, that the US is racing with this time. (The Wall Street Journal)
Until Next Time
Ilya Sutskever, Sam Altman, Mira Murati, and Greg Brockman, of OpenAI
If you haven’t already, take a moment to read WIRED’s latest cover story, What OpenAI Really Wants, which chronicles the creation of the AI behind the world’s most famous program, ChatGPT.
The article captures the Beatlemania-like frenzy around OpenAI and its founders this year, with CEO Sam Altman racing between meetings with world leaders, business tycoons, and adoring fans. But what also struck me is the contradiction between the company’s stated mission and where things seem to be headed. OpenAI’s leaders are clearly committed to building AI in a way that protects and enhances humans. But one wonders if any single company should have that power, and that responsibility.
See you next week!
Was this newsletter forwarded to you? Sign up here.
Have questions or comments? Reply to this email.
This email was sent to you by WIRED. To ensure delivery to your inbox (not bulk or junk folders), please add our email address, wired@newsletters.wired.com, to your address book.
View our Privacy Policy
Unsubscribe or manage your newsletter preferences
Copyright © Condé Nast 2023. One World Trade Center, New York, NY 10007. All rights reserved.