‘We have to move fast’: US looks to establish rules for artificial intelligence

OpenAI is a research organization that created ChatGPT, an advanced AI tool. Photograph: Taidgh Barron/ZUMA Press Wire/Shutterstock

The commerce department has requested public comment on AI accountability measures to ensure privacy and transparency

Johana Bhuiyan

Tue 11 Apr 2023 15.22 EDT

The US government is taking its first tentative steps toward establishing rules for artificial intelligence tools, as the frenzy over generative AI and chatbots reaches a fever pitch.

The US commerce department on Tuesday announced it is officially requesting public comment on how to create accountability measures for AI, seeking input on how it should advise US policymakers to approach the technology.

“In the same way that financial audits created trust in the accuracy of financial statements for businesses, accountability mechanisms for AI can help assure that an AI system is trustworthy,” said Alan Davidson, the head of the National Telecommunications and Information Administration (NTIA), at a press conference at the University of Pittsburgh.

Davidson said that the NTIA is seeking feedback from the public, including researchers, industry groups, and privacy and digital rights organizations, on the development of audits and assessments of AI tools created by private industry. He also said that the NTIA is looking to establish guardrails that would allow the government to determine whether AI systems perform the way companies claim they do, whether they are safe and effective, whether they have discriminatory outcomes or “reflect unacceptable levels of bias”, whether they spread or perpetuate misinformation, and whether they respect individuals’ privacy.

“We have to move fast because these AI technologies are moving very fast in some ways,” Davidson said. “We’ve had the luxury of time with some of those other technologies … this feels much more urgent.”

The Biden administration has previously introduced a “guide” to the development of AI systems in the form of a voluntary “bill of rights”, which entails five principles that companies should consider for their products. Those include data privacy, protections against algorithmic discrimination, and transparency around when and how an automated system is being used.

The National Institute of Standards and Technology has also published an AI risk management framework, voluntary guardrails that companies can use to attempt to limit the risk of harm to the public.

In addition, Davidson said, many federal agencies are looking at how rules already on the books may be applied to AI.

And US lawmakers introduced more than 100 AI-related bills in 2021, he noted. “That’s a huge difference from the early days of say, social media, or cloud computing or even the internet when people really were not paying attention,” Davidson said.

That said, the federal government has historically been slow to respond to rapidly advancing technologies with national regulations, particularly in comparison with European countries. Tech companies in the US, for instance, are able to collect and share user data relatively free from federal restrictions. That has enabled data brokers, companies that buy and sell user data, to thrive, and has made it harder for consumers to keep the private information they share with tech firms out of the hands of other third parties or law enforcement.

So far, chatbots and other AI tools have been developed and released publicly largely unfettered by any federal rule or regulatory framework. This has enabled the rapid adoption of AI tools like ChatGPT by companies across industries, in spite of concerns over privacy, misinformation and a lack of transparency about how the chatbots have been trained.

European regulators have proposed a legal framework that would categorize AI systems by risk: unacceptable risk, high risk, limited risk and minimal risk. Passage of the Artificial Intelligence Act, proposed in 2021, would position the EU as a global leader on regulating AI, but the measure has faced some recent pushback from companies invested in the burgeoning chatbot industry.

Microsoft, for instance, has argued that because chatbots have more than one purpose and are used for low-risk activities, they can’t be easily categorized, even though they can perform, and have performed, activities considered “high-risk”, such as spreading disinformation.

Davidson said that’s why the government needs input from the public to determine what a responsible AI regulatory framework should look like.

“Good guardrails implemented carefully can actually promote innovation,” he said. “They let people know what good innovation looks like, they provide safe spaces to innovate while addressing the very real concerns that we have about harmful consequences.”
