
71% of organizations struggling to keep up with new AI risks, report finds

Summary

A new report from MIT Sloan Management Review and Boston Consulting Group finds that 71% of organizations are struggling to keep up with the risks of using AI tools and recommends that they invest in Responsible AI (RAI) programs. The report, which surveyed 1,240 respondents across 87 countries, found that although most organizations rely on third-party AI tools, some do not evaluate the risks of those tools at all. To mitigate risk, organizations should properly evaluate third-party tools, prepare for emerging regulations, engage CEOs in RAI efforts, and move quickly to mature their RAI programs. As AI risks become more apparent, the White House and state and federal officials are considering regulations to track and monitor the use of automated tools in the workplace.

Q&As

What percentage of organizations are struggling to keep up with the risks of using AI tools?
71% of organizations are struggling to keep up with the risks of using AI tools.

What are some potential risks resulting from the use of AI tools?
Potential risks resulting from the use of AI tools include financial loss, reputational damage, loss of customer trust, regulatory penalties, compliance challenges, litigation, and more.

What methods are used by the most well-prepared organizations to evaluate third-party AI tools and mitigate risks?
The most well-prepared organizations use a variety of approaches to evaluate third-party tools and mitigate risk, including vendor pre-certification and audits, internal product-level reviews, and contractual language mandating adherence to RAI principles, AI-related regulatory requirements, and industry standards.

What role does CEO engagement play in RAI efforts?
CEO engagement in RAI conversations appears to be key, as organizations with a CEO who takes a "hands-on role" reported 58% more business benefits than organizations with a less hands-on CEO.

What regulations are being considered to track and monitor the use of automated tools in the workplace?
Regulations being considered to track and monitor the use of automated tools in the workplace include those from the White House to evaluate technology used to "surveil, monitor, evaluate and manage" workers, as well as more than 160 bills or regulations pending in 34 state legislatures related to AI. For the U.S. Equal Employment Opportunity Commission (EEOC), employment discrimination is a key risk to consider, especially when using AI-based platforms involved in hiring and firing decisions.

AI Comments

šŸ‘ This article outlines the risks and potential benefits of using AI tools in the workplace, as well as the importance of investing in responsible AI programs. It provides a comprehensive overview of the current landscape and offers practical advice on how to best manage these risks.

šŸ‘Ž The article fails to address the ethical implications of AI tools and does not provide any concrete solutions for organizations struggling to keep up with the risks. Furthermore, it fails to mention potential solutions for addressing future regulations related to AI.

AI Discussion

Me: It's about how 71% of organizations are struggling to keep up with the risks of using artificial intelligence (AI) tools. Apparently, 55% of all AI-related failures are due to third-party AI tools, which could lead to financial loss, reputational damage, the loss of customer trust, regulatory penalties, compliance challenges, litigation and more.

Friend: Wow that's pretty serious. What do the authors suggest organizations do to address this?

Me: The authors suggest that organizations should invest in and scale Responsible AI (RAI) programs to address the new risks of AI. They also recommend that organizations evaluate third-party tools, prepare for emerging regulations, engage CEOs in RAI efforts and move quickly to mature RAI programs. In addition, the White House has announced plans to evaluate technology used to "surveil, monitor, evaluate and manage" workers, and cities and states are considering legislation that could regulate automated employment decision tools and inform job seekers of their use.

Technical terms

AI (Artificial Intelligence)
AI is a type of computer technology that is designed to simulate human intelligence and behavior. It is used to create systems that can think, learn, and act like humans.
RAI (Responsible AI)
RAI is a set of principles and practices that organizations use to ensure that their AI systems are ethical, responsible, and compliant with applicable laws and regulations.
Generative AI
Generative AI is a type of AI that creates new content, such as text, images, or code, based on patterns learned from existing data.
Pre-Certification
Pre-certification is the process of verifying that a product or service meets certain standards or requirements before it is released to the public.
Audit
An audit is an independent review of an organization's financial records and operations to ensure accuracy and compliance with applicable laws and regulations.
EEOC (Equal Employment Opportunity Commission)
The EEOC is a federal agency that enforces laws prohibiting employment discrimination.

Similar articles

0.9091104 Generative AI risks loom as businesses increase investments

0.8994309 Nearly 75% of small businesses concerned AI development and adoption is outpacing regulation

0.89893013 US workers voice pessimism about AI and employment

0.8980111 Workers at risk of AI radically changing their jobs aren't too worried about it

0.8970226 AI adoption hinges on reskilling, IBM research finds

šŸ—³ļø Do you like the summary? Please join our survey and vote on new features!