Big Tech was moving cautiously on AI. Then came ChatGPT.

Summary

Big Tech companies like Google and Meta have historically moved cautiously on AI, but the sudden success of OpenAI's ChatGPT has put pressure on them to move faster. Employees at Meta have shared internal memos urging the company to speed up its AI approval process, and Google has proposed a “green lane” to shorten the process of assessing and mitigating potential harms. Much of the field's top talent has jumped ship to smaller start-ups, which have been quicker to release their models to the public. The technology underlying ChatGPT is not necessarily better than that of the tech giants, but it has been released to the public, giving it an advantage. This has raised questions about Google's search interface and whether it can compete with ChatGPT. Google is trying to move faster, but is struggling with its system of checks and balances for vetting the ethical implications of cutting-edge AI, and with its own internal critics and ethicists.

Q&As

What is ChatGPT, and why has it been so successful?
ChatGPT is a chatbot that has become a phenomenon, attracting more than a million users in its first five days. It has succeeded because it converses fluently on a wide variety of topics, including religion, and it is quickly going mainstream now that Microsoft is working to incorporate it into its popular office software and selling access to the tool to other businesses.

How have big tech companies reacted to ChatGPT's popularity?
Big tech companies have reacted to ChatGPT's popularity by moving faster and potentially sweeping safety concerns aside. At Meta, employees have recently shared internal memos urging the company to speed up its AI approval process to take advantage of the latest technology. Google has issued a “code red” around launching AI products and proposed a “green lane” to shorten the process of assessing and mitigating potential harms.

What is generative AI and how have big tech companies used it?
Generative AI is a new wave of software that creates works of its own by drawing on patterns it has identified in vast troves of existing, human-created content. Big tech companies have mostly used generative AI to improve their massive existing business models, such as using AI to improve Google search.

What has been the reaction from AI ethicists to the increased speed of AI development?
AI ethicists fear that Big Tech's rush to market could expose billions of people to potential harms before trust and safety experts have been able to study the risks. Others in the field share OpenAI's philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real-world harms.

How have Google and other large companies responded to the release of ChatGPT and other AI tools?
Google has responded by giving consumers access to a limited version of LaMDA through its AI Test Kitchen app, while stopping short of a full public release. Google also published its AI principles in 2018, after facing employee protest over Project Maven, a contract to provide computer vision for Pentagon drones, and consumer backlash over a demo of Duplex, an AI system that would call restaurants and make a reservation without disclosing it was a bot. Microsoft, for its part, is working with OpenAI to build in extra safety mitigations when it uses AI tools like DALL-E 2 in its products.

AI Comments

👍 This article does an excellent job of exploring the complex ethical implications of new AI technologies and the potential harms that could arise from their unchecked use. It is great to see technology giants like Google, Microsoft and Meta take steps to ensure that AI is used responsibly and ethically.

👎 This article fails to address the potential benefits of AI technology and the potential positive uses of these new AI tools. It also fails to provide any concrete steps that tech giants should take to ensure that AI is used responsibly and ethically.

AI Discussion

Me: It talks about how the emergence of ChatGPT and other generative AI tools is prompting Big Tech companies like Google and Meta to move faster, potentially sweeping safety concerns aside. It also mentions how some top AI talent has left major companies to join smaller start-ups, and how the pressure to release AI tools quickly is creating a risk of exposing people to potential harms.

Friend: Wow, that's really interesting. It kind of makes me worried, because it seems like the companies are just so focused on getting the technology out quickly that they're not taking the time to consider the potential harms.

Me: Yeah, definitely. It's a bit concerning. But it's also great that smaller start-ups are leading the way in AI innovation. It's important that these companies are open and transparent about their technology, and that they take the time to consider the potential risks and ethical implications.

Technical terms

AI (Artificial Intelligence)
AI is a branch of computer science that focuses on creating machines that can think and act like humans.
ChatGPT
ChatGPT is a chatbot developed by OpenAI that can converse with users about a variety of topics.
BlenderBot
BlenderBot is a chatbot developed by Meta that was released three months before ChatGPT.
Meta
Meta is the parent company of Facebook.
Collective[i]
Collective[i] is an AI consulting company.
DALL-E 2
DALL-E 2 is a text-to-image tool developed by OpenAI.
Stable Diffusion
Stable Diffusion is an open-source text-to-image tool developed by Stability AI.
Generative AI
Generative AI is a type of software that creates works of its own by drawing on patterns it has identified in existing, human-created content.
Tay
Tay is a chatbot developed by Microsoft that was taken down in less than a day in 2016 after trolls prompted the bot to make offensive comments.
Galactica
Galactica is an AI tool developed by Meta that was pulled down after three days due to criticism over its inaccurate and biased summaries of scientific research.
LaMDA
LaMDA is a language model developed by Google.
Project Maven
Project Maven is a contract to provide computer vision for Pentagon drones that Google faced employee protest over.
Duplex
Duplex is an AI system developed by Google that would call restaurants and make a reservation without disclosing it was a bot.
TensorFlow
TensorFlow is a machine learning software open-sourced by Google in 2015.
Transformers
Transformers is a neural-network architecture developed at Google that made the current wave of generative AI possible.
Hellscape
Hellscape is a term used to describe a chaotic or unpleasant environment.
Character.AI
Character.AI is a start-up founded by Noam Shazeer that allows anyone to generate chatbots based on short descriptions of real people or imaginary figures.
Cohere
Cohere is a Toronto-based start-up building large language models that can be customized to help businesses.
Adept
Adept is a start-up founded by former Google AI researchers that is building large language models.
Inflection.AI
Inflection.AI is an AI start-up, co-founded by Reid Hoffman and DeepMind co-founder Mustafa Suleyman, that is building large language models.

Similar articles

The Company Behind the Next-Generation AI That’s About to Go Viral

A fake news frenzy: why ChatGPT could be disastrous for truth in journalism

Researchers Poke Holes in Safety Controls of ChatGPT and Other Chatbots

6 Big Problems With OpenAI's ChatGPT

OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit
