The AI Hype Cycle Is Distracting Companies
AI and machine learning
by Eric Siegel
Illustration by Skizzomat
Summary.
Machine learning has an "AI" problem. With breathtaking new capabilities from generative AI released every several months, and AI hype escalating at an even higher rate, it's high time we differentiate most of today's practical ML projects from those research advances. This begins by correctly naming such projects: Call them "ML," not "AI." Including all ML initiatives under the "AI" umbrella oversells and misleads, contributing to a high failure rate for ML business deployments. For most ML projects, the term "AI" goes entirely too far: it alludes to human-level capabilities. In fact, when you unpack the meaning of "AI," you discover just how overblown a buzzword it is. If it doesn't mean artificial general intelligence, a grandiose goal for technology, then it just doesn't mean anything at all.
You might think that news of "major AI breakthroughs" would do nothing but help machine learning's (ML) adoption. If only. Even before the latest splashes, most notably OpenAI's ChatGPT and other generative AI tools, the rich narrative about an emerging, all-powerful AI was already a growing problem for applied ML. That's because for most ML projects, the buzzword "AI" goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations.
Most practical use cases of ML, designed to improve the efficiencies of existing business operations, innovate in fairly straightforward ways. Don't let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions, which is why it's sometimes also called predictive analytics. This delivers real value, so long as you eschew the false hype that ML is "highly accurate," like a digital crystal ball.
This capability translates into tangible value in an uncomplicated manner. The predictions drive millions of operational decisions. For example, by predicting which customers are most likely to cancel, a company can provide those customers incentives to stick around. And by predicting which credit card transactions are fraudulent, a card processor can disallow them. It's practical ML use cases like those that deliver the greatest impact on existing business operations, and the advanced data science methods that such projects apply boil down to ML and only ML.
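To make that pattern concrete, here is a minimal sketch in Python of a churn model whose predictions drive the retention decision described above. The synthetic data, the scikit-learn model, and the 0.7 probability threshold are illustrative assumptions, not details from this article:

# A minimal sketch: predict churn, then act on the prediction.
# All data, features, and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic customer features, e.g., tenure, support tickets, monthly spend.
X = rng.normal(size=(1000, 3))
# Synthetic labels: 1 = customer canceled (made to depend on the features).
y = ((X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1000)) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# The operational decision: offer an incentive to likely cancelers.
churn_probability = model.predict_proba(X)[:, 1]
offer_incentive = churn_probability > 0.7  # threshold set by the business
print(f"{int(offer_incentive.sum())} of {len(X)} customers flagged for a retention offer")

The point of the sketch is that the model's output is not "intelligence" but a probability that feeds a simple, predefined business action.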
Here's the problem: Most people conceive of ML as "AI." This is a reasonable misunderstanding. But "AI" suffers from an unrelenting, incurable case of vagueness; it is a catch-all term of art that does not consistently refer to any particular method or value proposition. Calling ML tools "AI" oversells what most ML business deployments actually do. In fact, you couldn't overpromise more than you do when you call something "AI." The moniker invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can do.
This exacerbates a significant problem with ML projects: They often lack a keen focus on their value, that is, exactly how ML will render business processes more effective. As a result, most ML projects fail to deliver value. In contrast, ML projects that keep their concrete operational objective front and center stand a good chance of achieving that objective.
What Does AI Actually Mean?
"'AI-powered' is tech's meaningless equivalent of 'all natural.'"
–Devin Coldewey, TechCrunch
AI cannot get away from AGI for two reasons. First, the term "AI" is generally thrown around without clarifying whether we're talking about AGI or narrow AI, a term that essentially means practical, focused ML deployments. Despite the tremendous differences, the boundary between them blurs in common rhetoric and software sales materials.
Second, there's no satisfactory way to define AI besides AGI. Defining "AI" as something other than AGI has become a research challenge unto itself, albeit a quixotic one. If it doesn't mean AGI, it doesn't mean anything; other suggested definitions either fail to qualify as "intelligent" in the ambitious spirit implied by "AI" or fail to establish an objective goal. We face this conundrum whether trying to pinpoint 1) a definition for "AI," 2) the criteria by which a computer would qualify as "intelligent," or 3) a performance benchmark that would certify true AI. These three are one and the same.
The problem is with the word "intelligence" itself. When used to describe a machine, it's relentlessly nebulous. That's bad news if AI is meant to be a legitimate field. Engineering can't pursue an imprecise goal. If you can't define it, you can't build it. To develop an apparatus, you must be able to measure how good it is, how well it performs and how close you are to the goal, so that you know you're making progress and so that you ultimately know when you've succeeded in developing it.
In a vain attempt to fend off this dilemma, the industry continually performs an awkward dance of AI definitions that I call the AI shuffle. AI means computers that do something smart (a circular definition). No, it's intelligence demonstrated by machines (even more circular, if that's possible). Rather, it's a system that employs certain advanced methodologies, such as ML, natural language processing, rule-based systems, speech recognition, computer vision, or other techniques that operate probabilistically (clearly, employing one or more of these methods doesn't automatically qualify a system as intelligent).
But surely a machine would qualify as intelligent if it seemed sufficiently humanlike, if you couldn't distinguish it from a human, say, by interrogating it in a chatroom: the famous Turing Test. But the ability to fool people is an arbitrary, moving target, since human subjects become wiser to the trickery over time. Any given system will only pass the test at most once; fool us twice, shame on humanity. Another reason that passing the Turing Test misses the mark is that there's limited value or utility in doing so. If AI could exist, certainly it's supposed to be useful.
What if we define AI by what it's capable of? For example, we could define AI as software that can perform a task so difficult that it traditionally requires a human, such as driving a car, mastering chess, or recognizing human faces. It turns out that this definition doesn't work either because, once a computer can do something, we tend to trivialize it. After all, computers can manage only mechanical tasks that are well understood and well specified. Once surmounted, the accomplishment suddenly loses its charm and the computer that can do it doesn't seem "intelligent" after all, at least not to the whole-hearted extent intended by the term "AI." Once computers mastered chess, there was little sentiment that we'd "solved" AI.
This paradox, known as The AI Effect, tells us that, if it's possible, it's not intelligent. Suffering from an ever-elusive objective, AI inadvertently equates to "getting computers to do things too difficult for computers to do": artificial impossibility. No destination will satisfy once you arrive; AI categorically defies definition. With due irony, the computer science pioneer Larry Tesler famously suggested that we might as well define AI as "whatever machines haven't done yet."
Ironically, it was ML's measurable success that hyped up AI in the first place. After all, improving measurable performance is supervised machine learning in a nutshell. The feedback from evaluating the system against a benchmark, such as a sample of labeled data, guides its next improvement. By doing so, ML delivers unprecedented value in countless ways. It has earned its title as "the most important general-purpose technology of our era," as Harvard Business Review put it. More than anything else, ML's proven leaps and bounds have fueled AI hype.
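That feedback loop can be sketched in a few lines. This is an illustrative example only: the synthetic data, the candidate models, and the AUC metric are my assumptions, chosen to show how a held-out benchmark makes progress measurable:

# A minimal sketch of supervised learning's measurable feedback loop:
# evaluate candidates against held-out labeled data and keep the best.
# The data, models, and metric are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# The held-out labeled sample serves as the benchmark.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

for depth in (2, 4, 8):  # candidate improvements to compare
    model = DecisionTreeClassifier(max_depth=depth).fit(X_train, y_train)
    score = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"max_depth={depth}: held-out AUC = {score:.3f}")

Unlike "intelligence," the benchmark score is an objective target: each candidate model either improves it or doesn't.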
All in with Artificial General Intelligence
"I predict we will see the third AI Winter within the next five years… When I graduated with my Ph.D. in AI and ML in '91, AI was literally a bad word. No company would consider hiring somebody who was in AI."
–Usama Fayyad, June 23, 2022, speaking at Machine Learning Week
There is one way to overcome this definition dilemma: Go all in and define AI as AGI, software capable of any intellectual task humans can do. If this science fiction-sounding goal were achieved, I submit that there would be a strong argument that it qualified as "intelligent." And it's a measurable goal, at least in principle if not in practicality. For example, its developers could benchmark the system against a set of 1,000,000 tasks, including tens of thousands of complicated email requests you might send to a virtual assistant, various instructions for a warehouse employee you'd just as well issue to a robot, and even brief, one-paragraph overviews for how the machine should, in the role of CEO, run a Fortune 500 company to profitability.
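Such a benchmark is simple to express in code, even though building a system that passes it is anything but. In this hypothetical sketch, the tasks, the graders, and candidate_system are placeholders of my own invention, not anything proposed in this article:

# A hypothetical sketch of an AGI-style benchmark: a fixed suite of tasks,
# each scored by a grader, yielding one objective pass rate. Every name and
# task below is a placeholder; no such benchmark or system actually exists.
from typing import Callable

Task = tuple[str, Callable[[str], bool]]  # (prompt, grader of the response)

def benchmark(system: Callable[[str], str], tasks: list[Task]) -> float:
    """Return the fraction of tasks whose response the grader accepts."""
    return sum(grader(system(prompt)) for prompt, grader in tasks) / len(tasks)

tasks: list[Task] = [
    ("Summarize this email thread and draft a reply.", lambda r: len(r) > 0),
    ("Plan today's warehouse picking route.", lambda r: "route" in r.lower()),
]

def candidate_system(prompt: str) -> str:  # hypothetical system under test
    return "Here is a proposed route: ..."

print(f"Pass rate: {benchmark(candidate_system, tasks):.0%}")

The pass rate is measurable in principle; the insurmountable part is supplying a million such tasks, graders worth trusting, and a system that handles them all, which is exactly how far AGI sits from today's ML.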
AGI may set a clear-cut objective, but it's out of this world: as unwieldy an ambition as there can be. Nobody knows if and when it could be achieved.
Therein lies the problem for typical ML projects. By calling them "AI," we convey that they sit on the same spectrum as AGI, that they're built on technology that is actively inching along in that direction. "AI" haunts ML. It invokes a grandiose narrative and pumps up expectations, selling real technology in unrealistic terms. This confuses decision-makers and dead-ends projects left and right.
It's understandable that so many would want to claim a piece of the AI pie, if it's made of the same ingredients as AGI. The wish fulfillment AGI promises, a kind of ultimate power, is so seductive that it's nearly irresistible.
But there's a better way forward, one that's realistic and that I would argue is already exciting enough: running major operations, the main things we do as organizations, more effectively! Most commercial ML projects aim to do just that. For them to succeed at a higher rate, we've got to come down to earth. If your aim is to deliver operational value, don't buy "AI" and don't sell "AI." Say what you mean and mean what you say. If a technology consists of ML, let's call it that.
Reports of the human mind's looming obsolescence have been greatly exaggerated, which means another era of AI disillusionment is nigh. And, in the long run, we will continue to experience AI winters so long as we continue to hyperbolically apply the term "AI." But if we tone down the "AI" rhetoric, or otherwise differentiate ML from AI, we will properly insulate ML as an industry from the next AI Winter. This includes resisting the temptation to ride hype waves and refraining from passively affirming starry-eyed decision makers who appear to be bowing at the altar of an all-capable AI. Otherwise, the danger is clear and present: When the hype fades, the overselling is debunked, and winter arrives, much of ML's true value proposition will be unnecessarily disposed of along with the myths, like the baby with the bathwater.
This article is a product of the author's work as the Bodily Bicentennial Professor in Analytics at UVA Darden School of Business.
Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series, a frequent keynote speaker, and executive editor of The Machine Learning Times. Eric authored the forthcoming book The AI Playbook: Mastering the Rare Art of Machine Learning Deployment and the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities. He won the Distinguished Faculty award when he was a professor at Columbia University, where he taught the graduate courses in machine learning and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice.