The AI Hype Cycle Is Distracting Companies

AI and machine learning

by Eric Siegel

Illustration by Skizzomat


Summary.

Machine learning has an “AI” problem. With new breathtaking capabilities from generative AI released every several months — and AI hype escalating at an even higher rate — it’s high time we differentiate most of today’s practical ML projects from those research advances. This begins by correctly naming such projects: Call them “ML,” not “AI.” Including all ML initiatives under the “AI” umbrella oversells and misleads, contributing to a high failure rate for ML business deployments. For most ML projects, the term “AI” goes entirely too far — it alludes to human-level capabilities. In fact, when you unpack the meaning of “AI,” you discover just how overblown a buzzword it is: If it doesn’t mean artificial general intelligence, a grandiose goal for technology, then it just doesn’t mean anything at all.


You might think that news of “major AI breakthroughs” would do nothing but help machine learning’s (ML) adoption. If only. Even before the latest splashes — most notably OpenAI’s ChatGPT and other generative AI tools — the rich narrative about an emerging, all-powerful AI was already a growing problem for applied ML. That’s because for most ML projects, the buzzword “AI” goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations.

Most practical use cases of ML — designed to improve the efficiencies of existing business operations — innovate in fairly straightforward ways. Don’t let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions, which is why it’s sometimes also called predictive analytics. This means real value, so long as you eschew the false hype that it is “highly accurate,” like a digital crystal ball.

This capability translates into tangible value in an uncomplicated manner. The predictions drive millions of operational decisions. For example, by predicting which customers are most likely to cancel, a company can provide those customers incentives to stick around. And by predicting which credit card transactions are fraudulent, a card processor can disallow them. It’s practical ML use cases like those that deliver the greatest impact on existing business operations, and the advanced data science methods that such projects apply boil down to ML and only ML.
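To ground the churn example in code, here is a minimal sketch in Python using scikit-learn. The feature names, synthetic data, and decision threshold are illustrative assumptions rather than details from the article; the point is only the shape of the pipeline: train on labeled history, predict probabilities, and act on the riskiest customers.

```python
# A minimal sketch of the churn use case described above: learn from labeled
# historical data, then turn predicted probabilities into an operational
# decision (which customers receive a retention incentive). All column names,
# the synthetic data, and the 0.30 threshold are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "monthly_spend": rng.gamma(2.0, 30.0, n),
    "support_tickets": rng.poisson(1.5, n),
    "months_tenure": rng.integers(1, 60, n),
})
# Synthetic label: cancellation is more likely for low-tenure,
# high-support-ticket customers.
logit = 0.4 * df["support_tickets"] - 0.05 * df["months_tenure"]
df["canceled"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X, y = df.drop(columns="canceled"), df["canceled"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# The operational decision: flag the customers most likely to cancel.
churn_prob = model.predict_proba(X_test)[:, 1]
offer_incentive = churn_prob > 0.30  # a business-chosen cutoff
print(f"{offer_incentive.sum()} of {len(X_test)} customers flagged for an offer")
```

The cutoff itself is a business decision, typically set by weighing the cost of an incentive against the expected cost of a lost customer.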

Here’s the problem: Most people conceive of ML as “AI.” This is a reasonable misunderstanding. But “AI” suffers from an unrelenting, incurable case of vagueness — it is a catch-all term of art that does not consistently refer to any particular method or value proposition. Calling ML tools “AI” oversells what most ML business deployments actually do. In fact, you couldn’t overpromise more than you do when you call something “AI.” The moniker invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can do.

This exacerbates a significant problem with ML projects: They often lack a keen focus on their value — exactly how ML will render business processes more effective. As a result, most ML projects fail to deliver value. In contrast, ML projects that keep their concrete operational objective front and center stand a good chance of achieving that objective.

What Does AI Actually Mean?

“‘AI-powered’ is tech’s meaningless equivalent of ‘all natural.’”

–Devin Coldewey, TechCrunch

AI cannot get away from AGI for two reasons. First, the term “AI” is generally thrown around without clarifying whether we’re talking about AGI or narrow AI, a term that essentially means practical, focused ML deployments. Despite the tremendous differences, the boundary between them blurs in common rhetoric and software sales materials.

Second, there’s no satisfactory way to define AI besides AGI. Defining “AI” as something other than AGI has become a research challenge unto itself, albeit a quixotic one. If it doesn’t mean AGI, it doesn’t mean anything — other suggested definitions either fail to qualify as “intelligent” in the ambitious spirit implied by “AI” or fail to establish an objective goal. We face this conundrum whether trying to pinpoint 1) a definition for “AI,” 2) the criteria by which a computer would qualify as “intelligent,” or 3) a performance benchmark that would certify true AI. These three are one and the same.

The problem is with the word “intelligence” itself. When used to describe a machine, it’s relentlessly nebulous. That’s bad news if AI is meant to be a legitimate field. Engineering can’t pursue an imprecise goal. If you can’t define it, you can’t build it. To develop an apparatus, you must be able to measure how good it is — how well it performs and how close you are to the goal — so that you know you’re making progress and so that you ultimately know when you’ve succeeded in developing it.

In a vain attempt to fend off this dilemma, the industry continually performs an awkward dance of AI definitions that I call the AI shuffle. AI means computers that do something smart (a circular definition). No, it’s intelligence demonstrated by machines (even more circular, if that’s possible). Rather, it’s a system that employs certain advanced methodologies, such as ML, natural language processing, rule-based systems, speech recognition, computer vision, or other techniques that operate probabilistically (clearly, employing one or more of these methods doesn’t automatically qualify a system as intelligent).

But surely a machine would qualify as intelligent if it seemed sufficiently humanlike, if you couldn’t distinguish it from a human, say, by interrogating it in a chatroom — the famous Turing Test. Yet the ability to fool people is an arbitrary, moving target, since human subjects become wiser to the trickery over time. Any given system will only pass the test at most once — fool us twice, shame on humanity. Another reason that passing the Turing Test misses the mark is that there’s limited value or utility in doing so. If AI is to exist, surely it’s supposed to be useful.

What if we define AI by what it’s capable of? For example, we could define AI as software that can perform a task so difficult that it traditionally requires a human, such as driving a car, mastering chess, or recognizing human faces. It turns out that this definition doesn’t work either because, once a computer can do something, we tend to trivialize it. After all, computers can manage only mechanical tasks that are well-understood and well-specified. Once surmounted, the accomplishment suddenly loses its charm and the computer that can do it doesn’t seem “intelligent” after all, at least not to the whole-hearted extent intended by the term “AI.” Once computers mastered chess, there was little sentiment that we’d “solved” AI.

This paradox, known as The AI Effect, tells us that, if it’s possible, it’s not intelligent. Suffering from an ever-elusive objective, AI inadvertently equates to “getting computers to do things too difficult for computers to do” — artificial impossibility. No destination will satisfy once you arrive; AI categorically defies definition. With due irony, the computer science pioneer Larry Tesler famously suggested that we might as well define AI as “whatever machines haven’t done yet.”

Ironically, it was ML’s measurable success that hyped up AI in the first place. After all, improving measurable performance is supervised machine learning in a nutshell. The feedback from evaluating the system against a benchmark — such as a sample of labeled data — guides its next improvement. By doing so, ML delivers unprecedented value in countless ways. It has earned its title as “the most important general-purpose technology of our era,” as Harvard Business Review put it. More than anything else, ML’s proven leaps and bounds have fueled AI hype.
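To make that feedback loop concrete, here is a minimal sketch in Python using scikit-learn: hold out a labeled benchmark sample, measure each candidate model against it, and let the measurement guide the next improvement. The dataset, model, and metric are arbitrary illustrative choices, not anything the article prescribes.

```python
# A minimal sketch of supervised learning's benchmark-driven feedback loop:
# performance measured on held-out labeled data guides the next improvement
# (here, a simple hyperparameter sweep). Dataset and metric are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_bench, y_train, y_bench = train_test_split(X, y, random_state=0)

best_auc, best_c = 0.0, None
for c in (0.01, 0.1, 1.0, 10.0):
    model = make_pipeline(StandardScaler(), LogisticRegression(C=c))
    model.fit(X_train, y_train)
    # Evaluate against the benchmark: a held-out sample of labeled data.
    auc = roc_auc_score(y_bench, model.predict_proba(X_bench)[:, 1])
    if auc > best_auc:
        best_auc, best_c = auc, c

print(f"Best C = {best_c}, benchmark AUC = {best_auc:.3f}")
```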

All in with Artificial General Intelligence

“I predict we will see the third AI Winter within the next five years… When I graduated with my Ph.D. in AI and ML in ’91, AI was literally a bad word. No company would consider hiring somebody who was in AI.”

–Usama Fayyad, June 23, 2022, speaking at Machine Learning Week

There is one way to overcome this definition dilemma: Go all in and define AI as AGI, software capable of any intellectual task humans can do. If this science-fiction-sounding goal were achieved, I submit that there would be a strong argument that it qualified as “intelligent.” And it’s a measurable goal, at least in principle if not in practicality. For example, its developers could benchmark the system against a set of 1,000,000 tasks, including tens of thousands of complicated email requests you might send to a virtual assistant, various instructions for a warehouse employee you’d just as well issue to a robot, and even brief, one-paragraph overviews for how the machine should, in the role of CEO, run a Fortune 500 company to profitability.

AGI may set a clear-cut objective, but it’s out of this world — as unwieldy an ambition as there can be. Nobody knows if and when it could be achieved.

Therein lies the problem for typical ML projects. By calling them “AI,” we convey that they sit on the same spectrum as AGI, that they’re built on technology that is actively inching along in that direction. “AI” haunts ML. It invokes a grandiose narrative and pumps up expectations, selling real technology in unrealistic terms. This confuses decision-makers and dead-ends projects left and right.

It’s understandable that so many would want to claim a piece of the AI pie, if it’s made of the same ingredients as AGI. The wish fulfillment AGI promises — a kind of ultimate power — is so seductive that it’s nearly irresistible.

But there’s a better way forward, one that’s realistic and that I would argue is already exciting enough: running major operations — the main things we do as organizations — more effectively! Most commercial ML projects aim to do just that. For them to succeed at a higher rate, we’ve got to come down to earth. If your aim is to deliver operational value, don’t buy “AI” and don’t sell “AI.” Say what you mean and mean what you say. If a technology consists of ML, let’s call it that.

Reports of the human mind’s looming obsolescence have been greatly exaggerated, which means another era of AI disillusionment is nigh. And, in the long run, we will continue to experience AI winters so long as we continue to hyperbolically apply the term “AI.” But if we tone down the “AI” rhetoric — or otherwise differentiate ML from AI — we will properly insulate ML as an industry from the next AI Winter. This includes resisting the temptation to ride hype waves and refraining from passively affirming starry-eyed decision makers who appear to be bowing at the altar of an all-capable AI. Otherwise, the danger is clear and present: When the hype fades, the overselling is debunked, and winter arrives, much of ML’s true value proposition will be unnecessarily disposed of along with the myths, like the baby with the bathwater.

This article is a product of the author’s work as the Bodily Bicentennial Professor in Analytics at UVA Darden School of Business.

Read more on AI and machine learning or related topics: Automation, Technology and analytics, Algorithms, and Analytics and data science

Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series, a frequent keynote speaker, and executive editor of The Machine Learning Times. Eric authored the forthcoming book The AI Playbook: Mastering the Rare Art of Machine Learning Deployment and the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities. He won the Distinguished Faculty award when he was a professor at Columbia University, where he taught the graduate courses in machine learning and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice.

