<p class=cs-document-type>Flagship</p>
# Industry and Society Need AI Clarity<span class=cs-invisible>:</span> <span class=cs-subtitle>The opportunity in today’s confused and divided AI landscape</span>
<p class=cs-byline>David Truog</p>
<p class=cs-dateline>11 Mar 2026 • Updated 19 Apr 2026</p>
<p class=cs-reading-time>11–14 min read</p>
<p class=cs-dek>
Although people’s predictions about AI diverge, we mostly agree on the outcomes we’d prefer. But widespread confusion about AI stands in the way. Companies committed to AI have a business opportunity to differentiate and lead by creating clarity.
</p>
## We’re in a storm of division about AI
Opinions are polarized about how AI will impact economies, organizations, and individuals — ranging from enthusiasm and excitement to deep concern.
### Prominent voices clash
> [!research-figure]
> ![[_fig-1-fei-fei-li.jpg|Portrait of Fei-Fei Li]]
>
> Fei-Fei Li [^1]
Predictions run the gamut among technology-company CEOs and AI research luminaries. They include (see [[The Polarized Predictions About AI]]):
- **Utopians.** Marc Andreessen: humanity will be “able to take on new challenges \[…\], from curing all diseases to achieving interstellar travel” with AI.
- **Dystopians.** Elon Musk: AI poses “a fundamental risk to the existence of human civilization.”
- **Ambivalents.** Sam Altman: “I think the good case is just so unbelievably good that you sound like a really crazy person to start talking about it. \[…\] The bad case \[…\] is, like, lights out for all of us.”
- **Moderates.** Fei-Fei Li: “My disappointment is the hyperbole on both sides.”
### Public opinion is split, too
People outside the industry, too, are divided in their opinions about the impact of AI:
- **About the long term.** Pew Research Center found that 17% of US adults say the impact of AI over the next 20 years will be “positive” whereas 35% say it will be “negative.”[^2] In the middle, 33% say it will be equally both, and 16% are not sure.
- **About the near term.** Expectations are divided about the nearer term, too, though leaning more negative: 50% of US adults say they’re “more concerned than excited” about the increased use of AI in daily life, and only 10% are mainly “excited.”[^3]
- **Around the world.** The split — and even ambivalence — is not just in the US. Among the 25 countries Pew surveyed, Germany and Japan have the largest shares who say they’re “equally concerned and excited” (53% and 55%, respectively). Israelis and South Koreans are the most “excited” (29% and 22%), but plenty of their compatriots disagree: South Korea has the smallest “concerned” cohort of the 25 countries, yet even there, 16% are “concerned.”
### It’s causing hesitation, discord, anxiety — even dread
Why does this division about AI matter? Because it’s causing:
- **Uncertainty within organizations.** Decision-makers ranging from the executive ranks to functional teams debate whether their AI initiatives in progress are too timid or too ambitious.
- **Dissension about regulation.** Leaders — including technology CEOs, venture capitalists, national and local government officials, economists, and ethicists — argue over whether and how to regulate AI.
- **Doubt among investors.** Institutional and individual stockholders are concerned. Will AI-related shares keep rising or is a bubble about to burst? Will stocks in industries that seem threatened by AI keep gradually sinking, or will they recover?
- **Fear from individuals and families.** Seasoned professionals and freshly minted graduates are anxious about AI’s impact on jobs, as are university students. And parents worry about how AI will shape their children’s futures.
## The structure of the storm
To overcome this storm of division, we need to begin by understanding its structure — it has been:
- **Started and sustained** by the fact that AI is already both helping and hurting individuals and organizations.
- **Intensified** by growing recognition that although most organizations have AI efforts underway, few are paying off so far.
- **Supercharged** by widespread confusion about AI — insufficient understanding, incorrect beliefs, or both.
Let’s examine these three dynamics one at a time.
### AI’s mixed impacts started and sustain it
People’s divergent views about our AI future aren’t empty speculation. They’re extrapolations from AI’s observed impacts on the world so far. And those consequences truly are a mixed bag: positive, negative, and sometimes both from a single cause. Here are a few examples spanning all three categories:
- **AI is delivering tangible benefits.** It helps oncologists diagnose patients, businesses forecast demand, and drivers avoid collisions. It helps knowledge workers summarize long documents and draft emails. It helps creative professionals brainstorm ideas, UX designers quickly create prototypes of apps and websites, and software developers write code.
- **AI is also contributing to serious harms.** It has misguided doctors’ decisions by summarizing clinical notes incorrectly. It’s caused companies to miss out on talented job candidates due to historical bias in training data. And it’s helped election manipulators mislead voters with deepfake videos. Some of the problems are avoidable through mitigation measures — but damage has already been done.
- **Some of AI’s effects are double-edged.** The same impact can look positive or negative depending on your vantage point. AI has reduced or eliminated the need for people to do certain types of work. Some employers have welcomed the chance to shrink their staff as a boon to the bottom line. But most people whose jobs have been eliminated see their terminations differently.
### ROI disappointment intensifies it
Among these areas of mixed impact, the one most concerning to business decision-makers is the ROI of AI.
McKinsey found that 88% of the 1,993 respondents to its enterprise AI survey are now using AI in at least one business function, up from 55% just two years prior,[^4] mostly because of a surge in genAI (generative AI) adoption over that time.[^5] And 20% are using it in fully five or more business functions.[^6]
Those energetic investments in AI, however, are now facing headwinds.
> [!research-figure]
> ![[_fig-2-satya-nadella.jpg|Portrait of Satya Nadella]]
>
> Satya Nadella [^7]
- **Some say AI will fuel economic supergrowth.** Many technology leaders, venture capitalists, and others agree that, in the words of Microsoft CEO Satya Nadella at Davos 2026, AI will “bend the productivity curve and bring local surplus and economic growth all around the world.”[^8]
- **But most enterprises are disappointed so far.** AI’s business impact to date is not encouraging overall, according to leading business consultancies and academic analyses. For example, McKinsey found that only 6% of its respondents are what it calls “high performers” — those who report that their organizations have seen “significant” value from using AI and that at least 5% of their earnings can be attributed to AI.[^9] BCG reports that among AI decision-makers at 1,250 firms it surveyed, only 5% are achieving AI value at scale, and “60% of companies are not achieving material value at all.”[^10] A Stanford study observes “cost reductions and revenue increases \[…\] but most commonly at low levels.”[^11] And an MIT study concluded that “despite $30–40 billion in enterprise investment into genAI, \[…\] 95% of organizations are getting zero return.”[^12]
![[_fig-3-mckinsey-and-mit-data.png|This image shows two charts that visually illustrate the findings from McKinsey and MIT mentioned in the body text of this section.]]
### A vortex of confusion supercharges it
Does the fact that so many people have opinions about AI indicate they understand it? No. But that’s natural since, as Daniel Kahneman concluded from decades of research, our beliefs are often based on limited — and inaccurate — information:
> You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you \[…\]. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle.[^13]
In the US, 98% of adults have opinions about AI, and 90% to 100% do in most other countries Pew surveyed.[^14] But lack of understanding — and *mis*understanding — of AI is common in the general population and inside enterprises:
- **The public doesn’t know much about AI.** Among US adults asked how much they’ve heard or read about AI, fully 53% say only “a little” or “nothing at all,” according to Pew.[^15] Worldwide, across the 25 countries Pew surveyed, the median share who say they’ve heard or read “a lot” about AI is only 34%. And even having heard or read “a lot” doesn’t guarantee comprehension, since descriptions of AI are often off the mark in popular sources like mainstream media outlets and social platforms.
- **Executives and other employees lack understanding, too.** Among senior decision-makers at US-based large enterprises, 23% are not even “somewhat familiar” with genAI as of 2025, according to a Wharton study.[^16] A Gartner survey, too, suggests that the C-suite’s comfort level with AI is weak: “very low” or “low” for 39%, and “moderate” for 34%.[^17] Even among CIOs, only 44% are “AI-savvy” according to their own CEOs.[^18] You might think tech companies, in contrast, would have strong AI proficiency — but in a KPMG survey of 230 US technology CEOs, fully 45% indicated their employees’ technical capability and skills are impeding AI roll-outs.[^19]
- **Even when we believe we understand, we overestimate how well.** To gauge AI literacy, it’s common for an enterprise to field a survey to its workforce or for a consultancy to survey business professionals industry-wide. But they mostly rely on respondents rating their expertise themselves — and that’s unreliable. A widely cited meta-analysis concluded that “self-assessments of knowledge are only moderately related to cognitive learning and are strongly related to affective evaluation outcomes” (specifically motivation and satisfaction).[^20] This is especially true of AI because, as another study found, “people feel they understand complex phenomena with far greater precision, coherence, and depth than they really do.”[^21]
### GenAI worsens the confusion
GenAI was familiar to AI professionals by around 2010, but it was not until the 2022 release of ChatGPT that genAI entered the public spotlight, sparking amazement and even wonder — but also confusion and misguided expectations.
What’s stoking bewilderment most now is that many of genAI’s limitations are inherent to the technology, not fixable defects — as experts like Dario Amodei, Andrej Karpathy, and Yann LeCun have pointed out. (Read their reflections about this in [[LLMs Don't Run on Facts or Logic]].)
AI was already hard for most people to understand — genAI made it even harder.
## How to tame the storm
Most stakeholders — businesses, regulators, investors, professionals, and others — have an interest in getting past the uncertainty, dissension, doubt, and fear that the storm is causing.
### From prediction to action
Taming the storm requires a shift in mindset, as a starting point. We need to:
- **Recognize the divergence is about predictions, not preferences.** Given a choice between Andreessen’s prediction that AI will lead to [[The Polarized Predictions About AI#Marc Andreessen|curing all diseases]] and Musk’s that it could mean the [[The Polarized Predictions About AI#Elon Musk|end of civilization]], we all would favor the former — even Musk. Prognosticating a negative outcome doesn’t mean wanting it to come true.
- **Take action instead of taking sides.** “The best way to predict the future is to invent it,” as computer scientist Alan Kay (a pioneer of graphical user interfaces and object-oriented programming) pointed out.[^22] By investing less in pondering predictions about the future and more in taking action in the present, we make the positive outcomes most of us want more likely.
### Clarity creates agency
> [!research-figure]
> ![[_fig-4-tristan-harris.jpg|Portrait of Tristan Harris]]
>
> Tristan Harris [^23]
The obvious question, then, is: which actions can organizations and individuals take to steer us toward those positive outcomes? Without answers, businesses, consumers, and governments lack agency. The result: hesitation, backtracking, flailing, and even decision paralysis in the face of uncertainty.
Answering that question requires clear understanding. As Tristan Harris pointed out in his talk at TED2025 about the rise of AI, “Clarity creates agency.” He then elaborated:
> Your role in this is not to solve the whole problem. But your role in this is to be part of the collective immune system. […] There is no room of adults working secretly to make sure that this turns out OK. We are the adults. We have to be.[^24]
### The business opportunity
Among the “adults” Harris refers to, some have more power than others — and therefore more responsibility and opportunity.
- **Companies leading the AI industry wield the most power over clarity.** Enterprises with AI efforts underway do have an important role to play: increasing clarity about AI among their employees and customers. And governments and academia do, too, in boosting public understanding. But the pragmatic reality is that the greatest resources and influence over driving AI clarity are in the hands of AI technology companies.
- **The AI industry faces a moment of reckoning.** It’s a fork in the road for AI technology providers. By investing in increasing AI clarity, they can steer industry and society toward potentially transformative positive outcomes. If they don’t, public confusion, missteps, and misuses will slow — and possibly even stand in the way of — AI fulfilling its promise.
- **The clarity gap is an opportunity to differentiate and lead.** AI companies that seize the moment will earn the trust and credibility necessary to fuel their next wave of growth.
### Becoming a clarity leader
Most people drive cars just fine without knowing much about what’s under the hood. But you do need to know 1) how the car *behaves* — that braking distance increases with speed and that traction drops on wet roads, for example — and 2) how to *drive* it well in varied conditions.
> [!research-figure]
> ![[_fig-5-dario-amodei.jpg|Portrait of Dario Amodei]]
>
> Dario Amodei [^25]
The same is true of AI. But as Anthropic CEO Dario Amodei has candidly stated:
> People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.[^26]
It’s as if automobile manufacturers were selling cars without understanding much about how engines work.
That recognition is motivating research efforts among some industry leaders into interpretability, alignment, and safety. AI technology companies must intensify those efforts — and invest in communicating to non-technical audiences about them — in order to:
- **Explain what AI can and can’t do — and why.** There’s no need for the public to understand details like the role of attention heads in transformer models. But AI technology companies do need to communicate in plain language the capabilities and limitations of generative models, including what their researchers don’t understand yet and new insights as they emerge.
- **Show how to apply AI effectively and safely.** Although they’re not yet the majority, many organizations and individuals are achieving significant benefits from AI. And data correlating their results with their practices is beginning to reveal which choices make the difference.
## After the storm
Will we see the positive predictions about the potential impact of AI come true — such as Satya Nadella’s about [[The Polarized Predictions About AI#Satya Nadella|economic growth]], Bill Gates’ about [[The Polarized Predictions About AI#Bill Gates|revolutionizing education]], and Demis Hassabis’ about [[The Polarized Predictions About AI#Demis Hassabis|new energy sources]]? Time will tell.
In the meantime, decisions large and small — by stakeholders ranging from governments and corporations to households and individuals — will gradually shape our AI future. If those decisions are rooted in clarity about what AI does and how to apply it well, the positive outcomes most of us would prefer are much more likely to come about.
## Next
To launch into gaining more clarity about AI, see my [[Briefs|Clarity briefs]], which explain key technologies, concepts, and cutting-edge research.
[^1]: [Fei-Fei Li](https://en.wikipedia.org/wiki/Fei-Fei_Li) image source: [ITU Pictures, via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Fei-Fei_Li_at_AI_for_Good_2017.jpg).
[^2]: “[Public and expert predictions for AI’s next 20 years](https://www.pewresearch.org/2025/04/03/public-and-expert-predictions-for-ais-next-20-years/)” (Pew Research Center, 3 April 2025). In addition to the general population, Pew surveyed a smaller cohort of “AI experts” and found their views were more positive but still divided. For Pew’s screening criteria for identifying “AI experts,” see the sidebar “[Who did we define as ‘AI experts’ and how did we identify them?](https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/#who-did-we-define-as-ai-experts-and-how-did-we-identify-them).”
[^3]: “[Concern and excitement about AI](https://www.pewresearch.org/global/2025/10/15/concern-and-excitement-about-ai/)” (Pew Research Center, 15 October 2025).
[^4]: *[The State of AI in 2025: Agents, Innovation, and Transformation](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai),* page 8, exhibit 4, chart 1 (McKinsey & Company, November 2025). Subsequent references: McKinsey 2025.
[^5]: McKinsey 2025, page 4, exhibit 1 shows that reported use of genAI rose from 33% in 2023 to 79% in 2025, while reported use of AI overall rose from 55% to 88% over the same period.
[^6]: McKinsey 2025, page 4, exhibit 1, chart 5.
[^7]: [Satya Nadella](https://en.wikipedia.org/wiki/Satya_Nadella) image source: [Brian Smale and Microsoft, via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Satya_smiling-print.jpg).
[^8]: [Satya Nadella](https://en.wikipedia.org/wiki/Satya_Nadella), quoted in “[North America at Davos 2026: Trump, Carney and a changing world](https://www.weforum.org/stories/2026/01/north-america-at-davos-2026-wef-what-to-know/)” (World Economic Forum, January 2026).
[^9]: McKinsey 2025, page 14: “Respondents who attribute EBIT impact of 5 percent or more to AI use and say their organization has seen ‘significant’ value from AI use — our definition of AI high performers, representing about 6 percent of respondents\[…\].”
[^10]: *[The Widening AI Value Gap](https://media-publications.bcg.com/The-Widening-AI-Value-Gap-Sept-2025.pdf),* page 3 (Boston Consulting Group, September 2025).
[^11]: *[Artificial Intelligence Index Report 2025](https://hai.stanford.edu/ai-index/2025-ai-index-report),* chapter 4, section 4, page 263 (Stanford Institute for Human-Centered Artificial Intelligence, April 2025).
[^12]: *[The GenAI Divide: State of AI in Business 2025](https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf),* page 3 (MIT Media Lab, July 2025).
[^13]: [Daniel Kahneman](https://en.wikipedia.org/wiki/Daniel_Kahneman), *[Thinking, Fast and Slow](https://search.worldcat.org/title/706020998),* chapter 19, “The Illusion of Understanding” (Farrar, Straus and Giroux, 2011).
[^14]: “[How do people around the world feel about the rise of AI in daily life?](https://www.pewresearch.org/global/2025/10/15/concern-and-excitement-about-ai/pg_2025-10-15_ai_2_01/)” (Pew Research Center, 14 October 2025). The percentages I cite for each country are the sums of the three shares Pew labels “more concerned than excited,” “equally concerned and excited,” and “more excited than concerned.” (Pew’s chart notes that “those who did not answer are not shown.”) The only countries where that sum is not between 90% and 100% are India, Israel, Nigeria, and Turkey, where the percentages are still high — ranging from 74% to 84%.
[^15]: “[AI awareness around the world](https://www.pewresearch.org/global/2025/10/15/ai-awareness-around-the-world/)” (Pew Research Center, 15 October 2025).
[^16]: *[Accountable Acceleration: Gen AI Fast-Tracks Into the Enterprise](https://ai.wharton.upenn.edu/wp-content/uploads/2025/10/2025-Wharton-GBK-AI-Adoption-Report_Executive-Summary.pdf),* page 16 (Wharton, October 2025). The chart indicates 77% responded “Expert/At Least Somewhat Familiar” to the question “Which best describes your personal knowledge and familiarity with Gen AI?” which implies 23% were less than “somewhat familiar.”
[^17]: “[If Your ROI From AI is Elusive, Your C-Suite Could Be the Problem](https://www.gartner.com/en/articles/how-to-narrow-your-c-suites-ai-skills-gap)” (Gartner, 22 June 2025).
[^18]: “[Gartner Survey Reveals That CEOs Believe Their Executive Teams Lack AI Savviness](https://www.gartner.com/en/newsroom/press-releases/2025-05-06-gartner-survey-reveals-that-ceos-believe-their-executive-teams-lack-ai-savviness)” (Gartner, 6 May 2025).
[^19]: *[Technology & Telecommunications US CEO Outlook](https://kpmg.com/kpmg-us/content/dam/kpmg/pdf/2026/kpmg-technology-telecommunications.pdf)* (KPMG, October 2025).
[^20]: Traci Sitzmann, Katherine Ely, Kenneth G. Brown, Kristina N. Bauer, “[Self-Assessment of Knowledge: A Cognitive Learning or Affective Measure?](https://www.jstor.org/stable/25682447)” (*Academy of Management Learning & Education,* 2010).
[^21]: Leonid Rozenblit, Frank Keil, “[The misunderstood limits of folk science: an illusion of explanatory depth](https://www.sciencedirect.com/science/article/abs/pii/S0364021302000782)” (*Cognitive Science,* 2002).
[^22]: [Alan Kay](https://en.wikipedia.org/wiki/Alan_Kay), “Learning vs. Teaching with Educational Technologies” (*EDUCOM Bulletin,* Fall/Winter 1983, page 17). Attribution verified in “[We Cannot Predict the Future, But We Can Invent It](https://quoteinvestigator.com/2012/09/27/invent-the-future/)” (Quote Investigator, 27 September 2012).
[^23]: [Tristan Harris](https://en.wikipedia.org/wiki/Tristan_Harris) image source: [Stephen McCarthy/Collision via Sportsfile, via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Tristan_Harris_at_Collision_Conf_2018.jpg).
[^24]: [Tristan Harris](https://en.wikipedia.org/wiki/Tristan_Harris), [“Why AI is our ultimate test and greatest invitation”](https://www.ted.com/talks/tristan_harris_why_ai_is_our_ultimate_test_and_greatest_invitation), in the 11:33, 13:51, and 14:35 transcript paragraphs (TED, April 2025).
[^25]: [Dario Amodei](https://en.wikipedia.org/wiki/Dario_Amodei) image source: [TechCrunch via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Dario_Amodei_at_TechCrunch_Disrupt_2023_01_(cropped).jpg).
[^26]: See [[LLMs Don't Run on Facts or Logic#Dario Amodei]].