<p class=cs-document-type>Flagship</p>
# Industry and Society Need AI Clarity<span class=cs-invisible>:</span> <span class=cs-subtitle>The way forward from today’s divided and confused AI landscape</span>
<p class=cs-byline>David Truog</p>
<p class=cs-dateline>11 Mar 2026</p>
<p class=cs-reading-time>14–18 min read</p>
<p class=cs-dek>
Divergent views about AI’s impact are causing hesitation, discord, and anxiety. To find our way, we need more clarity about what AI does and how to apply it profitably, innovatively, and responsibly.
</p>
## We’re in a storm of division about AI
Opinions are polarized about how AI will impact economies, organizations, and individuals — ranging from enthusiasm and excitement to deep concern.
> [!research-figure]
> ![[_fig-1-fei-fei-li.jpg|Portrait of Fei-Fei Li]]
>
> Fei-Fei Li [^1]
- **Prominent voices clash.** Thanks to AI, humanity will be “able to take on new challenges \[…\], from curing all diseases to achieving interstellar travel,” Marc Andreessen optimistically predicts. But Elon Musk believes AI poses “a fundamental risk to the existence of human civilization.” Between the utopians and dystopians are the ambivalent and the moderates. Sam Altman sees the rosy scenario as “just so unbelievably good that you sound like a really crazy person to start talking about it” — but he counters that it could instead turn out to be “lights out for all of us.” Meanwhile, Fei-Fei Li is disappointed by “the hyperbole on both sides.” (For context and more influential points of view, see [[The Polarized Predictions About AI]].)
- **Public opinion is split worldwide, too.** In the US, 17% of adults say the impact of AI over the next 20 years will be “positive” whereas 35% say it will be “negative,” according to Pew Research Center.[^2] In the middle, 33% say it will be equally positive and negative, and 16% are not sure. People’s expectations about the nearer term are divided, too, though leaning more toward the negative: 50% of US adults say they’re “more concerned than excited” about the increased use of AI in daily life, and only 10% are mainly “excited.”[^3] Opinion is split in the other countries Pew surveyed as well. Germany and Japan have the largest proportions who say they’re “equally concerned and excited” (53% and 55%, respectively). Israelis and South Koreans are the most “excited” (29% and 22%, respectively), but even South Korea, which has the smallest “concerned” cohort among the 25 countries, still has 16% who are “concerned.”
### The result: hesitation, discord, anxiety — even dread
Why does this division about AI matter? Because it’s causing:
- **Uncertainty within organizations.** Decision-makers ranging from the executive ranks to functional teams debate whether their AI initiatives in progress are too timid or too ambitious.
- **Dissension about regulation.** Leaders — including technology CEOs, venture capitalists, national and local government officials, economists, and ethicists — argue over whether and how to regulate AI.
- **Doubt among investors.** Institutional and individual stockholders are concerned. Will AI-related shares keep rising or is a bubble about to burst? Will stocks in industries that seem threatened by AI keep gradually sinking, or will they recover?
- **Fear from individuals and families.** Seasoned professionals and freshly minted graduates are anxious about AI’s impact on jobs, as are university students. And parents worry about how AI will shape their children’s futures.
## The structure of the storm
This storm of division about the impact of AI has been:
- **Started and sustained** by the fact that AI is already both helping and hurting individuals and organizations.
- **Intensified** by the reality that although most organizations have AI efforts underway, few are paying off so far.
- **Supercharged** by confusion about AI — insufficient understanding, incorrect beliefs, or both.
### AI’s mixed impacts started and sustain it
People’s divergent views about our AI future aren’t empty speculation. They’re extrapolations from the observed impacts of past and current AI on the world. And those impacts truly are a mixed bag: positive, negative, and sometimes both from a single cause. Here are just a few examples spanning all three categories:
- **AI is delivering tangible benefits.** It helps oncologists diagnose patients, businesses forecast demand, and drivers avoid collisions. It helps knowledge workers summarize long documents and draft emails. It helps creative professionals brainstorm ideas, UX designers quickly create prototypes of apps and websites, and software developers write code.
- **AI is also contributing to serious harms.** It has misguided doctors’ decisions by summarizing clinical notes incorrectly. It’s caused companies to miss out on talented job candidates due to historical bias in training data. And it’s helped election manipulators mislead voters with deepfake videos. Some of the problems are avoidable through mitigation measures — but damage has already been done.
- **Some of AI’s effects are double-edged.** In some cases an impact can be seen either way — as positive or negative. AI has eliminated (or reduced) the need for people to do certain types of work. Some employers have welcomed the opportunity to get rid of a portion of their staff, as a boon to the bottom line. But most people whose jobs have been eliminated see their terminations differently.
### ROI disappointment intensifies it
Among these areas of mixed impact, the one that’s most concerning to business decision-makers has to do with the ROI of AI.
McKinsey found that 88% of the 1,993 respondents to its enterprise AI survey are now using AI in at least one business function, up from 55% just two years prior,[^4] mostly because of a surge in generative AI (genAI) adoption over that time.[^5] And 20% are using it in fully five or more business functions.[^6]
Those energetic investments in AI, however, are now facing headwinds.
> [!research-figure]
> ![[_fig-2-satya-nadella.jpg|Portrait of Satya Nadella]]
>
> Satya Nadella [^7]
- **Some believe AI will fuel economic supergrowth.** Many technology leaders, venture capitalists, and others agree that, in the words of Microsoft CEO Satya Nadella at Davos 2026, AI will “bend the productivity curve and bring local surplus and economic growth all around the world.”[^8]
- **But most enterprises are disappointed so far.** AI’s business impact to date is not encouraging overall, according to leading business consultancies and academic analyses. For example, McKinsey found that only 6% of its respondents are what it calls “high performers” — those who report that their organizations have seen “significant” value from using AI and that at least 5% of their earnings can be attributed to AI.[^9] BCG reports that among AI decision-makers at 1,250 firms it surveyed, only 5% are achieving AI value at scale, and “60% of companies are not achieving material value at all.”[^10] A Stanford study observes “cost reductions and revenue increases \[…\] but most commonly at low levels.”[^11] And an MIT study concluded that “despite $30–40 billion in enterprise investment into genAI, \[…\] 95% of organizations are getting zero return.”[^12]
![[_fig-3-mckinsey-and-mit-data.png|This image shows two charts that visually illustrate the findings from McKinsey and MIT that are also mentioned in the body text of this section.]]
### A vortex of confusion supercharges it
Nearly everyone has an *opinion* about AI: 98% in the US, and 90% to 100% in most other countries Pew surveyed.[^13] But as Daniel Kahneman concluded from decades of research, our beliefs are often based on limited — and inaccurate — information:
> You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you \[…\]. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle.[^14]
Clearly, that mismatch between opinion and knowledge is true when it comes to AI, too — for the general population and inside enterprises — since:
- **The public doesn’t know much about AI.** Among US adults asked how much they’ve heard or read about AI, fully 53% say only “a little” or “nothing at all” according to Pew.[^15] Worldwide, the median across 25 countries for the share of people who say they’ve heard or read about AI “a lot” is only 34%. And even among them, having heard or read about AI “a lot” doesn’t guarantee comprehension, since descriptions of AI are often off the mark in popular sources like mainstream media outlets and social platforms.
- **Executives and other employees lack understanding, too.** Among senior decision-makers at US-based large enterprises, 23% are not even “somewhat familiar” with genAI as of 2025, according to a Wharton study.[^16] A Gartner survey, too, suggests that the C-suite’s comfort level with AI is weak: “very low” or “low” for 39%, and “moderate” for 34%.[^17] Even among CIOs, only 44% are “AI-savvy” according to their own CEOs.[^18] You might think tech companies, in contrast, would have strong AI proficiency — but even in that industry, where KPMG fielded a survey to 230 US CEOs, fully 45% indicated their employees’ technical capability and skills are impeding AI roll-outs.[^19]
- **Even when we believe we understand, we overestimate how well.** To gauge AI literacy, enterprises commonly survey their own workforces, and consultancies survey business professionals industry-wide — but the results are not reliable indicators of understanding. They typically rely on respondents rating their expertise themselves, not on objective testing (such as even simple multiple-choice questionnaires). Research has shown time and again that people tend to overestimate their competence when self-evaluating. A widely cited meta-analysis of the research into this phenomenon concluded that “self-assessments of knowledge are only moderately related to cognitive learning and are strongly related to affective evaluation outcomes” (specifically motivation and satisfaction).[^20] This is especially true of AI because, as another study found, “people feel they understand complex phenomena with far greater precision, coherence, and depth than they really do.”[^21]
### GenAI worsens the confusion
GenAI was familiar to AI professionals by around 2010, but it was not until the 2022 release of ChatGPT that genAI entered the public spotlight, sparking amazement and even wonder — but also confusion and misguided expectations.
What’s stoking bewilderment most now is that genAI’s limitations, such as LLMs’ hallucinations and opacity, are inherent to the technology, not fixable defects — as experts like Dario Amodei, Andrej Karpathy, and Yann LeCun have pointed out. (Read their reflections in [[LLMs Don't Run on Facts or Logic]].)
AI was already hard for most people to understand — genAI made it even harder.
## How to tame the storm
We can’t *escape* the storm — but we can *tame* it.
- **The divergence is about predictions, not preferences.** Given a choice between Andreessen’s prediction that AI will lead to [[The Polarized Predictions About AI#Marc Andreessen|curing all diseases]] and Musk’s that it could mean the [[The Polarized Predictions About AI#Elon Musk|end of civilization]], everyone would favor the former, even Musk.
- **Take action instead of taking sides.** We can make the positive outcomes that we’re near unanimous about more likely by investing less in debating predictions about the future and more in taking action in the present — action that steers toward them.
### Clarity creates agency
> [!research-figure]
> ![[_fig-4-tristan-harris.jpg|Portrait of Tristan Harris]]
>
> Tristan Harris [^22]
The obvious question then is: which actions — by us as individuals and as organizations — will steer us toward those positive outcomes? Without answers to that question, we lack agency and therefore remain paralyzed in the face of uncertainty.
To decide, we need clarity. As Tristan Harris pointed out in his talk at TED2025 about the rise of AI: “Clarity creates agency.” He then elaborated:
> Your role in this is not to solve the whole problem. But your role in this is to be part of the collective immune system. […] There is no room of adults working secretly to make sure that this turns out OK. We are the adults. We have to be.[^23]
And what exactly do we need clarity about? Two domains: what AI does, and how to apply it well.
### Clarity on what AI does
Most people drive cars just fine without understanding what’s going on under the hood. Similarly, people can use AI well without knowing how it *works.* But you do need to have accurate expectations of how a technology will *behave* when you operate it — such as how a car’s braking distance depends on its speed. The same is true of AI.
So it’s no wonder that when McKinsey analyzed which practices most strongly differentiate the organizations that reported the highest returns on AI, “AI upskilling” and an “AI talent strategy” (that includes onboarding) were among them.[^24]
Clarity is especially needed about genAI, because it exhibits behaviors that are hard to grasp intuitively. For example, consider genAI confabulation, reasoning, and opacity, about which there’s widespread misunderstanding.
- **Confabulation (“hallucination”) isn’t a malfunction — it’s inherent to LLMs.** Language models are designed to produce plausible-seeming outputs as continuations of incoming prompts; they don’t retrieve or verify facts. Occasional (and unintentional) confabulation is an inevitable consequence of that probabilistic process — not a malfunction or defect in the model. That means inaccuracies in LLMs’ outputs are to be expected from time to time. A truth-checking mechanism *outside* the model could theoretically filter out inaccuracies — but never prevent the model from generating them in the first place. (The toy sketch just after this list illustrates why.)
- **GenAI can’t reason logically — it only emulates logic’s *language patterns.*** LLM outputs can *appear* to involve logic. Why? Because the massive quantities of text used to train them are authored mostly by people — and some of those people thought logically as they were writing and expressed their logic in language. As a result, LLMs can generate text that resembles humans’ often logic-grounded writing, sprinkled with “ifs,” “thens,” and “therefores,” and juxtaposing premises and conclusions that seem related, even if not always logically. But LLMs don’t perform inference based on the rules of logic. That means their logical-ish explanations can contain brittle reasoning and subtle inconsistencies, and therefore wrong conclusions.
- **Neural networks are well understood — but unfathomable at large scale.** The underlying math that makes a neural network do what it does is well documented and conceptually straightforward. But it’s typical for a neural network in an LLM to contain over a hundred layers, and then, after training, to have sprouted billions — or even trillions — of parameters, at which point its sheer scale makes it effectively opaque. In principle, yes, it’s possible to trace the computations that led to a particular output. But when billions of components interact, the resulting “trace” is far beyond what even experts can wrap their minds around. The difficulty lies not in observing the process, but in making sense of it. (The short arithmetic after the next paragraph makes this scale concrete.)
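To make the first of those points concrete, here’s a toy sketch of the loop at the heart of every LLM. Everything in it is hypothetical (a one-entry lookup table with made-up probabilities stands in for a real neural network), but the structure is faithful: each next token is *sampled* from a probability distribution, and no step in the loop consults a source of truth.

```python
import random

# Hypothetical next-token distribution standing in for a real model's output.
# A genuine LLM computes such a distribution with a neural network; here a
# hand-written table plays that role for a single made-up prompt.
NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {
        "Canberra": 0.60,   # correct, and most probable
        "Sydney": 0.35,     # plausible but wrong
        "Melbourne": 0.05,  # plausible but wrong
    },
}

def generate(prompt: str) -> str:
    """Sample one continuation token, the way an LLM's decoding loop does."""
    distribution = NEXT_TOKEN_PROBS[prompt]
    tokens = list(distribution)
    weights = list(distribution.values())
    # Sampling optimizes for plausibility, not truth: this toy "model"
    # confabulates "Sydney" or "Melbourne" 40% of the time. No fact-checking
    # step exists anywhere in the loop; a filter could only run afterward.
    return random.choices(tokens, weights=weights)[0]

print("The capital of Australia is", generate("The capital of Australia is"))
```

The sketch’s point: when truth appears in the output, it’s a byproduct of truthful patterns in the training data; plausibility is the only thing the sampling step itself optimizes for.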
There are efforts underway to address each of these challenges to some degree — for example, by designing hybrid systems that use neurosymbolic approaches and conducting research into what has become known as mechanistic interpretability. But we’re still in the very early stages of those initiatives.
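And returning to the opacity point: the scale is easy to underestimate, so here’s the back-of-envelope arithmetic. This sketch uses the standard approximation of roughly 12·d² parameters per transformer block; the dimensions are illustrative, in the ballpark of well-known large models rather than any specific model’s published spec.

```python
# Back-of-envelope parameter count for a hypothetical large transformer.
# The dimensions below are illustrative, not any specific model's spec.
layers = 96        # transformer blocks stacked in the network
d_model = 12288    # width of the hidden representation

# Per block: ~4*d^2 for the attention projections (query, key, value, output)
# plus ~8*d^2 for a feed-forward sublayer with the usual 4x expansion.
per_block = 12 * d_model**2

vocab = 50_000                 # vocabulary size
embedding = vocab * d_model    # token-embedding matrix

total = layers * per_block + embedding
print(f"~{total / 1e9:.0f} billion parameters")  # ~175 billion
```

Every one of those parameters participates, via well-understood arithmetic, in every token the model emits. That is the sense in which the network is “effectively opaque”: not that any individual multiplication is mysterious, but that no human can make sense of billions of them interacting at once.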
**For clarity on what AI does, see:**
> [[Briefs#Forthcoming briefs|AI Literacy Strategy (forthcoming)]]
> How to decide what’s essential to know about AI — and learn it
### Clarity on how to apply AI well
For some categories of technology, enterprise decision-makers can tap into decades of well-honed best practices. But with AI, outcomes are more context-dependent and capabilities evolve quickly, so iterative experimentation is especially valuable.
Some early patterns are apparent in publicly available data based on relatively transparent methods, from academic institutions such as MIT and Stanford and from firms such as BCG and McKinsey. Surveys of organizations about their AI-related practices are particularly useful for identifying which ones predict reported positive financial impact — helping distinguish companies that are getting meaningful value from those that aren’t (yet).
Although these studies don’t prove causality and the practices they identify should not be treated as recipes, they do provide useful guidance. They can help organizations prioritize hypotheses and investments to 1) increase the odds of yielding value and 2) reduce known failure modes that arise from the nature of AI technologies. They’re most useful when they account for overlapping practices (for example via relative-weights methods) and consider timing and maturity.
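For readers curious what a relative-weights method actually computes: when survey predictors (here, practices) correlate heavily with one another, ordinary regression coefficients become unstable, so relative-weights analysis instead apportions the outcome’s explained variance across the correlated predictors. Below is a rough numpy sketch in the spirit of Johnson’s 2000 relative weights approach; it’s a simplified illustration with synthetic data, not the procedure any of the cited studies actually ran.

```python
import numpy as np

def relative_weights(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Apportion explained variance across correlated predictors.

    A simplified take on Johnson's (2000) relative weights analysis.
    X: (n, p) predictor scores (e.g., AI practices); y: (n,) outcome.
    Returns p nonnegative shares that sum to 1.
    """
    # Standardize so differences in scale don't masquerade as importance.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()

    # Z is the closest matrix to Xs whose columns are orthogonal
    # (uncorrelated stand-ins for the tangled original predictors).
    U, d, Vt = np.linalg.svd(Xs, full_matrices=False)
    Z = U @ Vt
    lam = Vt.T @ np.diag(d) @ Vt   # loadings linking Z back to Xs

    # Regress the outcome on the uncorrelated columns of Z.
    beta, *_ = np.linalg.lstsq(Z, ys, rcond=None)

    # Credit predictor j with sum_k lam[j,k]^2 * beta[k]^2, then normalize.
    eps = (lam**2) @ (beta**2)
    return eps / eps.sum()

# Synthetic demo: two correlated "practices," one of which drives the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:, 1] += 0.7 * X[:, 0]                    # practice 1 correlates with 0
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500)
print(relative_weights(X, y).round(2))      # practice 0 gets the most credit
```

Because the credit is computed through uncorrelated stand-ins, a practice can’t dominate the ranking merely by overlapping with another popular practice. This is why a practice’s *importance* ranking can differ from its *prevalence* ranking, as in the McKinsey analysis cited above.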
Based on my examination of the patterns emerging from these studies, I see the most important questions enterprises must answer as clustering into five areas:
- **Adapt vision and strategy.** What future state is leadership aiming for, and which outcomes will AI drive: growth, cost, speed, quality, or new offerings? What choices are leaders making about where to focus to build differentiation that persists — and, crucially, which AI capabilities *not* to focus on because they’re likely to become commoditized?
- **Reshape tasks and processes.** Which aspects of the work a company does can benefit from AI rather than traditional automation, or in addition to it? Which tasks can be fully automated, and which require human involvement — spot checks, active supervision, or mandatory approval — with clear design of who intervenes, when, and with what authority? Which tasks are open-ended or judgment-heavy, where success is uncertain and there may be no single correct answer? What criteria should govern these choices — such as consequences of failure, reversibility, and measurability?
- **Evolve operating patterns.** How should an organization’s operating model change to build and run AI systems responsibly at scale — in terms of ownership, evaluation practices, release cadence, monitoring, and incident response? How should AI initiatives be coordinated across teams — in terms of decision rights, shared platforms, and standards — to create value without leading to uncontrolled risk?
- **Refactor technology assets.** How should companies create (or source), organize, and maintain the technical foundations for AI systems — retrieval corpora and indexes, reusable prompt-and-workflow components, guardrails, and the supporting tooling for evaluation and observability — so they can be reused safely, governed, and improved over time?
- **Develop behavioral literacy.** What do leaders and teams need to understand about how AI systems behave — what they’re good at, where they fail, and what it takes to apply them effectively? This literacy is crucial: decisions about strategy, workflow redesign, and governance tend to be sloppy in the absence of accurate mental models.
**For clarity on how to apply AI well, see:**
> [[Briefs#Forthcoming briefs|AI Application Strategy (forthcoming)]]
> How to decide what tasks to use AI for — profitably and responsibly
> [[Briefs#Forthcoming briefs|The State of Enterprise AI (forthcoming)]]
> What differentiates organizations getting ROI from AI today
## After the storm
Will we see the positive predictions about the potential impact of AI come true — such as Satya Nadella’s about [[The Polarized Predictions About AI#Satya Nadella|economic growth]], Bill Gates’ about [[The Polarized Predictions About AI#Bill Gates|revolutionizing education]], and Demis Hassabis’ about [[The Polarized Predictions About AI#Demis Hassabis|new energy sources]]? Time will tell.
In the meantime, decisions large and small — by stakeholders ranging from governments and corporations to households and individuals — will gradually shape our AI future. If those decisions are rooted in clarity about what AI does and how to apply it well, the positive outcomes everyone would prefer are much more likely to come about.
[^1]: [Fei-Fei Li](https://en.wikipedia.org/wiki/Fei-Fei_Li) image source: [ITU Pictures, via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Fei-Fei_Li_at_AI_for_Good_2017.jpg).
[^2]: “[Public and expert predictions for AI’s next 20 years](https://www.pewresearch.org/2025/04/03/public-and-expert-predictions-for-ais-next-20-years/)” (Pew Research Center, 3 April 2025). In addition to the general population, Pew surveyed a smaller cohort of “AI experts” and found their views were more positive but still divided. For Pew’s screening criteria for identifying “AI experts,” see the sidebar “[Who did we define as ‘AI experts’ and how did we identify them?](https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/#who-did-we-define-as-ai-experts-and-how-did-we-identify-them).”
[^3]: “[Concern and excitement about AI](https://www.pewresearch.org/global/2025/10/15/concern-and-excitement-about-ai/)” (Pew Research Center, 15 October 2025).
[^4]: *[The State of AI in 2025: Agents, Innovation, and Transformation](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai),* page 8, exhibit 4, chart 1 (McKinsey & Company, November 2025). Subsequent references: McKinsey 2025.
[^5]: McKinsey 2025, page 4, exhibit 1 shows that reported use of genAI rose from 33% in 2023 to 79% in 2025, while reported use of AI overall rose from 55% to 88% over the same period.
[^6]: McKinsey 2025, page 4, exhibit 1, chart 5.
[^7]: [Satya Nadella](https://en.wikipedia.org/wiki/Satya_Nadella) image source: [Brian Smale and Microsoft, via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Satya_smiling-print.jpg).
[^8]: [Satya Nadella](https://en.wikipedia.org/wiki/Satya_Nadella), quoted in “[North America at Davos 2026: Trump, Carney and a changing world](https://www.weforum.org/stories/2026/01/north-america-at-davos-2026-wef-what-to-know/)” (World Economic Forum, January 2026).
[^9]: McKinsey 2025, page 14: “Respondents who attribute EBIT impact of 5 percent or more to AI use and say their organization has seen ‘significant’ value from AI use—our definition of AI high performers, representing about 6 percent of respondents\[…\].”
[^10]: *[The Widening AI Value Gap](https://media-publications.bcg.com/The-Widening-AI-Value-Gap-Sept-2025.pdf),* page 3 (Boston Consulting Group, September 2025).
[^11]: *[Artificial Intelligence Index Report 2025](https://hai.stanford.edu/ai-index/2025-ai-index-report),* chapter 4, section 4, page 263 (Stanford Institute for Human-Centered Artificial Intelligence, April 2025).
[^12]: *[The GenAI Divide: State of AI in Business 2025](https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf),* page 3 (MIT Media Lab, July 2025).
[^13]: “[How do people around the world feel about the rise of AI in daily life?](https://www.pewresearch.org/global/2025/10/15/concern-and-excitement-about-ai/pg_2025-10-15_ai_2_01/)” (Pew Research Center, 14 October 2025). The percentages I cite for each country are the sums of the three shares Pew labels “more concerned than excited,” “equally concerned and excited,” and “more excited than concerned.” (Pew’s chart notes that “those who did not answer are not shown.”) The only countries where that sum is not between 90% and 100% are India, Israel, Nigeria, and Turkey, where the percentages are still high — ranging from 74% to 84%.
[^14]: [Daniel Kahneman](https://en.wikipedia.org/wiki/Daniel_Kahneman), *[Thinking, Fast and Slow](https://search.worldcat.org/title/706020998),* chapter 19, “The Illusion of Understanding” (Farrar, Straus and Giroux, 2011).
[^15]: “[AI awareness around the world](https://www.pewresearch.org/global/2025/10/15/ai-awareness-around-the-world/)” (Pew Research Center, 15 October 2025).
[^16]: *[Accountable Acceleration: Gen AI Fast-Tracks Into the Enterprise](https://ai.wharton.upenn.edu/wp-content/uploads/2025/10/2025-Wharton-GBK-AI-Adoption-Report_Executive-Summary.pdf),* page 16 (Wharton, October 2025). The chart indicates 77% responded “Expert/At Least Somewhat Familiar” to the question “Which best describes your personal knowledge and familiarity with Gen AI?” which implies 23% were less than “somewhat familiar.”
[^17]: “[If Your ROI From AI is Elusive, Your C-Suite Could Be the Problem](https://www.gartner.com/en/articles/how-to-narrow-your-c-suites-ai-skills-gap)” (Gartner, 22 June 2025).
[^18]: “[Gartner Survey Reveals That CEOs Believe Their Executive Teams Lack AI Savviness](https://www.gartner.com/en/newsroom/press-releases/2025-05-06-gartner-survey-reveals-that-ceos-believe-their-executive-teams-lack-ai-savviness)” (Gartner, 6 May 2025).
[^19]: *[Technology & Telecommunications US CEO Outlook](https://kpmg.com/kpmg-us/content/dam/kpmg/pdf/2026/kpmg-technology-telecommunications.pdf)* (KPMG, October 2025).
[^20]: Traci Sitzmann, Katherine Ely, Kenneth G. Brown, Kristina N. Bauer, “[Self-Assessment of Knowledge: A Cognitive Learning or Affective Measure?](https://www.jstor.org/stable/25682447)” (*Academy of Management Learning & Education,* 2010).
[^21]: Leonid Rozenblit, Frank Keil, “[The misunderstood limits of folk science: an illusion of explanatory depth](https://www.sciencedirect.com/science/article/abs/pii/S0364021302000782)” (*Cognitive Science,* 2002).
[^22]: [Tristan Harris](https://en.wikipedia.org/wiki/Tristan_Harris) image source: [Stephen McCarthy/Collision via Sportsfile, via Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Tristan_Harris_at_Collision_Conf_2018.jpg).
[^23]: [Tristan Harris](https://en.wikipedia.org/wiki/Tristan_Harris), [“Why AI is our ultimate test and greatest invitation”](https://www.ted.com/talks/tristan_harris_why_ai_is_our_ultimate_test_and_greatest_invitation) (TED, April 2025).
[^24]: McKinsey 2025, page 20, exhibit 14 shows two panels containing ranked lists of practices: “highest prevalence” and “relative importance.” Although both panels show each practice’s prevalence percentages, the second panel is especially relevant since it ranks practices based on how differentiating each practice is rather than simply how common it is. (To rank the practices, McKinsey performed a relative weights analysis — see footnote 1 in the exhibit.)