# Research Briefs
---
## Overview
| | |
| ----------------------------------------------------------------------------------- | ------------------------------------------------------ |
| **[[#Flagship brief my perspective on the AI landscape\|Flagship]]** | **My perspective on the AI landscape** |
| **[[#Analysis briefs evidence dissected and interpreted\|Analysis]]** | **Evidence dissected and interpreted** |
| **[[#Clarity briefs key technologies, concepts, and research explained\|Clarity]]** | **Key technologies, concepts, and research explained** |
| **[[#Spark briefs food for thought, some of it surprising\|Sparks]]** | **Food for thought, some of it surprising** |
| **[[#Conjecture briefs grounded speculations\|Conjectures]]** | **Grounded speculations** |
| **[[#Source briefs external material other briefs refer to\|Sources]]** | **External material other briefs refer to** |
### Forthcoming briefs
To hear about new research when it goes live, see [[Subscribe|Research > Subscribe]].
---
## Flagship brief: my perspective on the AI landscape
> **[[Industry and Society Need AI Clarity]]**
> **The opportunity in today’s confused and divided AI landscape**
---
## Analysis briefs: evidence dissected and interpreted
> [[Briefs#Forthcoming briefs|Coding Agent Adoption (forthcoming)]]
> Quantifying the rapid spread of agentic coding
> [[Briefs#Forthcoming briefs|Coding Agent ROI (forthcoming)]]
> The benefits of coding agents are uneven — so far
> [[Briefs#Forthcoming briefs|Coding and Agents Dovetail (forthcoming)]]
> The nature of software development makes it a strong fit for agentic AI
> [[Briefs#Forthcoming briefs|Agentic Coding’s Impact on Agentic AI (forthcoming)]]
> Enterprises will reshape other kinds of work to more closely resemble coding
> [[Briefs#Forthcoming briefs|The Future of AI MechInterp (forthcoming)]]
> How mechanistic interpretability’s early successes will play out
---
## Clarity briefs: key technologies, concepts, and research explained
> **[[Why Care About Clarity on AI]]**
> **Better understanding of AI is key to steering it toward positive outcomes**
> [[Briefs#Forthcoming briefs|Clarity on AI Assistants (forthcoming)]]
> Understand how ChatGPT, Claude, and others do what they do
> [[Briefs#Forthcoming briefs|Clarity on AI Model Training (forthcoming)]]
> Understand what, how, and when AI models learn
> [[Briefs#Forthcoming briefs|Clarity on AI Interpretability (forthcoming)]]
> Peering into how AI models seem to think — and why it matters
> [[Briefs#Forthcoming briefs|Clarity on AI Safety (forthcoming)]]
> Preventing AI systems from causing harm, by accident or by misuse
> [[Briefs#Forthcoming briefs|Clarity on AI Alignment (forthcoming)]]
> Ensuring AI acts in accordance with human goals and values
> [[Briefs#Forthcoming briefs|Clarity on AI Autonomy (forthcoming)]]
> When to let AI systems act autonomously — and for how long
> [[Briefs#Forthcoming briefs|Clarity on AI Reasoning (forthcoming)]]
> The various ways AI systems seem to reason — and when to trust them
> [[Briefs#Forthcoming briefs|Clarity on AI Epistemology (forthcoming)]]
> What AI systems seem to believe and why — and when to trust them
> [[Briefs#Forthcoming briefs|Clarity on AI Functionalism (forthcoming)]]
> AI consciousness, why it mostly doesn’t matter — and when it might
> [[Briefs#Forthcoming briefs|Clarity on AI Model Welfare (forthcoming)]]
> Considering whether AI systems experience anything — and the implications
---
## Spark briefs: food for thought, some of it surprising
> **[[From Shakespeare to AI]]**
> **How LLMs’ inner workings are rooted in comparative literature**
> [[Briefs#Forthcoming briefs|Transparent Black Boxes (forthcoming)]]
> Neural networks are well understood — just unfathomable at large scale
> [[Briefs#Forthcoming briefs|The Autocomplete Fallacy (forthcoming)]]
> Why predicting what comes next is more powerful than it seems it should be
> [[Briefs#Forthcoming briefs|We Are ChatGPT (forthcoming)]]
> Who you’re talking with when you talk with an LLM-powered AI assistant
> [[Briefs#Forthcoming briefs|Hallucination Is Forever (forthcoming)]]
> Inaccurate statements from language models can be contained but never prevented
> [[Briefs#Forthcoming briefs|AI Isn’t Biased (forthcoming)]]
> Bias is caused by choices of training data — it’s not in AI algorithms
> [[Briefs#Forthcoming briefs|When Your Mind Autocompletes (forthcoming)]]
> Essentials of large language models explained with a thought experiment
---
## Conjecture briefs: grounded speculations
> [[Briefs#Forthcoming briefs|The Fate of Work (forthcoming)]]
> What people will do with their time if AI obviates the need for human work
> [[Briefs#Forthcoming briefs|Animal AI (forthcoming)]]
> When a machine emulates the cognition of an octopus
> [[Briefs#Forthcoming briefs|When AI-Free Means Luxury (forthcoming)]]
> Some “human-made” work will become more valuable once AI is everywhere
---
## Source briefs: external material other briefs refer to
> **[[The Polarized Predictions About AI]]**
> **Influential viewpoints range from utopian to dystopian**
> **[[LLMs Don't Run on Facts or Logic]]**
> **Leading AI researchers explain the limitations of large language models**
> [[Briefs#Forthcoming briefs|The Many Definitions of AI (forthcoming)]]
> There’s widespread disagreement about what “AI” means
> [[Briefs#Forthcoming briefs|The Many Definitions of AI Agents (forthcoming)]]
> There’s widespread disagreement about what “AI agent” means
---
## Suggestions?
If you’d like to suggest an AI-related question, topic, or tension you think deserves a research brief, [[Contact|reach out]].
## About my research
Learn more about:
- My research [[Methods|methods]]
- The [[Scope|scope]] of my research
- [[Research/Sharing|Sharing]] (or otherwise reusing) my research
- How I use — and don’t use — [[Tools|AI as a tool]] in my research process
---