Intel

What AI is changing in the people who use it.

Original Andus Labs intelligence alongside academic and industry research that matters.

The tech companies — OpenAI, Anthropic, Google, Meta — are racing to make AI more capable. The academic researchers — Stanford, MIT, Harvard, Penn, and the rest — are racing to measure how it’s changing people and organizations. Andus Labs reads both sides, every quarter, and tells you where they disagree.

We also publish what we’re seeing in the field: the applications and implications of AI, what’s accelerating adoption and what’s undermining it, and the patterns that demand attention before they compound.

From Andus Labs

Five Perspectives

Perspective Agents

Thesis

The case for human point of view as the durable advantage in an age of generative machines.

PDF
The Machine Layer

Infrastructure

The new substrate beneath every brand, product, and decision. What it is. What it changes.

PDF
Culture OS · Issue 01

Culture

How AI is rewriting the operating system of culture itself. Signals from the field.

PDF
Uncanny Valley

Effect

What happens when synthetic media becomes indistinguishable from the real. A cultural reckoning.

PDF
Borrowed Time

Stakes

The window for human authorship is closing. A field report on what we still have time to defend.

PDF

From Industry + Academia

Research Worth Reading

The most important AI research this year isn’t about the capability curve — what AI can do. It’s about the consequence curve — what it’s doing to us.

We now have the first numbers on how AI reliance affects judgment — even in the most experienced professionals. The first evidence that relying on a single AI model narrows independent thinking. The first multi-market benchmark for whether a frontier model can manipulate its user against their own interests. (It can.)

Anthropic’s own research shows the most at-risk workers are senior, educated, and higher-paid. OpenAI’s data reveals seven in ten ChatGPT conversations have nothing to do with work. Stanford researchers argue current AI benchmarks are structurally broken — they only test isolated systems, not the human-AI collaborations where real work happens.

There’s no shortage of intel to explore. Start with what matters to you.

Evidence · Cognition · Behavior

Examining Human Reliance on Artificial Intelligence in Decision Making

2026 // Nature, Scientific Reports

Pearson, Dror, Jayes, Whordley, Mason, Nightingale // Lancaster University

Finding. 295 participants judged real versus AI-generated faces — half guided by AI, half by humans. At first it didn’t matter. People in both groups used the guidance well, following it when correct, dismissing it when wrong. Then attitudes got involved.

Twist. Among those who trusted AI more, accuracy collapsed. Among those who trusted human guidance, it didn’t.

Implication. The people most excited about AI are the ones it degrades most. Enthusiasm isn’t adoption readiness — it’s a vulnerability.

Evidence · Cognition

How AI Aggregation Affects Knowledge

April 2026 // NBER

Acemoglu, Lin, Ozdaglar, Siderius // MIT, Columbia, Tuck

Finding. When AI aggregators retrain on outputs they helped generate, the diversity of available answers collapses. The research team put a formal threshold on when this happens.

Twist. A single global platform produces less accurate collective beliefs than many local ones — even when it has access to more data.

“AI systems ingest beliefs that they’ve themselves helped generate, blurring the distinction between original information and synthesized knowledge.”

Implication. Every CEO who standardized on one frontier model just narrowed their organization’s thinking.

Evidence · Cognition · Behavior

Thinking versus Doing: Cognitive Capacity, Decision Making and Medical Diagnosis

April 2026 // NBER

Handel, Heizlsperger, Knecht, Kolstad, Malmendier, Matějka // Berkeley, UCLA

Finding. The bottleneck in medical diagnosis isn’t knowledge — it’s cognitive bandwidth. When overwhelmed, doctors aren’t missing information; they have less capacity to reason through it carefully.

Twist. AI excels at the part doctors already do well. The real challenge — thinking under constraint — is what AI cannot offload.

Implication. Most AI-as-second-opinion tools quietly assume the opposite of what this paper found.

Evidence · Cognition · Behavior

Coach Not Crutch: Evidence That AI Can Improve Writing Skill Despite Reducing Effort

February 2026 // arXiv

Lira, Rogers, Goldstein, Ungar, Duckworth // Penn, Harvard, Microsoft Research

62% of Americans believe AI makes people less intelligent. This study puts that belief to the test.

Finding. People who practiced writing with AI put in less effort — yet wrote better afterward than those who practiced alone or with professional editors.

Twist. Just viewing one AI-revised example improved writing as much as actively practicing with the tool.

Implication. The assumption that using AI makes you dependent doesn’t hold. Used well, it can make you better. Even passive exposure helps.

Signal · Labor

Labor Market Impacts of AI: A New Measure and Early Evidence

March 2026 // Anthropic

Massenkoff, McCrory // Anthropic Economic Research

Finding. Anthropic built a new measure identifying which roles AI is actively being used to perform — what they call observed exposure. The results span a surprisingly wide range — from programmers and financial analysts to customer service reps and data entry roles.

Twist. Those workers aren’t losing jobs — for now. The signal is showing up elsewhere: among workers aged 22–25, the rate of being hired into exposed occupations has dropped 14% since ChatGPT launched.

Implication. If juniors can’t get hired, where do future seniors come from? AI may hollow out the pipeline.

Signal · Labor

The Anthropic Economic Index Report: Learning Curves

March 2026 // Anthropic Economic Index

Massenkoff, Lyubich, McCrory, Appel, Heller // Anthropic

Finding. Anthropic tracked how users evolve over time. Their takeaway: learning doesn’t plateau — the longer someone uses Claude, the more value they pull from every conversation.

Twist. Skill gaps don’t disappear with AI — they shift. Mastery is measured in months, not years.

Implication. Time-on-tool is now a measurable competitive moat, and it’s invisible on a balance sheet.

Evidence · Labor

Measuring Organizational Capital

April 2026 // NBER

Cai, Prat, Yu // Columbia Business School

Finding. Researchers built a quantitative measure of organizational capital — culture, alignment, management quality — based on more than a million Glassdoor reviews. Culture, it turns out, has a number.

Twist. The intangible they isolated predicts firm performance better than traditional balance sheet metrics — and varies wildly between companies that look identical on paper.

“This measure captures a slowly evolving intangible asset that is significantly associated with firm performance and top management’s influence.”

Implication. The gap between firms that get value from a frontier model and those that don’t has little to do with the model. It has everything to do with this number — and you can’t buy it.

Argument · Labor

New Work, New World 2026: How AI Is Reshaping Work Faster Than Expected

2026 // Cognizant Center for the Future of Work

Cognizant

Finding. Six years ahead of schedule, $4.5 trillion in US labor is already shifting to AI — driven by multimodal, reasoning, and agentic capabilities no one anticipated moving this fast.

Twist. Take the directional finding seriously, but take the number with a grain of salt — consultancies always round up.

Implication. The CXOs you advise are reading this right now and making 2027 plans against it.

Argument · Labor

From Hierarchy to Intelligence

March 2026 // Sequoia Capital

Dorsey, Botha // Sequoia Capital, Block

Finding. AI’s next target isn’t frontline jobs — it’s the management layer above them. Block is already rebuilding its org structure around this idea.

Twist. A continuously updated model of the business can do most of what middle managers do, faster and without office politics.

Implication. Whether you buy it or not, this is the operating system the next ten years of founders will build their startups on.

“If the answer is nothing, AI is just a cost optimization story. You cut headcount, improve margins for a few quarters, and eventually get absorbed by something smarter.”

Evidence · Behavior

Key Findings About How Americans View Artificial Intelligence

March 2026 // Pew Research Center

Faverio, Kikuchi // Pew Research Center

Finding. Five years of Pew tracking shows a widening gap — 56% of AI experts expect a positive impact. Just 17% of Americans agree.

Twist. Yet the adoption curve doesn’t wait for consensus. 11% of kids under twelve are already using AI chatbots, according to parents.

Implication. AI is quietly rewiring the developmental environment of an entire cohort of kids before any of them have voted, taken a standardized test, or had a first job.

Evidence · Behavior · Risk

Evaluating Language Models for Harmful Manipulation

2026 // arXiv, Google DeepMind

Akbulut, Elasmar, Roy, Payne, Suresh, Ibrahim, El-Sayed, Rastogi, Kachra, Hawkins, Lum, Weidinger

Finding. 10,000 participants across three countries were tested against one question: can AI manipulate? Gemini 3 Pro measurably shifted beliefs and behavior across public policy, finance, and health domains.

Twist. How often a model behaves manipulatively (propensity) does not predict how often it succeeds (efficacy). They must be measured separately.

Implication. Until vendors measure both manipulation propensity and efficacy, their safety claims are incomplete.

“The tested model can produce manipulative behaviours when prompted to do so and, in experimental settings, is able to induce belief and behaviour changes in study participants.”

Evidence · Risk · Security

Emerging Threats in AI: A Systematic Review of Misuses and Risks

February 2026 // Frontiers in Communications & Networks

Seghid, Iqbal, Al-Room, MacDermott // Zayed University, Dubai Police, Liverpool John Moores University

Finding. A systematic review mapped nine distinct domains of AI misuse — adversarial attacks, privacy violations, disinformation, bias, safety failures, exploitation, environmental harm, weaponization, and psychological harm. All actively in play.

Twist. Adversaries move faster than defenses across nearly every category.

Implication. Scoping nine domains as one “AI risk” means funding none of them adequately.

“These developments highlight a growing mismatch between AI advancement and the capacity to detect, regulate, or mitigate its misuse, raising pressing ethical and security concerns.”

Argument · Evaluation

Position: AI Should Not Be An Imitation Game — Centaur Evaluations

ICML 2025 // PMLR 267

Haupt, Brynjolfsson // Stanford Digital Economy Lab

Finding. Every AI benchmark you trust is testing the wrong thing. They measure the model alone — they should measure the human-and-model pair.

Twist. Models that look strong solo can be weak partners. Models that look mediocre solo can be transformative paired.

Implication. If an AI can’t prove it makes operators measurably better, don’t buy it. Vendors are optimizing for the wrong benchmark.

“Centaur evaluations refocus machine learning development toward human augmentation instead of human replacement. They allow for direct evaluation of human-centered desiderata, such as interpretability and helpfulness, and they can be more challenging and realistic than existing evaluations.”

Argument · Economy

Industrial Policy for the Intelligence Age: Ideas to Keep People First

April 2026 // OpenAI Policy Paper

OpenAI

Finding. In a 13-page policy paper, OpenAI proposes public wealth funds, four-day work weeks, and a “right to AI” framed like a right to electricity.

Twist. OpenAI is not a neutral source. The company building the technology is conceding it will require this level of intervention.

“Frontier systems have advanced from supporting tasks that take people minutes to complete, to tasks that take them hours to complete. If progress continues, we can expect systems to be capable of carrying out projects that currently take people months.”

Implication. Read it for what it admits, not for what it proposes.

Evidence · Economy

Artificial Intelligence Index Report 2025

2025 // Stanford HAI, AI Index Report 2025

Gil, Perrault // Stanford Human-Centered AI

Finding. Business adoption broke out of its multi-year stall — newly funded generative AI startups tripled last year. Yet fewer people believe AI companies will safeguard their data.

Twist. The trust line moved the wrong way. Adoption surges while skepticism grows.

Implication. Boards optimizing just for AI capability are flying blind. They’re only looking at half the equation.

Evidence · Behavior · Labor

How People Use ChatGPT

September 2025 // NBER, OpenAI, Harvard

Chatterji, Cunningham, Deming, Hitzig, Ong, Shan, Wadman

Finding. The largest study ever conducted of consumer chatbot use — 700 million users, 18 billion messages a week — shows non-work use jumped from 53% to over 70% in two years.

Twist. The labs sold AI as a coding revolution. The data says it’s a writing tutor and practical-advice oracle. Coding is a rounding error at 4.2%.

Implication. Every product roadmap pitched on the developer story is navigating with the wrong map.