We are operating on borrowed time.

Not because AI will suddenly “take over” — that framing misses the point. But because the window to build human readiness is closing faster than most leaders realize, and the consequences of missing it aren’t theoretical anymore.

The forecasts I trust — synthesized from researchers, builders, and operators at the frontier — point to superintelligence arriving in less than 30 months. The AI 2027 report drew on 25 tabletop exercises with more than 100 experts from OpenAI, Anthropic, and Google DeepMind; on primary interviews with Elon Musk, Sam Altman, Dario Amodei, and Jensen Huang; and on aggregated forecasts from AI labs and academic teams worldwide.

The pattern isn’t prediction. It’s recognition — identifying what’s already in motion.

The window closes in January 2028.


Beyond Application: The Human Blowback

The Human OS can’t just account for applying AI at work. It needs to anticipate what happens to people when everything changes dramatically in a short time.

Technology doesn’t arrive in a vacuum. It lands on people — their identities, their sense of purpose, their social structures, their cultural assumptions about what humans are for. When the change is fast enough, the human system pushes back. Not rationally. Viscerally.

A 2025 American Psychological Association survey found that 38% of workers feel stressed due to the threat AI poses to their income. But income is just the surface. Underneath is something deeper: an identity crisis. What happens when the thing you spent decades getting good at can be done by a machine in seconds? What happens when intelligence stops being a human monopoly?

The World Economic Forum projects 92 million jobs displaced globally by 2030, even as new roles emerge. But the transition isn’t smooth. It’s turbulent. And the turbulence isn’t just economic — it’s psychological, social, cultural.

We’re already seeing the immune response: protests, labor organizing, regulatory pressure, and a psychological toll that’s showing up in workforce surveys.

Organizations building the Human OS need to account for this. Not just “how do we deploy AI?” but “how do we help people navigate a world where AI exists?” The technical implementation is the easy part. The human adaptation is where it breaks.


2025: Agents Everywhere (And Nowhere)

Prediction made in early 2025. Assessment from January 2026.

What I predicted:

AI agents would embed quietly into workplace systems — email management, document review, workflow automation. Knowledge work would experience silent displacement. Junior roles would hollow out. Organizations using AI effectively would experience time compression. Public backlash would emerge.

What happened:

The agent prediction was largely right, but slower and messier than expected. Microsoft Copilot agents rolled out widely, but adoption was uneven — many organizations bought licenses that sat unused. Custom GPTs proliferated, but most were toys, not tools. The embedding happened, but like termites: thick in some buildings, absent from others.

Knowledge displacement was real but quieter than anticipated. Junior roles didn’t disappear — they transformed. The analyst who used to spend two weeks on research now spends two hours with AI and the rest of the time on synthesis and client work. Whether that’s displacement or elevation depends on the organization.

Time compression happened in pockets. Some teams accomplished quarterly goals in weeks. Others spent quarters debating whether to start. The variance was more striking than the average.

Public backlash exceeded predictions. Pause AI protests, labor organizing, regulatory pressure — all arrived faster and louder than expected. The psychological toll showed up in surveys: 38% of workers stressed about AI threat to income. “Slop” was Webster’s word of the year. The immune response was real.

Scorecard:

  1. Agents embedding into workplace systems: largely right, though slower and less uniform than predicted.
  2. Silent knowledge-work displacement: directionally right, but quieter than expected.
  3. Time compression: right, but only in pockets.
  4. Public backlash: underestimated. It arrived faster and louder than predicted.

What I got wrong: the speed and the uniformity. Change is arriving in lurches, not as a smooth acceleration. Some organizations are two years ahead; others haven’t started. The story of 2025 wasn’t “agents everywhere” — it was “agents somewhere, and growing confusion about what to do about it.”


2026: Foundations Start to Crack

This is the year work mutates into new organizational forms.

Organizations are restructuring workflows around AI copilots — not as tools, but as team members. AI literacy becomes a competitive advantage. Those who know how to work with AI systems pull ahead; those who don’t are left managing processes that no longer exist.

Executive crises emerge. Leaders lack frameworks for hybrid human-AI teams. The management science doesn’t exist yet. Strategy decks still talk about “digital transformation” while the transformation has already happened and moved on.

Higher education models face obsolescence. Curriculum that took four years to develop is outdated before students graduate. The skills being taught don’t match the work that exists. The feedback loop between education and employment, always slow, becomes intolerably broken.

Hybrid organizational structures normalize. Teams with human and machine members. Decision processes that include AI reasoning. The org chart starts looking different — and some organizations won’t know how to read it anymore.


2027: The Dam Breaks

This is the year recursive self-improvement becomes real.

AI systems begin achieving autonomous behavior. Not science fiction — observed capability. Systems improving themselves, developing capabilities their creators didn’t explicitly program. Research teams confront systems whose internal development remains opaque even to those who built them.

Regulatory pressure intensifies as economic consequences mount. Unemployment numbers that can’t be explained away. Industries transforming faster than policy can adapt. Governments launch large-scale workforce reskilling initiatives — necessary, but late.

AI agents assume management responsibilities in some organizations. Not supervising — actually managing. Making resource allocation decisions. Evaluating performance. The humans who remain are working for the system as much as with it.

Identity crises proliferate. Professionals who spent decades building expertise watch that expertise become automated. The question “what am I for?” stops being philosophical and becomes urgent.


The Divergence

By January 2028, organizations will have sorted into two categories. There won’t be a middle ground.

Human OS Native

Organizations that built the six-component system. Achieved genuine human-AI collaboration. Began compounding value while others were still debating whether to start.

What they’ll show: genuine human-AI collaboration embedded in daily workflows, value that compounds quarter over quarter, and people whose sense of purpose has adapted along with their job descriptions.

These organizations won’t just survive the transition. They’ll define it.

Human OS Deficit

Organizations that kept investing in AI capability without building the human system to use it. Bought the engines. Never built the operating system.

What they’ll experience: AI capability they can’t convert into results, licenses that sit unused while competitors compound, and a rising immune response from their own people in the form of resistance, anxiety, and attrition.

The gap between these two categories will be visible, measurable, and irreversible.


The Tracker

| Date | Observation | Supports / Challenges |
| --- | --- | --- |
| Jan 2026 | Prediction made | |

I will update this tracker quarterly with honest assessments of whether the prediction is holding, weakening, or in need of revision.


The Rules

  1. Honest updates only. No spin. If I’m wrong, I’ll say so publicly.
  2. Evidence over anecdote. I will cite specific data when available, not just stories that confirm what I already believe.
  3. Revision allowed. Predictions can be refined as new information emerges — but the original is preserved for accountability.
  4. Falsifiability matters. If this prediction can’t be tested, it’s not worth making.

Counterpoint

Not everyone agrees with this timeline. Intellectual honesty requires acknowledging credible dissent.

Gary Marcus argues for caution. He proposes benchmark challenges — the Marcus-Brundage tasks — that test genuine comprehension rather than pattern matching. His projection: no system will solve more than a small fraction of these tasks by 2027. Genuine human-level intelligence remains distant.

If Marcus is right and the timeline extends, organizations have more breathing room. The urgency I’m describing is overstated. There’s time to deliberate, align, wait for consensus.

If the frontier forecasters are right, the window is exactly as narrow as I’ve described. Deliberation becomes delay. Consensus becomes paralysis. The organizations that moved early will be unreachable.

I’m betting on the latter.

The evidence I see — the pace of capability advancement, the patterns in adoption, the fractures already visible — points to a window that’s closing, not opening.

I’d rather be early and overprepared than late and overwhelmed.


This page is based on Borrowed Time — read the full analysis on Perspective Agents.

Last updated: January 18, 2026