Last year, a workforce summit hosted thousands of leaders.
I joined a Stanford professor and a university provost on a panel. The topic: AI’s impact on jobs. The keynote had already delivered the verdict. Artificial intelligence will surpass human intelligence sooner than we think. Most knowledge work will be automated. The labor market won’t survive.
The main stage ran on doom. The product pavilion ran on promise. One founder showed me an AI system that would close the achievement gap on its own. Learning modules tailored to every student, delivered by AI, no teacher required.
Both sides were selling the same thing. Dystopia tells you not to invest in people because the jobs won’t exist. Utopia tells you not to lead because the machine will handle it. Both describe what the technology will do to you.
Neither asks what you will do with it.
That same week, we launched Andus Labs. It opened with a video: Hope is a funny thing. It featured Thomas Wagner, a marketing manager at YouTube. In 2023, his seven-month-old son Max had a seizure, then another. Three weeks of tests ended with a diagnosis: Alexander disease, a rare, fatal neurodegenerative disorder with no cure and a life expectancy of five to ten years.
Wagner had no medical credentials or access to research labs. But he had something that could close the distance on both: the same AI the rest of us have. He opened Gemini and started asking questions. It returned research papers and medical terminology he had no training to read. He kept pushing. Each exchange sharpened the next question. He carried what he learned to the researchers who could act on it.
His emails were so thorough that Pranam Chatterjee, a biomedical engineer at Duke, thought he was talking to a colleague. Wagner called it “connecting things that are just lying around unconnected.”
The protein that causes Alexander disease, GFAP, is also pathologically elevated in Alzheimer’s. Researchers in rare pediatric neurology and in Alzheimer’s had no reason to talk to each other. Wagner, learning both fields through Gemini, made a connection neither field had made alone. Ten labs are now investigating Alexander disease. A year earlier, none.
A month after the Andus Labs launch, we brought people from around the world together for an event called “AfterNow.” It took on a question most AI summits overlook: what do humans do with AI now?
The program included Brian Eno, the musician and artist; Nicholas Thompson of The Atlantic; MIT neuroscientist Nataliya Kosmyna; and Esther Dyson, one of the earliest investors in the modern internet. Julia Dixon closed it. A year earlier, she wouldn’t have been in the room.
Dixon was a cultural strategist at my previous agency. On weekends, she helped kids write college essays. Half a dozen clients. She saw the same problem every cycle: a student with something worth saying, flattening it into safe admissions language. She wasn’t a coder and had never raised capital or built a product.
She asked ChatGPT if it could help her build something better. First about how to build a product, then about fundraising, then about decisions no one had prepared her to make.
The side hustle became ESAI, a platform serving half a million students with a larger mission: help Gen Z tell their stories without the admissions filter. She pitched it on Shark Tank, a cultural strategist who had learned to code from a chatbot, pitching Mark Cuban. He invested. She applied to one of the most competitive AI camps for new entrepreneurs. Betaworks selected her. After a day of ideas and theory, Dixon closed with a story about doing. It hit hardest.
That’s the Third Option: make something meaningful.
The boundaries that feel the most fixed are the ones in our heads. Wagner has no choice. Dixon has a problem she can’t let go of. Many of us have neither crisis nor obsession to act on. We have a role that works, a career built on what the market used to protect, and a growing suspicion that the protection is ending. That suspicion is enough of a starting point.
Marshall McLuhan studied what technology does to people. He described the blockage to act decades ago: a “deep-seated repugnance” against understanding the forces reshaping our world, because understanding demands “far too much responsibility for our actions.” Doom and utopia are appeals to that repugnance. Both say the outcome is decided. Both say responsibility is not yours to take.
The Third Option requires you to act, which is why it’s the hardest of the three.
There is a version of doing fine that doesn’t look like a problem. It looks like a mortgage, two kids approaching college, spending up to your means. You built the life you were told to build. You landed roles that feel inevitable because they’ve always been there.
Every few weeks a headline lands about your industry and you wonder how close it is. Then you close the tab and get back to work. The forecast from that keynote will land harder than anyone in the room wants to hear.
I’ve sat in that chair. Doom and utopia both feel like the thinking is done. Both are positions you can hold without moving.
Your way out won’t arrive as a diagnosis or an obsession. It starts when you use AI to help you take on something important, the question you’ve been carrying because asking it out loud would mean admitting you don’t know. The harder part comes after: understanding whether the answer is right, and if so, how to act on it.
Max is three now. “He’s a happy little man with a big personality,” Wagner said. “Wherever we go, he needs to meet everyone.” His disease is progressing. Ten labs are running.
Wagner continues to move. Will you?