The AI Divide Is the Agency Divide
The choice is yours. Will you become obsolete, or will you take control of your destiny?
Two marketers work at the same company. Same title, same salary. Both have access to the same AI tools and near-infinite tokens. Company policy encourages experimentation. Both attend an all-hands meeting where leadership encourages them to use AI in their daily work.
One of them, Sarah, starts that afternoon. Within weeks, she's generating campaign variations at 4x her previous pace. She builds a custom workflow that turns client briefs into first drafts. She doesn't ask permission. She doesn't wait for best practices. She just does things.
The other, Mike, attends a training session but doesn't change his workflow. He's waiting for clearer guidelines. He believes his work is high quality and knows AI will make more mistakes than he will. He figures he'll adopt AI "when it's more mature." He's not opposed to the technology. He just doesn't feel equipped to integrate it himself.
Six months later, Sarah gets promoted. Mike gets fired.
This scenario is playing out across every industry right now. The gap between Sarah and Mike isn't about technical skill, intelligence, or even AI expertise. It's about something more fundamental.
The AI divide is actually an agency divide.
The Conventional Narratives Are Wrong
You've heard two stories about AI and jobs. Both are wrong.
Story One: "AI will take all the jobs."
This is the doom narrative. Automation is coming for everyone. Mass unemployment is inevitable. We're all going to be replaced by robots.
The data doesn't support this. The Yale Budget Lab studied employment patterns since ChatGPT's release and found that "AI has so far not led to widespread job loss." The Federal Reserve's Jerome Powell summarized it plainly: "It's not a big part of the story yet." AI-attributed layoffs in 2025 totaled about 55,000. Significant, but 4.5% of total layoffs. Not the apocalypse.
Job creation projections show net gains: 170 million new jobs by 2030 versus 92 million displaced. The picture isn't collapse. It's churn.
Now, I admit that these job creation projections are optimistic. I expect layoffs to accelerate as AI tools improve. But I also expect larger enterprises to lag on adoption.
Story Two: "We need regulation to protect workers."
This is the policy response. If we elect the right people and pass the right laws, we can insulate workers from technological displacement.
History suggests otherwise. Manufacturing unions watched 76% of their membership disappear despite strong protections. Steel tariffs ran for nearly two decades through 1986, and employment still dropped 80%. Taxi medallion regulations created a regulatory monopoly that collapsed 94% in value when Uber sidestepped the rules. The EU AI Act was obsolete before implementation: proposed in 2021, still not fully in force, while the technology has gone through four generations.
You cannot regulate your way to job preservation when the job itself becomes economically obsolete.
So if universal doom is wrong, and regulatory salvation is wrong, what's actually happening?
What Agency Actually Means
Here's the pattern conventional narratives miss: AI isn't creating a new divide. It's amplifying an existing one.
Jobs with high automation risk share a common trait: routine, repetitive tasks. Jobs with low automation risk share another: navigating ambiguity, synthesizing information, solving problems that don't have clear answers. Procurement clerks face 95% automation risk. People who figure out which problems to solve? Resilient.
One analyst framed it well: "The theme is not 'creative jobs' are declining. It's creative execution jobs are declining while creative problem-solving jobs are doing OK."
The difference isn't creativity itself. It's who follows instructions versus who figures out what to do when there are no instructions.
That's agency. Not technical skill. Not intelligence. Agency is a cluster of related traits:
Bias toward action
Some people just do things. They don't wait for permission, training, or the "right" moment. When they see a tool, they pick it up and start tinkering. Others wait. For guidelines, for certification, for someone to tell them it's okay.
Self-directed learning
Autodidacts who figure things out versus people who need formal training to feel qualified. Research shows 87% of top Spotify musicians never had regular training with music teachers. Meanwhile, 69% of developers are at least partially self-taught. The pattern repeats: in fields where demonstrated ability matters, those who learn without permission thrive.
Entrepreneurial mindset
Creating opportunities versus filling existing roles. Only about 8% of the population is truly entrepreneurial. Not defined by starting companies, but by the disposition to act without asking whether they have the right permits, certifications, or approvals.
This isn't about credentials. It's about whether you see yourself as someone who uses tools or someone who does things by the book. That distinction is about to matter more than ever.
The Acceleration Is Happening Now
The gap is no longer theoretical. OpenAI's enterprise data shows a 6x productivity gap between power users and median workers. Top coders send 17x more queries than typical peers. Workers saving 10+ hours per week use multiple models, more tools, and expand into domains previously inaccessible to them.
The research firm's conclusion: "Frontier workers are not just doing the same work faster; they appear to be doing different work entirely."
This shows up in career outcomes. Among daily AI users, 52% report salary gains, versus 32% of infrequent users; 58% report improved job security, versus 36%. The BCG/Harvard study found AI users completed 12% more tasks, 25% faster, at 40% higher quality.
Here's the uncomfortable part: training closes the gap temporarily, but underlying personality differences persist. Trained employees use AI at a 93% rate versus 57% for the untrained, and they save 28% of their time versus 14%. But early adopters, the high-agency types, had already figured it out before training existed.
The case studies are stark. Maor Shlomo built Base44 as a solo founder and sold it for $80 million within six months. Pieter Levels runs a $3M/year empire with zero employees across 40+ products. David Holz's Midjourney hit $500M annual revenue with roughly 40 people. One-person billion-dollar companies are on the near horizon.
These aren't genius-level intellects with decades of experience. They're high-agency people who treat AI as a multiplier for what they were already doing: taking action, learning as they go, building without waiting.
Meanwhile, freelance writing gigs dropped 42% since 2021. Copywriting is down 36%. Customer service roles are being cut by the thousands. The jobs disappearing fastest are the ones that look like following instructions. And following instructions is exactly what AI does better than any human.
Agency Is Learnable (But Hard)
Here's the question you're probably asking: Can I change? Or is agency something you either have or you don't?
The research is clear: agency is learnable. But it's not easy.
Psychologists break agency into constructs like self-efficacy (believing you can do things), locus of control (believing your actions determine outcomes), and personal initiative (acting without being told). All of these are malleable.
The most striking evidence comes from a randomized controlled trial in Togo. Entrepreneurs received either traditional business training or personal initiative training targeting the mindset of self-starting, anticipating problems, and overcoming barriers. The personal initiative group saw 30% profit increases. The traditional training group? 11%, not statistically significant.
Mindset training beat skills training nearly 3-to-1.
Other studies show locus of control shifting through outdoor education programs and therapy. Growth mindset interventions, even ones under an hour, improve grades and course selection in students. Self-efficacy responds to guided internet-based interventions.
But there's a catch. Neuroscience research reveals something uncomfortable: the brain's default state is to assume control is not present. Agency isn't natural. It's learned. And it can be unlearned through prolonged experiences of helplessness.
This is why circumstances matter. If you were raised in an environment that punished initiative, taught you to wait for permission, or consistently demonstrated that your actions didn't affect outcomes, you're starting from a harder position. That's real.
And yet the choice still exists. The research shows that mindset can change. The question is whether you'll do the work to change it.
Which Path Will You Choose?
Two futures are diverging.
In one, you keep waiting. Waiting for training. Waiting for guidelines. Waiting for AI to "mature." Waiting for your company to tell you what to do. Waiting for the job market to sort itself out. Waiting for regulation to protect you.
In that future, you compete against systems that follow instructions flawlessly, 24 hours a day, for fractions of a penny. You have skills, sure. But so does the model trained on all of human knowledge. Your main differentiator is that you're expensive and slow.
In the other future, you start using tools today. You experiment with what works. You build things without asking permission. You treat AI as a multiplier for your judgment, your creativity, your ability to identify what's worth doing. You don't wait to be taught. You teach yourself.
In that future, AI makes you more valuable, not less. Because you're not the person following instructions. You're the person deciding which instructions matter.
The question isn't "will AI take your job?" The question is: are you the kind of person who uses tools, or the kind who gets replaced by them?
The time to choose is now. Not next quarter. Not when your company figures out its AI strategy. Not when the technology is "ready." Now.
I don't know which kind of person you are. Neither, probably, do you. Most people have never had to find out.
But you're about to.