The Future of Work with AI Agents — Insights from a Stanford Study
Critical Mismatches & Opportunities for AI Agent Development
A recent study from Stanford University offers a fresh perspective on how AI Agents might shape the future of work.
Conducted between January and May 2025, the research introduces a comprehensive framework that balances worker preferences with technological capabilities, providing a roadmap for the evolving workplace.
Research indicates that around 80% of U.S. workers may see LLMs affect at least 10% of their tasks.
A New Language for Human Involvement
At the heart of the study is the Human Agency Scale (HAS), a five-level system that offers a shared language for quantifying the degree of human involvement desired in various tasks.
Ranging from H1 to H5, this scale moves beyond the traditional "automate or not" debate, distinguishing tasks where AI excels at full automation (H1-H2) from those where human agency remains essential for augmentation (H3-H5).
This nuanced approach reveals that different levels of human input suit different AI roles, challenging the notion that higher automation is always preferable.
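The H1-H5 split described above can be sketched as a simple lookup. Note this is an illustrative paraphrase, not the study's exact level definitions: the descriptions and the `has_category` helper are my own labels for the automation (H1-H2) versus augmentation (H3-H5) divide the article describes.

```python
# Illustrative sketch of the Human Agency Scale (HAS): five levels, where
# H1-H2 lean toward full automation and H3-H5 toward human-in-the-loop
# augmentation. Level descriptions are paraphrased, not quoted from the study.
HAS_LEVELS = {
    "H1": "AI agent handles the task essentially on its own",
    "H2": "AI agent needs minimal human input",
    "H3": "Equal human-AI partnership",
    "H4": "AI agent requires substantial human input",
    "H5": "Human involvement is essential throughout",
}

def has_category(level: str) -> str:
    """Map an HAS level onto the automation/augmentation split."""
    return "automation" if level in ("H1", "H2") else "augmentation"

print(has_category("H2"))  # automation
print(has_category("H4"))  # augmentation
```

The point of the scale is exactly this kind of shared vocabulary: a task is not simply "automatable or not", it sits somewhere on a five-point spectrum.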
Usage data from Anthropic indicates that in early 2025, at least some workers in 36% of occupations were already using AI for at least 25% of their tasks.
Diverse Expectations Across Occupations
The findings uncover a wide variety of HAS profiles across different jobs, reflecting varied expectations for human involvement.
Workers often prefer higher levels of agency than experts deem technologically necessary, suggesting a gap that could influence AI development priorities.
This mismatch points to critical opportunities for tailoring AI Agents to meet human desires, rather than forcing a one-size-fits-all solution.
For 46.1% of tasks, workers express positive attitudes toward AI agent automation.
The study also explores which occupations stand out on the HAS, hinting at unique collaboration patterns between humans and AI.
Workers envision a future where AI handles repetitive, low-value tasks, freeing them to focus on more meaningful work.
The research suggests that AI Agents could fundamentally reshape core human competencies, shifting the focus from information management to interpersonal strengths.
This shift could redefine workplace skills, with technical and information-heavy abilities becoming vulnerable, while interpersonal skills like planning, teaching and communication rise in value.
Where do workers resist AI agents? 28% of workers express negative sentiment about AI agent automation in their daily work.
Top concerns:
Lack of trust in AI accuracy, capability, or reliability (45%).
Fear of job replacement (23%).
Absence of human qualities in AI, such as human touch, creative control, and decision-making agency (16.3%).

Skills Most Vulnerable to AI
Below, a striking visualisation compares skills based on average wages and required human agency.
Green lines indicate skills that gain rank when judged by human involvement rather than pay, suggesting roles whose human input is undervalued in wages.
Red lines highlight well-paid skills that rely less on human effort, often tied to automation-friendly tasks like data processing.
This analysis signals a potential revaluation of skills, steering the workforce toward human-centred competencies.
Which skills are most vulnerable to AI… and which skills will increase in value?
This image compares workplace skills based on two different factors: how much they pay (average wage, shown on the left) and how much human involvement they require (human agency, shown on the right).
Each line represents a specific skill, or what the study calls a “Generalised Work Activity”.
Green lines show skills that move up in rank when judged by human agency compared to wage, suggesting these skills require more human involvement than their pay might reflect.
Red lines show skills that drop in rank, meaning they pay well but involve relatively less human input, often tied to data processing or automation-friendly tasks.
Overall, the image suggests a shift in what might be most valued in the future: from technical or information-heavy skills toward more human-centred skills like planning, teaching and interpersonal communication.
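The green/red comparison above boils down to ranking each skill twice, once by wage and once by required human agency, and inspecting the rank shift. A minimal sketch of that calculation, using invented scores for a few illustrative activities (none of these numbers come from the study):

```python
# Illustrative only: made-up wage and human-agency scores for a handful of
# "Generalised Work Activities", to show how the chart's rank shifts arise.
skills = {
    "Processing information":    {"wage": 0.82, "agency": 0.35},
    "Training and teaching":     {"wage": 0.55, "agency": 0.80},
    "Analysing data":            {"wage": 0.75, "agency": 0.40},
    "Communicating with others": {"wage": 0.50, "agency": 0.85},
}

def rank_by(key):
    # Rank 1 = highest score on the chosen dimension.
    ordered = sorted(skills, key=lambda s: skills[s][key], reverse=True)
    return {s: i + 1 for i, s in enumerate(ordered)}

wage_rank = rank_by("wage")
agency_rank = rank_by("agency")

for s in skills:
    shift = wage_rank[s] - agency_rank[s]  # positive = moves up ("green line")
    colour = "green" if shift > 0 else "red" if shift < 0 else "flat"
    print(f"{s}: wage rank {wage_rank[s]}, agency rank {agency_rank[s]} ({colour})")
```

With these invented scores, information-processing work ranks high on pay but low on human agency (a red line), while teaching and communication move the other way (green lines), mirroring the pattern the chart describes.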
Skills in Transition
The image below shows the distribution of tasks based on two dimensions:
Worker-Desired HAS Level (rows): the Human Agency Scale (HAS) level that workers prefer, ranging from H1 to H5.
Expert-Assessed Feasible HAS Level (columns): the HAS level that experts deem technologically feasible, also ranging from H1 to H5.
The largest concentrations sit on the diagonal, where worker preference and expert feasibility match: 114 tasks at H3 desired/H3 feasible and 112 tasks at H2 desired/H2 feasible.
Lower numbers (e.g., 0 or 1) appear where there is a mismatch, such as H1 desired/H5 feasible or H5 desired/H1 feasible.
This suggests a general alignment between worker preferences and expert feasibility, with the most tasks clustering around H2 and H3 levels.
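The grid described above is effectively a 5×5 contingency table over (desired, feasible) pairs, with agreement on the diagonal. A minimal sketch of the tallying, with a small invented task list (the pairs below are illustrative, not the study's data):

```python
from collections import Counter

# Illustrative only: each task carries a worker-desired HAS level and an
# expert-assessed feasible HAS level; the study tallies these into a 5x5 grid.
tasks = [
    ("H2", "H2"), ("H2", "H2"), ("H3", "H3"),
    ("H3", "H2"), ("H2", "H3"), ("H5", "H1"),
]

grid = Counter(tasks)  # (desired, feasible) -> task count

# Diagonal cells mean worker preference and expert feasibility agree.
aligned = sum(n for (d, f), n in grid.items() if d == f)
print(f"{aligned} of {len(tasks)} tasks aligned")  # 3 of 6 tasks aligned
```

Cells far off the diagonal, such as ("H5", "H1"), are the mismatches the study flags: workers want heavy involvement in a task that experts consider fully automatable, or vice versa.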
Implications & Challenges
As noted earlier, the research suggests AI Agents could shift core human competencies away from information management and toward interpersonal strengths.
This evolution calls for proactive workforce development, including reskilling and retraining programs to prepare workers for new dynamics.
However, the study acknowledges limitations. It relies on existing occupational data, which may not account for emerging tasks spurred by AI.
Additionally, workers’ responses might reflect limited awareness of AI’s evolving capabilities, or anxiety about job security; the study mitigates this by prioritising worker perspectives and ensuring robust representation.
As technology advances, this landscape will shift, necessitating future audits to track long-term trends.
By placing workers at the centre, the study advocates for a collaborative approach, empowering them to shape AI systems that reflect their values and concerns.
Looking Ahead
This Stanford study provides a timely baseline for understanding the interplay between AI Agents and human work.
As the workplace continues to evolve, its findings encourage a balanced approach — leveraging AI’s potential while preserving the human element.
Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.