Artificial intelligence is moving quickly into the workplace, but not always in the ways people expect. A new study out of Stanford University, which surveyed 1,500 U.S. professionals across 104 occupations, offers a rare, detailed look at how workers across industries want AI agents to be used in their jobs. Instead of asking what AI could automate, researchers asked workers what they would prefer it to automate or augment, and how much human involvement should remain.
Workers flagged roughly 46% of tasks as appropriate for automation, particularly repetitive or time-consuming activities such as appointment scheduling, routine reporting, and data entry. In 45% of occupations, the most common preference was an equal partnership between human and AI. This suggests strong interest in AI systems that collaborate rather than replace.
These findings have immediate implications for higher education. The sectors studied reflect many of the careers students are preparing to enter.
Universities Seizing The Agentic AI Advantage
McKinsey’s Seizing the Agentic AI Advantage report notes that while 78% of companies have deployed generative AI tools, only a small fraction report meaningful impact. Most companies start with tools like Microsoft Copilot, ChatGPT, or Google Gemini. These are typically horizontal copilots—general-purpose tools for writing, summarizing, or brainstorming across many roles.
The issue is that many organizations stop there, using GenAI tools as assistants for individual productivity (e.g., helping an employee write emails or draft a document). These use cases often don’t change how work is structured, so the impact remains limited.
McKinsey contrasts this with agentic AI systems that are embedded into workflows, meaning they take action, make decisions within guardrails, and solve problems in a domain-specific, goal-oriented way (such as admissions, student advising, or academic research support). These vertical agents, built with clear integration into business processes, are what lead to meaningful impact.
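To make that distinction concrete, here is a minimal sketch of what “acting within guardrails” can look like for a vertical agent, in this case a hypothetical registration assistant. Every name, action, and policy below is an illustrative assumption, not a description of McKinsey’s framework or any vendor’s product.

```python
# Illustrative only: a toy "vertical agent" that acts autonomously on
# routine requests but escalates anything outside its guardrails.
# All action names and policies here are hypothetical.

ALLOWED_ACTIONS = {"send_deadline_reminder", "book_advising_slot"}

def handle_request(student_id: str, action: str, context: dict) -> str:
    """Perform a routine action autonomously, or route it to a human."""
    # Guardrail 1: the agent may only take actions on an approved list.
    if action not in ALLOWED_ACTIONS:
        return escalate(student_id, action, reason="action not permitted")

    # Guardrail 2: sensitive cases always get human judgment.
    if context.get("financial_hold") or context.get("academic_probation"):
        return escalate(student_id, action, reason="sensitive student record")

    # Within bounds: the agent acts on its own and logs what it did.
    print(f"[agent] completed '{action}' for student {student_id}")
    return "done"

def escalate(student_id: str, action: str, reason: str) -> str:
    """Send the case to a staff review queue instead of acting."""
    print(f"[agent] escalating '{action}' for {student_id}: {reason}")
    return "escalated"

if __name__ == "__main__":
    handle_request("S123", "book_advising_slot", {"financial_hold": False})
    handle_request("S456", "waive_tuition", {})  # outside guardrails, so a human decides
```

The specifics would differ by campus and system, but the shape is consistent: a narrow action space, explicit escalation rules, and a record of everything the agent does.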
At Georgia State University, for example, an AI agent named Pounce proactively reminds students about deadlines, financial aid steps, and registration. A randomized controlled trial showed that students who interacted with Pounce were 3% more likely to persist to the next semester. For low-income, Pell-eligible students, the intervention reduced the likelihood of earning a D or an F or withdrawing from the course (“DFW”) by around 20%.
The University of Michigan’s Ross School of Business has piloted a virtual teaching assistant built on Google’s Gemini model. The AI helps students reason through finance and analytics problems using guided prompts and Socratic questioning. It also provides instructors with insights on where students are struggling.
Penn State University is launching MyResource, an agentic AI assistant trained on institution-specific data that helps students navigate services across advising, mental health, financial aid, and more. The assistant will operate 24/7 and is designed to deliver accurate, personalized recommendations.
In admissions, the University of West Florida deployed an AI-powered recruiting agent that engages prospective students across multiple channels. The tool led to a 32% increase in graduate admissions yield. Also in admissions, Unity Environmental University’s agent Una guides prospective students through finding a program and completing an application, reducing friction in the enrollment process.
Beyond student-facing tools, InsideTrack, a national student success nonprofit, is developing an internal data agent that reads coaching notes and flags emerging themes for human staff to act on. It’s not a replacement for coaching—it’s a backend agent for surfacing patterns and reducing manual analysis.
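As a rough illustration of that pattern (a sketch under assumed inputs, not InsideTrack’s actual system), the snippet below shows a backend agent that scans free-text coaching notes for recurring themes and surfaces the counts to staff rather than acting on them itself. The theme keywords and note format are invented for the example.

```python
# Hypothetical sketch of a backend theme-flagging agent: it reads
# coaching notes, tags emerging themes, and queues them for human review.
# The themes, keywords, and note format are illustrative assumptions.

from collections import Counter

THEMES = {
    "financial stress": ["tuition", "bill", "aid", "afford"],
    "course difficulty": ["failing", "struggling", "midterm", "behind"],
    "disengagement": ["missed", "no-show", "unresponsive"],
}

def flag_themes(note: str) -> list[str]:
    """Return the themes whose keywords appear in a coaching note."""
    text = note.lower()
    return [theme for theme, words in THEMES.items()
            if any(word in text for word in words)]

def summarize(notes: list[str]) -> Counter:
    """Count theme frequency across all notes so staff see what is emerging."""
    counts = Counter()
    for note in notes:
        counts.update(flag_themes(note))
    return counts

if __name__ == "__main__":
    notes = [
        "Student missed two sessions and is behind on midterm prep.",
        "Worried about the tuition bill; asked about emergency aid.",
    ]
    # Humans act on the output; the agent only surfaces patterns.
    for theme, count in summarize(notes).most_common():
        print(f"{theme}: {count} note(s)")
```

A production system would likely use a language model or a trained classifier instead of keyword matching, but the division of labor is the point: the agent surfaces patterns, and humans decide what to do about them.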
While these examples show early results, many institutions remain cautious or unsure how to proceed. For institutions seeking to move forward, a few steps can help ensure responsible and strategic adoption:
- Start with clear use cases. Identify where students or staff experience friction—advising bottlenecks, administrative delays, repetitive outreach—and explore whether an AI agent could assist.
- Pilot and iterate. Small-scale trials, like using an agent in a course or a department, allow for safe experimentation. Monitor impact and adjust.
- Keep humans in the loop. Most successful deployments combine AI automation with human judgment. Set boundaries for where human oversight is required.
- Establish guidelines. Align AI adoption with institutional values. Clarify what is acceptable for coursework, communication, and data handling.
- Invest in training. Faculty, staff, and students need support in understanding how to work alongside AI. This includes both technical and ethical dimensions.
- Collaborate. Share learnings across institutions, especially as standards and practices continue to evolve.
The message from the workforce is clear: AI’s value is in working alongside humans—streamlining drudgery, supporting expertise, and amplifying what we do best.
Colleges and universities that want to prepare students for the reality of modern work must stop viewing AI as an add-on or a passing trend. Agentic AI is already shaping how admissions, advising, learning, and student support are delivered, and it is producing measurable results. Yet many institutions are still hesitating at the starting line, waiting for perfect answers.
The time to act is now. Start small, stay strategic, and put human needs at the center of every deployment. Pilot practical solutions, invest in skills and ethics training, and build on what works. Most importantly, ensure that every step forward is guided by the lived realities and aspirations of both students and staff.
The future of work—and higher education—will be defined by those who can leverage AI as a true collaborator. As agentic AI moves from buzzword to campus backbone, the colleges and universities willing to lead will shape not only their own futures, but the futures of all those they serve.