If the 20th century belonged to the textbook, the 21st belongs to the prompt. In lecture halls from Toronto to San Diego to Ho Chi Minh City, students are already co-writing their education with algorithms: nearly 80% of undergraduates worldwide use generative AI, often daily. What’s missing is not adoption; it’s alignment.
While students are busy teaching themselves AI, most universities remain frozen between prohibition and pilot. Eighty percent of students report that they have no structured AI support for teaching or learning, even as employers accelerate toward AI-mandatory job descriptions. This is more than a skills gap. It’s pedagogical infrastructure debt—every semester without faculty readiness compounds the cost and complexity of catching up.
We’ve seen similar patterns of lagging technology adoption in past waves of edtech innovation. When learning-management systems first appeared, they were largely framed as administrative upgrades. For many institutions, that framing worked: LMSs streamlined workflows, centralized compliance, and made it easier for faculty to post resources and communicate with students. Today, 99% of colleges report having an LMS, and 87% of faculty use one. But most of that use remains logistical rather than pedagogical—more about distributing syllabi and quizzes than redesigning courses around new capabilities.
When MOOCs surged a decade ago, they unlocked access for millions who might never set foot on campus. That was a genuine democratization of content. But most universities launched them without fundamentally rethinking teaching models. Completion rates were dismal, typically between 3% and 15%, and many MOOCs ended up as repackaged lectures with minimal interaction. Instructors logged 100+ hours preparing courses and spent 8–10 hours a week maintaining them, yet the gains in learning outcomes were modest.
Neither the LMS nor the MOOC era was a failure—they brought efficiency, visibility, and reach. But they also carried an opportunity cost. In both cases, much of the narrative, innovation, and even data ownership shifted to external platforms.
That loss of control mattered less when the stakes were content delivery. With AI, the stakes are cognitive—determining how future leaders will think, decide, and solve problems. If institutions approach AI as a bolt-on feature rather than a faculty-driven transformation, they risk outsourcing not just content delivery but the very definition of academic rigor. AI could follow the same trajectory as past edtech waves—unless we change who’s in the driver’s seat.
Faculty As The Missing Link In The AI Workforce Pipeline
In boardrooms and government offices, the conversation about AI is no longer about if but how fast. McKinsey estimates AI could add up to $23 trillion annually to the global economy by 2040, with gains concentrated in sectors that can re-skill their workforces quickly. But here’s the thing: Employers don’t want graduates who can’t use AI. GMAC’s latest survey of 1,100 employers reveals that AI fluency is rapidly becoming a hiring requirement across industries, making this educational gap not just pedagogical but economic.
That fluency doesn’t come from a few elective workshops or a tech bootcamp; it’s forged in classrooms and research centers through sustained, discipline-specific learning. And the gatekeepers are faculty. If we treat AI integration as a technology problem, we get disconnected pilots and compliance documents. If we treat it as a talent-pipeline problem, we invest in faculty as the designers of tomorrow’s workforce capabilities.
Where Faculty-Driven AI Works
At the University of Toronto’s Rotman School, a small teaching team decided that if students were going to use AI, it should be on the faculty’s terms. They trained “All Day TA” entirely on their own course materials, including lectures, readings, and problem sets, so that when students posed questions, the answers came through the same conceptual frameworks they’d be tested on. By semester’s end, the assistant had fielded over 12,000 questions. Instructors weren’t replaced; they were relieved of repetitive clarifications and freed to focus on the discussions that require a human mind.
Half a world away at British University Vietnam, the starting point was different: academic misconduct reports were rising, and administrators worried that AI use was undermining rigor. Instead of banning the tools, faculty built the AI Assessment Scale, a visible cue on every assignment indicating whether AI was prohibited, permitted, or required. Students no longer had to guess whether a chatbot was fair game. The clarity did more than curb misconduct—it lifted average attainment by nearly six percent and raised pass rates by a third. Faculty didn’t lower the bar; they made sure everyone knew exactly where it was and how to clear it.
And then there’s UC San Diego, where leadership has moved beyond isolated experiments to an institutional commitment that puts faculty at the center. This fall, select instructors will receive AI assistants built on the university’s secure TritonGPT platform—trained on instructor-uploaded syllabi, readings, and lecture notes, and tuned to each professor’s voice and pedagogy. The AI engages students in Socratic dialogue, guiding them toward insights rather than simply delivering answers.
“Faculty are using the platform to generate course content from their own materials, while students can access virtual tutors,” said Brett Pollak, Executive Director of Workplace Technology & Infrastructure Services at UC San Diego. For faculty, the system is both a teaching tool and a feedback loop, with detailed analytics showing how students engage with the material, revealing where they struggle and where they’re ready for more challenge. “Our next step is to scale adoption through an opt-in approach,” Pollak said.
UC San Diego’s infrastructure leverages “low-cost and open-source frameworks” to create AI agents that streamline campus administration and securely connect institutional data to large language models, “without relying solely on commercial licenses,” Pollak added.
The same philosophy extends to administrative work. One homegrown tool, Contract Reviewer, automatically applies university policy to routine agreements, cutting review time and reducing bottlenecks. The message is clear: AI is here to enhance the full scope of faculty work, from the classroom to the committee room.
The Time Is Now
The window for proactive AI integration is closing rapidly, and the pressure on institutional leadership is mounting. EDUCAUSE’s 2025 survey of technology leaders reveals that 75% report excessive workloads while only 16% believe they have sufficient staff to meet their goals. More tellingly, when it comes to AI professional development, institutional priorities lean heavily toward “mitigating risks rather than supporting opportunities”: exactly the defensive posture that may leave faculty to navigate AI integration alone.
If colleges fail to activate faculty now, students will continue learning AI informally—without the ethical grounding, domain rigor, or reflective habits that distinguish competent professionals from sophisticated prompt-writers. Nearly half of technology leaders (45%) now prioritize “supporting and securing emerging technologies such as AI” as their top professional development need—yet most remain reactive, focusing on governance, ethics, and compliance. Far fewer are seizing the opportunity to help faculty apply AI in research (30%), teaching (30%), or student services (24%).
The path forward isn’t “AI or integrity”—it’s integrity through intentional AI, led by faculty. Universities like UC San Diego, alongside focused successes at Toronto’s Rotman School and British University Vietnam, prove that comprehensive, faculty-centered AI integration isn’t just possible; it’s already happening.
Faculty are the engine. Give them the vision, structures, and support they need, and they won’t just transform higher education; they’ll secure our place in the AI-driven economy.