Universities worldwide are racing to establish policies, protocols, and dedicated centers to govern AI use while fostering human-centered AI literacy. As AI becomes more entrenched in education and employment, universities will play a vital role in preventing AI from replacing human thinking while leveraging its power to augment learning for long-term benefits to society. In these early years of AI, universities are developing a rapidly evolving ecosystem of institutional frameworks, AI literacy requirements, and specialized centers to respond to our new “age of AI.”
Developing AI Disclosure Protocols
Most faculty and institutions immediately recognized the threats AI poses to academic integrity and have developed a patchwork of policies aimed at using AI to supplement thinking rather than replace it. Washington State’s K-12 AI policy requires students to disclose and acknowledge AI assistance. The University of Arizona’s policies mandate that students document their process and reflect on AI’s influence on their work, while faculty are encouraged to maintain open dialogue about AI, coursework, and integrity. In the UK, the 24 Russell Group universities developed shared principles in 2023 to ensure students become “AI-literate” and ethical in their usage. Harvard’s MetaLAB proposed an AI Code of Conduct in which students list the AI tools used, explain how each was deployed for ideation, research, editing, or debugging, describe how prompts and outputs were structured, and disclose where AI-generated content appears in submitted work.
Students are already far ahead of faculty in AI usage. One 2024 study showed that 86% of students use AI in their work, 54% weekly, and nearly 25% daily. At the same time, ITHAKA’s 2025 survey data shows that while 72% of instructors have experimented with AI, only 14% feel confident in using it effectively, and only 28% of institutions have formal AI policies, with another 32% actively developing them.
Building AI Literacy Across Campus
Several institutions have already taken proactive approaches, however. Ohio State University’s AI Fluency Initiative aims to make students “bilingual” in AI and its applications within their major field, beginning with AI integrated into first-year seminars and continuing through dedicated courses such as “Unlocking Generative AI,” while its Teaching and Learning Center helps faculty integrate AI across the curriculum. The University of South Carolina has established a 12-credit-hour AI Literacy Certificate to prepare students for an AI-driven job market. SUNY has adjusted its General Education requirements to include AI ethics and literacy, requiring students to understand how AI impacts “the ethical dimensions of information use, creation and dissemination.” The University of North Carolina at Chapel Hill has developed a platform for AI resources, including teaching guidelines and listings of AI-intensive courses. Carnegie Mellon’s AI for Humanity course emphasizes some of the limitations of AI and encourages AI uses for “human education and social good.”
Dedicated University AI Centers and Initiatives
Beyond policies and frameworks, many leading universities have developed AI centers and institutional initiatives to catalyze innovative and human-centered uses of AI in education and research. ASU’s comprehensive AI ecosystem, developed in partnership with OpenAI, includes an AI Innovation Challenge that has received over 530 proposals and launched over 250 projects to advance AI use in research, operations, and pedagogy. ASU’s ChatGPT-powered chatbot SAM helps health science students improve patient-provider interactions, and an AI Writing Companion provides real-time writing feedback for students. Tech CEO and recording artist will.i.am has partnered with ASU to bring the FYI AI system to campus, which uses voices from underrepresented communities to make AI interactions more inclusive.
MIT’s Responsible AI for Social Empowerment and Education (RAISE) initiative has developed tools for K-12 educators that integrate hands-on AI and robotics experience while emphasizing responsible AI use. MIT’s annual AI Education Summit gathers thought leaders from around the world to discuss AI usage, and MIT’s Advancing Humans with AI (AHA) initiative addresses how humans respond to AI systems as both an engineering challenge and a human design problem.
Developing Human-Centered Approaches to AI
Stanford’s Human-Centered AI Institute (HAI) is built upon three pillars: understanding AI’s societal challenges, augmenting human capabilities, and developing AI aligned with human language and emotions. HAI has sponsored hundreds of AI research grants across campus, houses the Center for Research on Foundation Models and the Stanford Digital Economy Lab, and partners with the historic Stanford AI Lab (founded in 1963). Its globally recognized AI Index Report tracks AI’s technical, economic, and societal impacts. UC Berkeley’s Center for Human Compatible AI (CHAI) was founded in 2016 by AI pioneer Stuart Russell, inspired by his belief that “in the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans.” CHAI brings together dozens of faculty investigators and affiliates to provide “the conceptual and technical wherewithal to reorient AI research towards provably beneficial systems.” Carnegie Mellon University’s Generative AI Teaching as Research (GAITAR) initiative builds on CMU’s long history of developing adaptive learning tools, intelligent tutoring systems, and personalized learning experiences that demonstrably improve student learning. CMU’s Block Center for Technology and Society has partnered with MIT’s Future Tech initiative to study AI’s workforce impact, helping workers benefit from AI while mitigating job displacement.
Institutional AI Resources for Faculty and Students
UT Austin developed UT Sage, a platform that engages students in Socratic dialogue across subjects. UT’s “AI-Responsible AI-Forward” framework provides guidelines addressing AI’s challenges in privacy, hallucinations, bias, and ethics, as well as the risk of “cognitive offloading,” which can “diminish specific cognitive skills.” The University of Michigan is one of the first universities to provide a custom suite of closed generative AI tools, including the Maizey tutoring system, U-M GPT, the U-M GPT Toolkit, and the Go Blue mobile AI assistant. UM’s Maizey platform provides customized chatbots that integrate course materials, video lectures, exams, and grading policies, and the university offers access to all of the major large language models. These are all part of UM’s institution-wide effort to develop AI with “inclusion, equitability and accessibility at the forefront.”
The collective impact of AI on higher education is only beginning to be felt. As Jose Antonio Bowen and C. Edward Watson, authors of Teaching with AI, warn, “neither ‘just say no’ nor ‘figure it out on your own’ will suffice.” The institutions mentioned above provide just a glimpse of how higher education is grappling with how to harness AI’s transformative potential while preserving human agency, creativity, and critical thinking. These advances, and work at thousands of other universities and colleges, will help shape the “Age of AI” to deepen student learning and advance human capacities in our future.
