AI Adoption in Higher Education Is Outpacing Shared Standards

Artificial intelligence (AI) has arrived in the academy with the force of a structural disruption, and higher education, for all its claims to adaptability, is genuinely unprepared. From the drafting of application letters to the grading of final assessments, AI is reshaping the pedagogical landscape faster than institutions can establish coherent AI literacy frameworks for engaging with it. Business schools, historically positioned as bellwethers of applied innovation, find themselves in a peculiarly uncomfortable position: championing AI systems whose implications they cannot yet fully articulate, let alone oversee.

The Proliferation of AI Courses Outpaces Policy

The problem is not a lack of enthusiasm. Courses with AI in the title have proliferated sharply across graduate business programs, and the roster of institutional experiments is growing. Cambridge’s Judge Business School has developed an AI-powered teaching case in which students hold real-time dialogue with simulated C-level executives. IMD built a GPT for its MBA cohort that students can use as a “sparring partner”. At HEC Paris, professors are cautiously exploring AI as a grading assistant. Northeastern University is teaching AI literacy to “fellows” within the faculty so they can cascade that knowledge across departments. By one estimate from the Northeastern Center for Inclusive Computing, nearly 730 AI-focused undergraduate computing programs now exist across U.S. universities alone. The groundswell is real.

What is conspicuously absent, however, is a shared framework for evaluating any of it. The Digital Education Council’s recent attempt to propose benchmarks, spanning topics such as curriculum integration, pedagogical innovation, institutional governance, and ethical accountability, reflects a growing recognition that adoption without measurement is, at best, chaotic and, at worst, counterproductive. Yet translating those principles into frameworks to prepare students with AI literacy skills and into workable metrics is where the difficulty concentrates.

How Students Are Actually Using AI in Higher Education

The question of what students are actually using AI for in the classroom cuts to the heart of the matter. Research by Anthropic, analyzing one million anonymized student conversations with its widely used LLM, Claude, found that nearly half (approximately 47%) were “direct”: prompts plainly seeking answers or content, with minimal further engagement.

A significant proportion were using AI tools purely transactionally, outsourcing critical thinking and offloading analysis and problem-solving rather than treating AI as an interlocutor to deepen their understanding. The distinction is epistemologically significant: there is a meaningful difference between using a tool to extend one’s thinking and using it to circumvent that thinking entirely. The latter may produce a competent-looking assignment while leaving the cognitive work undone.

Meanwhile, the DEC’s 2024 Global AI Student Survey, drawing on responses from 3,839 students across 16 countries, found that only 5% of students were fully aware of their institution’s AI guidelines and considered them comprehensive. This spotlights a striking gap between tool adoption and institutional guidance: 86% of students use AI in their studies, and 54% use it weekly, yet the governance infrastructure to support that usage responsibly is, in most institutions, still embryonic.

How Faculty Are Using AI in Business Schools and Universities

Among educators, the patterns are somewhat more encouraging, though not without their own concerns. Anthropic’s companion research on educators, based on approximately 74,000 anonymized conversations, found that many faculty use AI technologies collaboratively, for developing course materials and drafting grant applications. But a notable fraction have delegated grading to automated systems that, by their own admission, are poorly suited to the task. Students are not the only ones succumbing to the temptation to let the machine handle the labour-intensive tasks.

The DEC’s 2025 Global AI Faculty Survey adds further texture to this picture: 83% of faculty expressed concern about students’ ability to critically evaluate AI-generated outputs, while 80% reported a lack of institutional clarity on how to teach students to be AI literate. The governance gap, in other words, affects those doing the teaching as much as those being taught.

The AI Governance Gap in Higher Ed and What It Means

What emerges from all of this is a tension that no benchmark can fully resolve. The pressure to integrate AI fluency into every dimension of education collides with the absence of any consensus on what AI-mediated learning actually looks like when it is working well. 

One dean speaks of “infusing AI into everything” while acknowledging there is no road map. Schools pilot divergent policies, some permissive, some restrictive, all without a common evaluative language for comparing results.

The ethical dimension hangs over all of it. Questions around responsible AI, academic integrity, privacy, and the uneven distribution of AI access along institutional resource lines remain structurally unresolved. Only 4% of faculty report being fully aware of their institution’s AI guidelines and finding them comprehensive. This figure should give pause to any administrator who believes policy documents alone constitute governance.

This is not an unprecedented situation. New technologies have always arrived ahead of the institutions designed to govern them. What is unusual in this case is the speed of the disruption and the degree to which it implicates the fundamental purpose of education itself: the cultivation of mind, not merely the production of output. 

Getting the standards for AI literacy right matters now more than ever. The alternative is an institution that has automated the appearance of learning while quietly allowing the substance of education to erode.

