
As generative artificial intelligence continues to gain purchase within higher education, much digital ink has been spilled attempting to predict what AI might mean for teaching, learning and the future of the academy. Media commentary has focused largely on AI’s power, promise and, oftentimes, its pitfalls. Discussions both on and off campus have fixated on which AI platforms to use and what outcomes they might produce.
Yet too little attention is being paid to what humans actually need from AI. In parallel with discussions over how to regulate AI on their campuses, institutions should be asking how they can use this emerging technology to support their students and faculty. If higher education wants to make better use of AI, it needs to flip the script on these urgent conversations.
As institutions slowly develop their thinking on the topic (less than a quarter of U.S. institutions have implemented acceptable AI usage policies, according to a 2024 EDUCAUSE survey), many faculty either permit their students to use any AI app whenever they’d like or ban AI altogether. Neither is a sustainable or responsible approach. This lack of intentionality has led to confusion around academic integrity, disagreement over what constitutes appropriate AI use, and increased risk for institutions that crack down on AI before establishing consistent guidelines.
The simplest approach would be to lower the proverbial drawbridge, to let a thousand flowers bloom, to permit students to run wild. It’s an easy case to make. Nearly every college student now uses AI. And in the three years since ChatGPT stormed the stage, students have gotten better every semester at using it to search for information, check the grammar in their assignments and summarize documents they need for class.
Yet students are conflicted. I believe they don’t want to cheat. They also don’t want to forgo a powerful and convenient tool that will support their coursework and sharpen their skills.
Institutions need to put forth an alternative to the all-or-nothing approaches that pervade college campuses. That means permitting, and encouraging, AI use in classrooms and research labs where it addresses actual challenges and solves real problems. Doing so requires institutions to ask students and faculty how AI can help them become better learners, better teachers and better scholars.
The good news, based on my experience in higher education, is that most people want AI to be used responsibly. Faculty in particular are interested in seeing whether AI can augment their teaching and, in turn, help their students learn a little better. But many professors are rightly intimidated, and maybe a little demoralized, by the technology’s rapid acceleration. Faculty, along with their departments and institutions, are under immense budgetary and political pressure to deliver better-every-year returns on the tuition and tax dollars they receive. They’re also dealing with myriad vendors who often promise grand AI solutions to problems that have little to do with the real challenges of pedagogy and scholarship.
Some institutions are already demonstrating how AI can address real challenges. The University of North Texas (UNT), where until recently I served as vice president for digital strategy and innovation, approached AI from a faculty-first standpoint. Specifically, professors asked how the emerging technology could help them provide more impactful writing instruction.
Writing is a fundamental, trusted and valuable form of assessment. It shows how well students can build arguments, analyze data, express their ideas clearly and concisely, and think critically about the topic at hand. Yet despite that trust, assigning papers in the high-enrollment classes that predominate in higher education today can be a fool’s errand. The grading simply takes too long for a single professor, even one fortunate enough to have the support of a teaching assistant. In these scenarios, writing assignments are often dropped in favor of lesser assessments.
To solve this problem, UNT adopted an AI-powered learning platform to manage online discussions for classes, and we wondered how instructor follow-up might affect student writing. In a three-semester study of nearly 15,000 discussion board posts made by more than 800 students, UNT researchers found that the presence of AI-powered evaluation coincided with students digging more deeply into topics and writing more detailed questions and answers for discussion board assignments. The study also found that struggling students who received private coaching from their instructors subsequently produced longer and higher-quality posts. The researchers concluded that the right AI-driven tools, in the proper situations, enabled professors to assign writing again and freed up time for targeted, creative human interventions that can have lasting effects.
It’s tempting for institutions to take an either-or approach to AI because both options are simple to implement and enforce. But as AI becomes more pervasive in higher education and throughout all other facets of modern life, institutions should be seeking ways to promote responsible use of this powerful new technology. In addition to discussing AI rules and regulations, institutions need to center their conversations on people and pedagogy.
Adam Fein is an adviser for the AI-powered learning platform Packback, senior vice president for product innovation and partner solutions for the online learning company Risepoint, and a former vice president of digital strategy and innovation for the University of North Texas.