12 February 2024
After my junior year of college—my first fully normal year as a college student—I was convinced that nothing could ever be as transformative or disruptive as the COVID pandemic, which temporarily upended and restructured nearly everything about the college experience.
However, the arrival of widely available artificial intelligence systems during my senior year made me rethink this notion.
I do not mean to minimize the awfulness of the pandemic: the loss of life, the debilitating isolation, the job loss. The concentrated effects of the pandemic cannot compare to anything I have experienced in my lifetime. The span of events known as the pandemic could turn out to be my generation’s most singular experience.
Universities eventually rebounded from COVID, though. Classes have largely returned to in-person instruction, mask mandates are no longer in effect, and gone are the all-encompassing rules that governed students’ lives outside the classroom.
However, artificial intelligence—namely OpenAI’s ChatGPT—will not go away. Universities will not “rebound” from the sudden prevalence of these language models; they will be forced to adapt to them. It now seems that my college experience was bookended by two transformative happenings: the COVID pandemic and the advent of artificial intelligence in education.
AI products have been around for decades, but what makes this year different is the successful deployment of large language models capable of generating content that can pass as human-authored. As word of the impressive results of the latest version of ChatGPT spread, students and professors immediately began using it in all kinds of ways, creating content in minutes that would normally take hours or even days.
But with the convenience comes concern. “Hallucinations” in the content—sources and material invented by the program—bedevil results. And professors are leery of students relying on ChatGPT too much—to cheat by using it to complete entire assignments or essays, for instance.
Schools are already crafting acceptable use policies for generative AI systems. They vary in content, but most delegate the responsibility of crafting a policy to individual professors. Many offer advice on how to better craft essay prompts to avoid entirely AI-generated responses, and some institutions point to existing academic integrity frameworks for guidance.
I have yet to see, however, an American university issue a campus-wide, exceptionless ban on the technology (as did Sciences Po, the French university).
It would be shortsighted of administrators to think they could keep artificial intelligence away from classrooms. The lack of total AI bans in the collegiate world indicates that AI is here to stay. There’s no alternative to grappling with it.
I worked on essays with the help of ChatGPT and can attest to its strength as a collaborative tool. It is remarkably helpful with sentence restructuring, proofreading, argument scaffolding and research, and it can summarize long articles, distilling them into their most important points. I’ve also seen it hallucinate, referencing scholarly journals and articles that do not exist.
ChatGPT is also adept at coding and math, and I’ve been dumbfounded to see it solve, in just minutes, coding prompts that would have taken a human hours.
Even given the need to closely fact-check results, it’s clear AI platforms will dramatically increase efficiency for both students and teachers, hopefully allowing them to spend less time on menial tasks and more time on intellectually stimulating ones.
The plagiarism issue is more sensitive. The grading system at the heart of the modern higher-ed experience requires that professors be able to judge the work of individual students accurately and compare it against that of their peers and against accepted standards. Widespread cheating would shred that system. It’s no wonder ChatGPT unnerves classroom professors.
Tension over plagiarism is nothing new—whether it be from a friend, a parent or an online service, students have always found opportunities to turn in the work of others as their own. But given only vague guidelines, the easy availability of ChatGPT increases that temptation. Professors will find the job of fostering academic integrity in their classrooms that much more complicated; many have already turned to AI detectors, which have so far proved unreliable.
At the urging of the White House, tech companies including OpenAI recently signed a voluntary agreement to, among other things, include watermarks on AI-generated content. But that alone will do little to curb illicit uses. Until dependable detection technologies are developed, or universities issue clearer guidance, this tension will linger.
Maybe it has no solution. New chatbots are produced every day, and with every iteration of ChatGPT, the model gets smarter, built on much larger language databases. Maybe this will become a perpetual battle between students trying to outfox teachers with smarter chatbots, and teachers responding with more accurate detection technologies. The work teachers assign may itself have to change to accommodate the new capabilities students have at their disposal.
It’s far too early to make definitive predictions. But it’s a safe guess that artificial intelligence is no death knell for education. Instead, like COVID, it will remain a presence in the lives of professors and students, transforming the college experience in ways both large and small.
Eli Mundy is an Executive in Brunswick’s New York office.