The people sitting next to you in class, presenting as ordinary students, are using artificial intelligence to coast through an education they pay thousands of dollars a year for. The growth of artificial intelligence (A.I.) and generative chat-style models such as ChatGPT has been exponential over the last few years. Yet instead of contributing anything meaningful to society, these models are being used in politics seminars of fewer than ten students to explain the differences between Republicans and Democrats. So, what happens when youth get their hands on these A.I. tools? Using online resources to help with schoolwork has been commonplace since the inception of the internet. ChatGPT, however, goes beyond that general purpose: it can become anything you could ever imagine or need. In particular, the rise of A.I. in academia has eroded media literacy and the desire to think critically. Why would you need to think when A.I. can do it for you? From composing a grocery list to writing your upper-year final paper, A.I. has outgrown its initial boundaries as a mere tool.

When ChatGPT is used to do the work in the humanities and social sciences, replacing the human perspective that makes these fields so important, we are left facing an increasingly digital age rife with disinformation absorbed at face value. The human touch is lost for the sake of ease. In his New Yorker essay “Why A.I. Isn’t Going to Make Art,” Ted Chiang writes: “If an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland.” The work generated by these models is not even good, hence the term “A.I. slop.” When generative A.I. makes its best assessment of literature, politics, and even philosophy, it has no sense of what makes these arts so rich and engaging that they have endured for most of human history.

The essential pathos and embodied nature of our existence, which draw students to the humanities and often draw marginalized people into these academic circles, are at risk of being lost entirely. A study from the advisory firm KPMG, which sampled over 2,500 Canadian students, found that 60 per cent are currently using generative A.I. to help with their schoolwork. Chiang’s essay puts it succinctly: “language is, by definition, a system of communication, and it requires an intention to communicate.” Setting aside our intention to learn, our fear of being wrong, and our hope of educating ourselves while attending an academic institution is inherently paradoxical. Although some will claim their use of A.I. is nuanced and not meant as a replacement, the same study found that, of those who use A.I., over two-thirds did not think they were learning or retaining as much by using it. Not only is this use of A.I. plainly lazy, it does not even leave you feeling good afterwards. Devastating the environment by constantly querying ChatGPT in order to barely pass ECON-1011 cannot be the only way forward.

Like any bad habit, the worst part is trying to quit. By once again opening ourselves to trying, to thinking, to being right and to being wrong, we remove the core need for generative A.I. There is no need for instantaneous completion once we learn to enjoy the process of thinking critically and developing our own ideas.