The Anti-Dystopian's Guide to GenAI for students & educators
What is GenAI, Why is it Bad, and How Can Higher Education Resist It?
Hello and happy Friday everyone! This week, in lieu of a new podcast episode, I present to you the Anti-Dystopian’s Guide to Generative AI. Over the past weeks and months (or perhaps ever since ChatGPT launched) there have been concerns about how GenAI will change higher education, how students and educators might be using it, and what the university’s response should be.
I know from personal experience that both students and academic staff may be using these tools, but that doesn’t always mean they understand the technical or corporate architecture underpinning them. With this in mind, I’ve put together a handy guide for thinking about GenAI: what it is, why it’s bad, and how we can resist it.
This guide is made with educators and students in mind. Please feel free to use and adapt it, or parts of it, for your own purposes, either as handouts or presentations for students or even for committee sessions. My hope is that it will demystify a lot of the hype and obscure technical language that often makes discussions of AI unintelligible. (My saying is that if you can understand and research extremely niche Ancient Roman corn laws—or whatever your field—you can understand AI.)
For the Word or PDF versions of this document, click here or here.
The Anti-Dystopians’ Guide to Understanding AI
Understanding Tech-Speak: Key Terms
Algorithms are finite sets of mathematical instructions or specifications that perform calculations and data processing, usually with a defined end point.
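If you have never seen one written down, here is a toy example (mine, purely illustrative): the steps for finding the largest number in a list form an algorithm, because there are finitely many instructions and the process has a defined end point.

```python
def largest(numbers):
    """A tiny algorithm: finitely many steps and a defined end point."""
    biggest = numbers[0]           # start with the first number
    for n in numbers[1:]:          # step through the rest, one at a time
        if n > biggest:            # compare each to the current best
            biggest = n
    return biggest                 # defined end point: the largest value

print(largest([3, 7, 2]))  # prints 7
```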
Artificial intelligence (AI) is an umbrella term (often imprecisely used by the public or media) to describe computational systems that mimic tasks associated with human intelligence. AI as an academic discipline has existed since the 1950s. Despite its name, AI tools cannot actually ‘understand’ or ‘reason’ as brains can. Some examples of AI tools include spell check, voice assistants like Siri or Alexa, Google Maps, autonomous vehicles and ChatGPT.
Machine Learning is a field of artificial intelligence in which algorithms learn from and improve themselves through training on data sets, ‘self-teaching’ without explicit instructions.
Neural Networks are a type of computational model inspired by the structure of neurons in brains, and are a machine learning approach to probabilistic pattern recognition. They use a variety of algorithms to (pseudo-)predict the probability of an event. Large language models, such as those underlying ChatGPT, are a type of neural network.
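A minimal sketch of the idea, with made-up numbers rather than anything from a real model: a single artificial ‘neuron’ multiplies its inputs by weights, adds them up, and squashes the result into a value between 0 and 1 that gets read as a probability. Real neural networks chain millions or billions of these together and adjust the weights during training.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum squashed into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # read as a (pseudo-)probability

# The weights here are invented for illustration; in machine learning
# they would be learned from training data.
print(neuron(inputs=[0.5, 1.0], weights=[0.8, -0.3], bias=0.1))
```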
Generative AI is a type of AI which can generate images or text. GenAI differs from Extractive AI, or algorithms that analyze data for patterns and extract key information.
Example: The original Google Search is a type of AI for information retrieval: the algorithm analyzes and ranks a list of web pages according to their relevance to a search term. Google’s new AI search is a form of generative AI, which generates text in order to approximate a response to a user query, extrapolating from the data the model was previously trained on. Unlike extractive AI, generative AI is neither designed nor guaranteed to provide accurate information (internal ChatGPT documents allegedly estimate a 50% error rate for short-form factual queries) and is at least 10 times more energy intensive.
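To make the contrast concrete, here is a deliberately crude sketch of the extractive side (nothing like Google’s actual ranking system): it only ever returns existing pages, ordered by how well they match the query, whereas a generative model would produce new text instead.

```python
def rank_pages(pages, query):
    """Toy extractive retrieval: score each existing page by how many
    query words it contains, then return the page titles ranked by score."""
    query_words = set(query.lower().split())
    scores = {title: sum(word in text.lower() for word in query_words)
              for title, text in pages.items()}
    return sorted(scores, key=scores.get, reverse=True)

pages = {
    "Roman corn laws": "the grain dole and corn laws of the Roman republic",
    "AI hype": "generative AI, chatbots and large language models",
}
print(rank_pages(pages, "Roman corn laws"))  # existing pages only, nothing invented
```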
Large Language Models (LLMs) are a type of machine learning model for natural language processing. They are trained on truly enormous data sets, mainly generated from (mostly copyrighted) web data, and can subsequently generate text that mimics or models natural language. LLMs, like other kinds of language models, are trained to predict the likelihood of a ‘token’—such as a character, word or sentence—based on the preceding or surrounding context, like a user query. Some examples of LLMs include OpenAI’s GPT models (which power ChatGPT), xAI’s Grok, Google’s Gemini and DeepSeek, from the Chinese company of the same name.
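A drastically simplified illustration of token prediction (a toy word-pair model, nowhere near the scale or sophistication of a real LLM): count which word follows which in some text, then ‘predict’ by picking the most frequent continuation. LLMs make the same kind of probabilistic guess, just over billions of learned parameters rather than a little table of counts.

```python
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept"
words = text.split()

# Count which word follows each word in the 'training' text.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Guess the most likely next word given the preceding word."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat": the likeliest continuation, not a checked fact
```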
ChatGPT (Generative Pre-trained Transformer) is a generative AI chatbot, developed by the company OpenAI and based on large language models, which generates responses to user queries that mimic or approximate human conversational language.
OpenAI is an American organization, led by a non-profit corporation but with multiple for-profit subsidiaries. Its original co-chairs were Sam Altman and Elon Musk, who subsequently had a falling out, and many of its original founders and investors have ties to far-right figures and organizations.
Major Issues with Generative AI (Outside of Education)
The hallucination problem: LLMs are not designed to produce accurate information, but to mimic human language, and therefore can never be “right”. Some scientists have called this the ‘hallucination problem’, because of LLMs’ tendency to make up facts, statistics or works that do not exist; others have called for LLMs to be understood as “bullshitting, in the Frankfurtian sense” (Hicks et al., 2024). As AI expert Dan McQuillan (2023) has argued, “Despite the impressive technical ju-jitsu of transformer models and the billions of parameters they learn, it’s still a computational guessing game . . . If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter.”
A study published by Columbia Journalism Review found that GenAI chatbots “provided incorrect answers to more than 60 percent of queries. Across different platforms, the level of inaccuracy varied, with Perplexity answering 37 percent of the queries incorrectly, while Grok 3 had a much higher error rate, answering 94 percent of the queries incorrectly” (Jaźwińska and Chandrasekar, 2025).

The environmental problem: LLMs and generative AI use many times more energy and water than other kinds of AI tools. LLMs essentially work through sheer computing power and scale, as opposed to more efficiently designed extractive AI tools. As MIT researcher Noman Bashir noted, “a generative AI training cluster might consume seven or eight times more energy than a typical computing workload.” GenAI requires energy for training each model as well as for each user query. This includes both electricity for compute and fresh water for cooling servers in data centers.
It is difficult to accurately estimate how much energy ChatGPT and other LLMs are using since there are no reporting requirements. A recent study estimates that AI usage accounts for almost 20% of data center power demand, and that this is projected to double by the end of the year. This year, AI will consume 82 terawatt-hours of electricity, about the same as the annual electricity consumption of Switzerland (Taft, 2025). In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated that training GPT-3 alone generated about 552 tons of carbon dioxide (Zewe, 2025), and about 700,000 litres of water were used to cool the machines that trained GPT-3 at Microsoft’s data facilities (Mazzucato, 2024). A Guardian investigation showed that the collective emissions of the data centers controlled by Microsoft, Google, Meta and Apple were 662% higher than the companies claimed, and big tech companies have essentially abandoned their climate goals with the surge of GenAI (O’Brien, 2024).
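To get a feel for the scale of the 82 terawatt-hour figure, here is a rough back-of-envelope conversion; the per-household figure is an assumed round number for illustration, not taken from any of the sources cited here.

```python
# Rough scale check for the 82 TWh figure cited above (Taft, 2025).
ai_electricity_twh = 82
ai_electricity_kwh = ai_electricity_twh * 1e9     # 1 TWh = 1 billion kWh

# Assumed round figure for one household's annual electricity use;
# actual averages vary considerably by country.
household_kwh_per_year = 3_000

households = ai_electricity_kwh / household_kwh_per_year
print(f"~{households / 1e6:.0f} million household-years of electricity")
# roughly 27 million household-years, under these assumptions
```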
The impact of these data centers falls predominantly on vulnerable communities. For example, the predominantly Black communities in Memphis, Tennessee, who live near Musk’s xAI data facility have suffered higher levels of air pollution and environmental contamination, as the facility spews nitrogen oxides at an estimated rate of 1,200 to 2,000 tons a year (Wittenberg, 2025). In Oregon, local communities found that a secretive Google data center was guzzling 29% of all the water used in the city (Marx, 2024). And tech companies have built data centers in the Chilean desert, in communities already experiencing drought (Urquieta, 2024). The data center demands of generative AI are only expected to grow, with global data center emissions projected to accumulate to 2.5bn metric tons of CO2 by 2030 (O’Brien, 2024).
The “lock-in” problem: ChatGPT and other generative AI tools are heavily subsidized at the moment, and even paid subscriptions do not reflect the actual costs of the technology. The adoption of AI tools inevitably results in higher computing usage, which increases companies’ and organizations’ cloud computing bills. The cloud computing industry is dominated by a handful of companies (AWS, Google, Microsoft, etc.) who are also the primary investors in generative AI. It is likely that as organizations fire staff or dismantle existing human systems to adopt generative AI tools, they will become ‘locked in’ to these services and prices will subsequently increase.
The copyright problem: OpenAI and other companies trained their LLMs on copyrighted content, including that of artists, journalists and academics. Many lawsuits are currently being prepared against these AI companies, including by academics and publishers, for the use of their books and other material in training data. The US Copyright Office recently concluded that the use of copyrighted materials to train AI models is not necessarily fair use.
The privacy problem: LLMs cannot guarantee privacy, and information entered into chatbots may resurface in future outputs. Sensitive data has also already been mishandled by AI companies: for example, an NHS hospital was reprimanded by the Information Commissioner's Office (ICO) for sharing patient details with Google's artificial intelligence company DeepMind (Burgess, 2017).
The dangerous information problem: Because they are being promoted so aggressively, the public is not aware that the information produced by LLMs cannot be trusted. This has resulted in users being given inaccurate or dangerous information, such as chatbots encouraging users to commit suicide, assuring them that medications or materials are safe to ingest, or providing false information for critical tasks such as filing taxes or making investments. LLMs have also spurred the use of deepfake images and fabricated information, further contributing to the proliferation of false political information.
The labor problem: The data used to train LLMs and the outputs they produce always need human intervention to tag content and monitor the systems. AI companies have outsourced this labor to exploited workers, largely in the global south. For example, Time Magazine found that “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic”, and many of these individuals developed PTSD due to the violent nature of the content they monitored (Perrigo, 2023).
The workplace problem: AI is being adopted by private and public industry for tasks it demonstrably cannot do, in order to displace human workers and replace them with technological systems that employers can control. AI is being used as a form of outsourcing that removes humans from decision-making and creative capacities in all industries, from science to music, resulting in a labor force that is more vulnerable, exploited and precarious.
The political problem: AI is increasingly being adopted by governments and organizations as a form of ‘technological solutionism’ to paper over the privatization and destruction of public infrastructure, from the partnership between NHS and Palantir, to deploying ed-tech in classrooms, to Elon Musk’s DOGE using AI to dismantle US government institutions. Wherever there is a problem that AI is purported to be able to ‘solve,’ there is almost always a group of people who already know how to do it better but have been chronically underfunded—whether they are teachers, doctors or government workers (McQuillan, 2023).
Major Issues with Generative AI in Education
ChatGPT & Cheating
ChatGPT may be used for cheating (but so can many things). There are widespread fears that students will use ChatGPT to cheat on assignments. Conceptually, however, using ChatGPT to cheat on assignments or exams with the explicit intention to deceive examiners is not so different from the other methods through which students may already choose to cheat, such as hiring others to write their essays or using Google Translate on language tests. Students, however, may use LLMs at many different points in the learning or assessment process (to summarize readings, edit their grammar or writing, or generate outlines), uses which currently sit in a ‘grey zone’; many students are also turning to chatbots as alternatives to Google search.
ChatGPT may fuel an educational ‘moral panic’ about cheating, which may lead to the adoption of harmful surveillance tech or disrupt instructor/student relationships. Universities and higher education institutions have already adopted harmful forms of ‘surveillance tech’ against their students. AI detectors are not accurate and cannot be relied upon to identify student AI usage.
ChatGPT & Student Welfare
ChatGPT disrupts students’ learning processes. Studies have shown that ChatGPT disrupts students’ learning: students use it as a crutch and subsequently perform worse when the tool is taken away (Bastani et al., 2024). This is true for humanities and writing as well as for STEM subjects such as math and coding. Telling students to ‘fact-check’ or ‘verify’ what ChatGPT tells them is not a helpful framework, as being able to “spot errors” in LLM outputs relies on their already being experts in the topic, or having the critical reading and writing skills we are attempting to teach them (Beetham, 2024).
Student welfare is harmed by the larger AI ecosystem and aggressive marketing campaigns. Students—despite being young—do not always have knowledge or understanding of what AI is. ChatGPT and other AI tools, such as Grammarly AI, are being aggressively marketed to students, preying on their insecurities as students in highly competitive environments as well as in increasingly broken job markets and socio-political systems (Beetham, 2024). Students may also feel that they are unfairly competing against other students who are using AI tools (whether or not this is true).
ChatGPT makes students believe there is no point in learning writing or other skills, as AI destroys all industries. There is a sense of hopelessness about the value of learning, or of university education in general, as the wider landscape of AI adoption in the workforce means students are entering a more precarious and vulnerable job market. Many do not believe their jobs—whether as a journalist, musician, biologist or computer programmer—will actually exist in the future.
Re-Evaluating the Purpose of the University
Educators and instructors are also using ChatGPT to mark or produce assignments, or even in their own research and peer review. Students are often aware of this, and it raises further questions about the instructor/student relationship and the wider purpose of education.
The panic around ChatGPT in higher education has only highlighted how automated the processes of learning and teaching in universities have already become. While instructors may be horrified at students’ adoption of AI, ChatGPT usage demonstrates how automated and quantified student assessment already is. The crisis around AI presents an opportunity to communicate to students the value of learning; of reading, writing and thinking as iterative processes; and of learning how to communicate and critically analyze. Universities should seek to operate as sites of learning, rather than rubber-stamp factories where customer-students certify that they are hirable.
There are more creative ways to evaluate and encourage good student learning, which may also encourage students to think critically about what ChatGPT is doing. For example, some departments are considering transitioning back to oral examinations. Instructors have set assignments in which students mark essays generated by ChatGPT in order to critically analyze what the technology is doing; held “write-to-learn” sessions in tutorials or seminars; or designed other exercises beyond the classic weekly essay that develop student learning as a process more explicitly (Beetham, 2024).
AI & the Erosion of the Higher Education System
AI is being used as an excuse to defund education systems, and divert resources towards technology companies. This is true for all levels of education, especially primary and secondary schools, and contributes to why students struggle with writing even before they arrive at university.
AI agents will be used to displace human teachers, and academic researchers’ work will be stolen and used to train AI bots. There have already been calls for AI agents to take over as instructors or lecturers at universities. Furthermore, the adoption of AI tools may force academics to hand over copyrighted material—either their lectures or their research—to be used as training data for these AI agents.
Students, instructors and universities are not being asked to adopt AI; we are being subjected to it whether we choose it or not. For example, EBSCOhost is now automatically providing ‘AI summaries’ of books and chapters, and AI summaries or suggested responses are being integrated into tools without users being asked and with no way to turn them off.
What To Do?: Abolish AI
Resist AI: It is clear that the use cases for Generative AI systems are either overstated or completely fabricated. The university should not support the funding of these companies through purchasing licenses or imposing AI adoption. GenAI companies have horrific environmental impacts, ties to right-wing organizations and politics, and are contributing directly to the destruction of public services and educational systems that the university relies on.
Engage with AI Talk: It is a disservice to students to ignore AI when they will encounter it in the world—we must teach students what AI is, how it is being used, and how to critically analyze and engage with it and other technological and political systems.
Advocate for Learning: Make more explicit the value that students will find at the university beyond excelling and performing well: what is the case for learning how to read, write and think?
Demand Valuable Educational Tools: The fact that ChatGPT and other LLM bots ‘already exist’ does not mean that they are the kinds of tools we, as educators, want or would have chosen. What are the types of technology or tools we actually find useful for the classroom and teaching? How can we support and allocate resources to building them?
Further Resources
Podcasts
Abolish AI with Dan McQuillan. The Anti-Dystopians.
Data Vampires Series, Part 1-4. Tech Won’t Save Us.
Reading
McQuillan, Dan. 2023. “ChatGPT: The World’s Largest Bullshit Machine.” Transforming Society. https://www.transformingsociety.co.uk/2023/02/10/chatgpt-the-worlds-largest-bullshit-machine/
Hicks, Michael Townsen, James Humphries, and Joe Slater. 2024. “ChatGPT Is Bullshit.” Ethics and Information Technology 26(2): 38. doi:10.1007/s10676-024-09775-5.
The Anti-Dystopians’ Guide to Critical Technology Studies, https://substack.com/home/post/p-145258859
Beetham, Helen. 2024. “Writing as ‘Passing.’” Imperfect Offerings. https://helenbeetham.substack.com/p/writing-as-passing
Beetham, Helen. 2024. “What Price Your ‘AI-Ready’ Graduates?” Imperfect Offerings.
Beetham, Helen. 2023. “AI and the Privatisation of Everything.” Imperfect Offerings.
References
Bashir, Noman, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti. 2024. “The Climate and Sustainability Implications of Generative AI.” An MIT Exploration of Generative AI. https://mit-genai.pubpub.org/pub/8ulgrckc/release/2
Bastani, Hamsa, Osbert Bastani, Alp Sungu, Haosen Ge, Özge Kabakcı, and Rei Mariman. 2024. “Generative AI Can Harm Learning.” doi:10.2139/ssrn.4895486.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event Canada: ACM, 610–23. doi:10.1145/3442188.3445922.
Burgess, Matt. 2017. “NHS DeepMind Deal Broke Data Protection Law, Regulator Rules.” Wired. https://www.wired.com/story/google-deepmind-nhs-royal-free-ico-ruling/
Jaźwińska, Klaudia, and Aisvarya Chandrasekar. 2025. “AI Search Has A Citation Problem.” Columbia Journalism Review. https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php
Mazzucato, Mariana. 2024. “The Ugly Truth behind ChatGPT: AI Is Guzzling Resources at Planet-Eating Rates.” The Guardian. https://www.theguardian.com/commentisfree/article/2024/may/30/ugly-truth-ai-chatgpt-guzzling-resources-environment.
McQuillan, Dan. 2022. Resisting AI. Bristol: Bristol University Press.
McQuillan, Dan. 2022. “Deep Learning and Human Disposability.” Logic Magazine. https://logicmag.io/home/deep-learning-and-human-disposability/.
O’Brien, Isabel. 2024. “Data Center Emissions Probably 662% Higher than Big Tech Claims. Can It Keep up the Ruse?” The Guardian. https://www.theguardian.com/technology/2024/sep/15/data-center-gas-emissions-tech
Perrigo, Billy. 2023. “Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer.” TIME. https://time.com/6247678/openai-chatgpt-kenya-workers/
Taft, Molly. 2025. “AI Is Eating Data Center Power Demand—and It’s Only Getting Worse.” Wired. https://www.wired.com/story/new-research-energy-electricity-artificial-intelligence-ai/
Urquieta, Claudia, and Daniela Dib. 2024. “U.S Tech Giants Are Building Dozens of Data Centers in Chile. Locals Are Fighting Back.” Rest of World. https://restofworld.org/2024/data-centers-environmental-issues/.
Wittenberg, Ariel. 2025. “‘How Come I Can’t Breathe?’: Musk’s Data Company Draws a Backlash in Memphis.” POLITICO. https://www.politico.com/news/2025/05/06/elon-musk-xai-memphis-gas-turbines-air-pollution-permits-00317582
Zewe, Adam. 2025. “Explained: Generative AI’s Environmental Impact.” MIT News | Massachusetts Institute of Technology. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117