Amy Pistone

Classics Professor at Gonzaga University

What is AI, and what does it mean for educators?

I’ve been reading and thinking a LOT about AI lately, especially in the context of teaching, and I have plans to write about some more of my thoughts in the not-too-distant future. 

Before we can really talk about AI, though, it’s worth making sure we’re clear on some terms. I say this because “AI” is a very buzzy thing right now, which means that everything is AI these days. Some of those things feel a lot closer to what was described as “the algorithm” a year or two ago, and it doesn’t really help anyone have a nuanced discussion of what AI can or can’t do if we aren’t even clear on the terms of that debate.

To borrow the definition that Emily Bender and Alex Hanna use:

To put it bluntly, “AI” is a marketing term. It doesn’t refer to a coherent set of technologies. Instead, the phrase “artificial intelligence” is deployed when the people building or selling a particular set of technologies will profit from getting others to believe that their technology is similar to humans, able to do things that, in fact, intrinsically require human judgment, perception, or creativity…The set of technologies that get sold as AI is diverse, in both application and construction—in fact, we wouldn’t be surprised if some of the tech being sold this way is actually just a fancy wrapper around some spreadsheets.

Having seen multiple ads for “AI” things that were functionally available well before the recent explosion of AI, I have to admit that I’m pretty sure a lot of companies just repackaged their algorithms and called them AI to get on board the hype train (I’m looking at you, Spotify!). But that’s not entirely the point here.

So, for the purposes of this, I’m mostly talking about what is often called Generative AI and is based on Large Language Models (LLMs). A particular subset of LLMs are Generative Pretrained Transformers (GPTs), an abbreviation probably most familiar from the ubiquitous ChatGPT. Things like Anthropic’s Claude, Google’s Gemini, OpenAI’s ChatGPT, and Twitter’s Nazi-prone Grok all fall under that category of a chatbot based on a large language model. (It’s a level of detail that’s not particularly important for our purposes here, but to make things even more annoying, many of the LLMs/GPTs have functionally the same name as the chatbots that use them – so Google’s Gemini chatbot is based on the Gemini language model, which are technically two different things). 

Why does all this matter? Well, we can’t really talk about what “AI” can and can’t do if we aren’t clear on what it even is. There are plenty of other things out there that are not based on this general model and many of them are also being called AI. I’m really talking about the basic, ChatGPT-esque systems here, though there are a few others I plan to write more about later that are (to my mind) similar but different. For ease of discussion, I’ll call these AI, but supply your own scare quotes around that because there should be a certain skeptical and maybe even snarky attitude around my every use of the term. 

Because I’m nothing if not extremely fair, and most of this will be criticism of the current obsession with integrating AI into everything, I’ll start with what AI can do. 

What AI Can Do

  • AI can be a good tool for sorting through massive amounts of information – less so in terms of running its own queries against datasets, but it can reliably write a program that can do a query like that! AI can perform simple tasks at scale pretty well. (Can we do a lot of that in relatively efficient fashion with other tools and a little more work and time? Sure, but we’ll leave that to the side for now)
  • Sentiment analysis – AI can read a crapload of reviews of a movie and tell you if the general vibe is positive or negative
  • AI can synthesize a Google search quickly, skimming millions of pages and pulling out common themes and things that show up most frequently
  • Summarize a text that you don’t want to read – will it be a good summary? Not important, it will be a fast summary. It can also generate questions based on that text. Are they good questions? Well, again, “good” is relative and “fast” is fairly undeniable
  • Generate a list of names for your fantasy football team or people/places in the novel you’re writing. Or any other number of tasks that you want to outsource to something other than your brain. Can it think up ideas? Some of this is semantics (what does it even mean to “think up” something?), but I would say no. But it can survey a large swath of the internet (and any other materials it’s trained on) and come back with some stuff it found.
  • AI can make pictures to order (and more or less immediately and for free, with a large asterisk on that “free” part, which I’ll discuss below)
  • Menial tasks that take a lot of boring actions – AI can make a basic template based on something you’ve created and, say, change all the dates so that the class schedule you made for 2024 is now accurate for 2025’s calendar
  • I’m sure there’s more, I guess I could ask ChatGPT to give me some more to include here as a cute bit, except that’s entirely antithetical to everything I’m about to write below, so….

I’ve omitted most of the snarkier comments that my brain generated along the way (“AI can plagiarize many times faster than any human can!” “AI can cause people to question their realities and harm themselves and others!”), but if you are the kind of person who chafes at a list of things AI can do, please rest assured that I thought a lot of extremely snarky things. 

My love of parallel structure is begging me to start a list of “What AI Can’t Do” now, but I actually think it’s more salient to talk about why the above list isn’t, for me at least, a compelling reason to integrate AI into every aspect of my life. I’d be lying if I said I’ve never asked ChatGPT to help with some mindless tasks. For example, I wanted to theme the grade categories on an assignment for my Ancient and Modern Sports class around different sports, and there are some sports I know embarrassingly little about (baseball); even beyond that, I was struggling (dare I say, striking out) to come up with enough category names for my rubrics and grading scales. I’m not proud of this, but in the interest of transparency, I do need to admit that I have used AI for things that a quick Google search or a text to some friends could have solved. We all have moments of weakness.

So why is this nifty little time saving device something that I have pretty strong, mostly negative feelings about? Well, I do not think that AI in its current form(s) is ethical and I do not think that a fundamentally unethical technology has any place in my classroom. 

  1. For the time being, the resource consumption is unbelievably high. It would be one thing to have my Google searches be a little more customizable and a little more savvy about the results they turn up (yay, fun!), but the energy consumption of all of these systems is catastrophic. I cannot look myself in the mirror and say that I need an easy-to-generate list of names for my grading scheme more than the world needs to stop heating up. I just can’t. The other issues below don’t apply equally to every so-called AI system, but when we are in an ever-worsening climate crisis, I just don’t think there is an ethical argument in favor of AI, given the environmental costs. [insert stats on this here].
  2. Most of the LLMs that are widely used have been built on exploited labor. If you try not to buy things made in sweatshops or from forced labor, then it’s hard to make a strong claim for why AI is different or better. 
  3. The most popular and profitable AI systems are an absolute nightmare in terms of intellectual property. If you believe that things like citation and recognizing/rewarding people for their creativity, labor, and intelligence are important, then there are some really substantial issues at the heart of something like ChatGPT. There’s more nuance that’s worth going into here (see future blog posts!), in terms of what intellectual property is and whether we should believe in controlling and monetizing ideas, but it’s pretty undeniable that these LLMs were trained on people’s work and ideas and the people in question did not receive any sort of credit or compensation. To compound the problem, every time we use these tools, we provide additional data for tools that are (often) being used nefariously, not to mention that our intellectual property (whether it’s lecture notes for courses, assignments we’ve created, or any number of resources we have created and disseminated to students) is being fed into these machines every time a student asks ChatGPT to complete an assignment. 
  4. They didn’t make the cut for the top 3, but we should also talk about the expansion of the surveillance state, the erosion of privacy, the unquestioned perpetuation of ideological biases found in the training materials, and the general enshittification of the information environment caused by an internet that is increasingly flooded with AI slop of all sorts. These things are all antithetical to the ideals that most of us believe in and champion, in our classes and in our broader communities and (faltering) democracies. 

As a counterpoint to these, I could pretty easily craft an argument about how (1) airplane travel is also terrible for the environment and (2) basically every store, industry, or technology is also built on exploited labor in one way or another (I promise that I do have a version of this for #3 above but it’s long and not terribly pithy). It’s very easy to make a “whataboutism” argument against any ethical stance anyone might take, and I do fly a lot more than the average person and I haven’t stopped, e.g., wearing Nikes or playing video games just because I know about the awful working conditions that allow me to do so. It’s a big, globalized world and we’re all complicit in a lot of harm, and none of us can in fact abstain from all AI use when it’s integrated into so many things right now (Google searches and Zoom meetings and and and…). I personally am trying to minimize my engagement with AI in the same way I try to recycle and not to shop at Amazon, because those are the kinds of ethical stances that I have decided are meaningful to me. But I’m not even really here to tell anyone not to use AI at all so much as I want to be engaged in conversations about what educators can and should do as we face an onslaught of AI-related issues in the classroom. 

I’m going to be blogging a lot more about this topic, in large part because I have spent the last year or so reading a lot about AI in the educational context. I have a lot of thoughts about AI and pedagogy and what “marketable skills” look like, and the mass implementation of various AI ed-tech products ranks pretty high on my list of things that I hate (and there’s a long list these days, I assure you). This is a preliminary list of some things that I think are foundational to any discussion of AI and teaching (and, self-interestedly, I’m running An AI Workshop for People Who Hate AI for the Women’s Classical Caucus a few days from the time of writing this, and I wanted to off-load a lot of the prefatory and background information here so that I would have more time to talk about other things).

Some of the things that I am thinking about and sort of planning to write about include:

  • Why can’t AI actually help with the learning process? Can’t it serve as an individualized tutor and democratize access to personalized tutoring?
  • About a million different iterations of “what is learning and why isn’t it what AI and all the related ed-tech tools seem to think it is???”
  • If I think that the job of humanities/liberal arts education is to help students navigate their worlds (which I do), and AI is undeniably infiltrating all aspects of life in the United States, can I really refuse to engage with AI? 
  • And the closely related question of, given that employers allegedly want AI skills in job applicants, how do my own ethical opinions weigh against my ostensible responsibility to help prepare my students for jobs to pay off their student debt?
  • Isn’t AI just another new technology and haven’t all new tools caused moral panics about how they would be the end of the world, and haven’t most of them been fine? Why is this different than Plato complaining about how the invention of writing was bad or any one of the million similar tech panics?

If you’re interested, I have been compiling (and will continue to compile) a bibliography of writing about AI that I think is valuable. If you have things you think should be added, I would love for you to share them!
