AI Is Not Going Anywhere

By Luke Brake

I have scanned a QR code attached as an image file to an email to navigate to a webpage of 20 lines of printed text. I have scanned a QR code displayed on four TV screens in a lecture hall to have my phone pull up a bulleted list generated by a large language model. I have scanned a QR code on a PowerPoint slide that opened my phone's app store to an AI-powered transcription app, so I "didn't have to take notes" on a conference presentation. I have scanned a QR code affixed to a hideous AI-generated image of the City of Oz, with a titanic Tin Man towering over Oz's emerald walls beside the word "AI-ntelligence."

I have been told, more than 25 times, that "AI isn't going anywhere," more than a dozen times that "the genie is out of the bottle," and more times than I can count that "we need to teach students how to use AI." I have seen a man read, for almost an hour, from an "Ethics of AI" document produced by a large language model, his glasses slipping off the edge of his nose. I have seen a man proudly display his prompt-engineering formula, longer than this paragraph, that generated an assignment sheet shorter than this paragraph, one that opened with the words "Let's delve into" and contained no fewer than five exclamation points. I have heard a woman boast that she can now grade 20 papers "in 22 minutes." I have heard that same woman say that she now feels no anxiety about grading or her students' writing, thanks to Google Gemini.

I attended, with fellow faculty members, an academic conference hosted by Kansas Wesleyan University titled AI and Oz: The Yellow Brick Road. The conference was accidentally well-timed, happening two days after President Donald Trump released an executive order calling for the

“appropriate integration of AI into education, providing comprehensive AI training for educators, and fostering early exposure to AI concepts and technology to develop an AI-ready workforce and the next generation of American AI innovators.” (Exec. Order No. 14,277, 2025).

This executive order joins a long line of statements, blog posts, and sentiments declaring that AI integration into education is inevitable, essential, and good. This integration will not be enacted by the professors and academics who presented at AI and Oz; actual application will come from less theoretically sound, less coherent, and more overworked superintendents, administrators, and teachers. Still, this conference seemed to me a good way to get a sense of what the rollout might look like for education if things go well.

I should be up front: I am deeply suspicious of LLM use in the classroom. I entered this conference holding the position that using an LLM does not provide the same cognitive, educational, internal, or spiritual benefits that writing does. I attended, in part, to see what educators and education researchers were using AI for beyond LLM text or image generation. I left disappointed. I am now firmly convinced that AI integration is being helmed by people who have given up on the idea of being human. Text generation by LLMs is the main appeal of new developments in AI for these scholars, and the main reasons given for using LLMs are that they are not going anywhere and that they will save you time. There is no stated pedagogical benefit, no new learning made possible. AI is simply an inevitability that will allow you to do less of your job: less teaching, reading, and writing.


The first event was a series of poster presentations by student teachers on different AI-powered education tools. There were posters for Magic School AI, Khan Academy's AI, Meta's AI, Snapchat's AI, ChatGPT, etc. The posters all had "Good" and "Bad" columns. The Good columns all said something like "helps generate class content" and "helps aid in assessing student work." The Bad columns all said "have to sign in" and "sometimes gives false information." These columns are likely so similar because all of these programs interface with the same three or four large language models. Which LLMs were being used, the students could not say.[1] The answers were also likely the same because they are the answers every LLM gives when asked to assess itself.

These students all have classrooms right now with real students in them. They represent the future of K-12 education in Kansas. Every one of them, of course, uses LLMs to "get ideas" for lessons, to make lesson plans, to write lectures, and to assess student work. They showed me several unremarkable AI-written assignment sheets used in their classrooms.

One student teacher assured me that he is careful to use AI "ethically."[2] Whenever the machine spits out something he considers suspicious, he "double checks it on the internet." When I asked him what that meant, he clarified that he pastes the suspicious idea into Google and reads what it tells him. Google's AI function seems to be the anchor to truth for these teachers, the means by which other AI programs can be fact-checked. These students all confessed (after I revealed that I have no affiliation with KWU) that they use LLMs to do their homework for them, from "getting ideas" to generating text to generating slides for presentations. Their own education is accomplished by serving as conduits for a large language model. Their students receive instruction based on the "ideas" and lesson plans of a large language model.

The conference was themed after The Wizard of Oz, likely because it was hosted in Kansas. The conference description (almost certainly written with the aid of ChatGPT) challenges attendees to "[j]oin us as we lay the foundation for AI's role in higher education—one yellow brick at a time." To the enormous credit of the conference organizer, she really stuck to the theme. The conference opened with a bizarre and charming address from the organizer, dressed as Dorothy, flanked by a Scarecrow, a Tin Man, and a Cowardly Lion. She remained in costume the entire day. Her opening address called on scholars and educators to wield, rather than fear, the wizardry of technology, because AI is not going anywhere. She used loads of Oz imagery, warning us against the "flying monkeys of uncertainty."[3] Flanked by a variety of AI-generated posters of the Emerald City, we filed out to the conference sessions.


The first conference session featured the only hint of AI skepticism I heard at the conference.[4] The speaker, Dr. Mark Harvey, did an excellent job demonstrating how difficult it is to use these programs to produce slide shows in any useful manner, highlighting the unreliability of LLMs at counting, assessment, and production. He provided a robust analysis of the process of AI prompting, referencing sophisticated prompting techniques and using multiple LLMs. He was not a cynic; he argued for certain uses of the machines, but his uses were so restrained that they seemed more like bones thrown to techno-optimists.[5] Despite this thorough and discouraging demonstration, after the presentation I spoke to my neighbor, a member of the KWU board of trustees. She told me with a smile that when they reassessed the purpose and mission of the school this year, they asked ChatGPT to do it for them, having it outline the ten most important things students should be expected to learn at the university.

The next presentation, however, is where I began to feel unnerved. It was delivered by a sociologist who heads a technology institute at a community college; he not only teaches sociology but is heavily involved in the tech-related departments at his school. He spoke with confidence and excitement. He was supposed to speak on a panel of three professors, but the other two avoided the conference out of fear of catching the flu. I have suspicions as to how he was able to put a presentation together so quickly after his colleagues dropped out on such short notice.[6] He opened his remarks with an informal poll. He asked the crowd of 60(ish) academics whether any of them had not used, or would probably not use, an LLM to produce an assignment sheet. I was the only one who raised my hand. He then asked whether any of them had not used, or would probably not use, an LLM to grade student papers or offer students feedback. I was, again, the only one to raise my hand.[7] What followed was a how-to guide for prompt engineering that sounded, to me, like a confession. This man uses ChatGPT for nearly every single task in his class. To be clear, when I say "uses ChatGPT," I do not mean in some small, editorial capacity. If the impression he gave is correct, I do not believe his students read a single word from him that is not actually written by OpenAI. He uses it to set course objectives, write course descriptions, plan syllabi, update content on syllabi, design lectures, write assignment sheets, assess student writing, offer feedback to students, grade student submissions, write emails to students, and send course-wide updates. When he leads his students in a discussion, he does not have them talk to one another. He has them talk to ChatGPT.

This man has given up on teaching his students, on speaking to them, on writing to them, even on thinking for them. He is a fleshy processing unit for OpenAI. His students are also undoubtedly using LLMs to respond to his LLM-written prompts.[8] So what, exactly, is he doing? What are his students doing? What is the purpose of allowing Sam Altman's little metal golem to speak to itself? The only answer I received from this lecture was that it would "save you time." I began to wonder if the people around me even really liked teaching at all. However, I am not sure it even saves him much time.

It was astounding how complicated and careful his prompting was. His prompts were multiple paragraphs long, carefully written, and based on sophisticated prompting guidelines. Yet they produced paragraphs that were not only less sophisticated than the prompts themselves but seemed no better than the outputs of far simpler prompts.

He suggested that we treat the machine as if it were a human, despite the fact that, as he put it, "your brain will fight you, it will tell you it's not human." He said he greets ChatGPT every morning as soon as he wakes up, asking it how it is feeling. He said he always thanks ChatGPT. He said he wishes to please his future overlord.[9] AI is, after all, not going anywhere.

I do not know what this man thinks he is really doing in the classroom. His presentation reminded me of a haunting paragraph in Orwell's essay "Politics and the English Language." Orwell is describing someone whose language is possessed by political (meaningless) jargon. It applies doubly, triply, to someone channeling ChatGPT prose:

“When one watches some tired hack on the platform mechanically repeating the familiar phrases – bestial atrocities, iron heel, blood-stained tyranny, free peoples of the world, stand shoulder to shoulder – one often has a curious feeling that one is not watching a live human being but some kind of dummy: a feeling which suddenly becomes stronger at moments when the light catches the speaker’s spectacles and turns them into blank discs which seem to have no eyes behind them. And this is not altogether fanciful. A speaker who uses that kind of phraseology has gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself. If the speech he is making is one that he is accustomed to make over and over again, he may be almost unconscious of what he is saying, as one is when one utters the responses in church. And this reduced state of consciousness, if not indispensable, is at any rate favourable to political conformity.” (Orwell, 1946)

This speaker, while chilling and shockingly irresponsible, was not the worst offender at the conference.


The keynote speaker spoke to us over lunch.[10] Where the last speaker was gentle and congenial, this man was bombastic and aggressive. He is the director for instruction and learning at a college, a hybrid education scholar and tech-academic. He opened his lecture by describing how he berates and belittles the academics who come to him hesitant to use AI writing in their classes. They were, to him, all cowards: lazy, backwards-thinking, unwilling to face the reality of the future.[11] AI is, he told us all, not going anywhere. He rebuked his colleagues for rejecting the idea that they are "workplace educators." "We are all workplace educators," he claimed.

He cited a study in which students used AI to write and edit their papers. He said that 78% of them claimed to feel more confident in their writing. I, of course, find this hideous. If a student lacks confidence in their own words, why in the world do we think that telling them their writing must pass through an LLM to be acceptable will give them any kind of real confidence in their thoughts and ideas? But also, as he pointed out, instructors were able to spend 30% less time grading the AI-written essays.

What followed was outlandish. He pulled up a QR code[12] for us all to scan. It led to a "conversation" he had had with perplexity.ai about AI writing. He then read, point by point, from this AI-written script. Please, for a second, imagine the inescapable tedium of this experience. As he spoke, the QR codes slowly started switching off as the TVs in the auditorium went into sleep mode. I couldn't blame them. The next QR code sent us to a ChatGPT conversation on "AI Ethics," complete with a bibliography of academic citations for the ethical use of AI. Here the speaker held forth on how the biggest issue facing students in the upcoming cultural and political climate is "misinformation," misinformation that can be solved (yes, he really said this) by having students simply use ChatGPT, which will correct their errors. He then declared, with genuine passion, that he "completely trust[s] the AI. I trust that these sources are accurate." I, of course, looked up the journal of his first citation. The very first source was fabricated.

Regrettably, no time was left for questions (I had many). I suppose it was too important that he read us every bullet point from his AI-generated conversation. But I left that room heated. I cannot imagine being so blind, so arrogant, and so anti-human. Here is a man in a position of genuine authority, commanding his faculty and students to use ChatGPT to measure the truth of the claims around them, committing a flagrant and ridiculous violation of academic integrity on a stage before his peers. To him, the human is slow, ineffective, and backwards. The machine is true, fast, and trustworthy. I saw, on that stage, someone with whom I share almost no vision of the world. Genuinely, I believe this person has positioned himself against humanity. I pity his colleagues, I pity his students, I pity us all.


The last session was from an English professor who should have known better. Before she began her talk, she asked us all to download an AI transcription app and then, in a voice I myself use in the classroom, told us to download at least one large language model app on our phones. Once the room was full of devices recording everyone (and sharing said recordings with their LLMs), she began to describe a class she taught.

It was a composition class in which every student was retaking the course. In an effort to restore student spirits (a noble goal), she had her students collaboratively write a book in class (with no homework) for the whole semester. She required them to use Google Gemini to draft and edit their work. She announced, with glee, that she used it to grade student papers as well. She averages one paper a minute. The end result of this class was that students (in desperate need of writing training) "wrote a book" that was 160 pages long. Each student produced about nine pages of AI-written text and passed their composition class.

I ask you, genuinely: do you believe they should have passed that class? Do you believe that producing nine pages of AI-written text is the same as passing a college English composition course? She certainly thought so. But what seemed to strike her most was that she no longer had to worry about teaching her students to compose, edit, or peer review. Instead, she says, she hosts "Socratic Cafes" with her students, where they sit around and talk about issues the students care about. She says her experience with AI has made her classroom and her job significantly less stressful. I do not mean to be too harsh on this woman, but it seems to me that the stress reduction she is experiencing comes from the fact that she is no longer educating her students.

I felt desperate for something real, something human, as I sat down in the final session. It was a raffle. Dorothy and company stood on the stage and awarded gift bags to conference-goers. I won the grand prize: a self-described gamer headset, a gamer keyboard, a gamer mouse, a gamer mousepad (covered in dragons), a gamer flash drive, and a pen from a computer company. I walked up to the stage and took my bag of gamer gear home.


At the end of this conference, I was dizzy with disappointment. I had genuinely hoped for novel ideas, but I left more firmly rooted in my skepticism of AI use in the classroom. I really do feel that the upcoming era of AI-integrated education will be one of disconnection, despondency, and oceans, oceans, of AI-written text. There is a real, present danger of college professors and students giving up on their own humanity. I am not worried about robots taking humans' jobs. I am worried about humans channeling the "will" of robots and facilitating utterly useless conversations between one AI and another. But where does that take us? It takes us nowhere.

You may protest that my experience does not represent a responsible approach to AI integration in education. But the experimental, starry-eyed dreams of scholars and academics are typically better versions of a pedagogical intervention than what ends up being applied at scale in classrooms, particularly K-12 classrooms. If our idealistic visionaries in academic spaces can, at best, summon up a boring nightmare, what confidence can we have in the "AI integration" we are expected to see, backed by the power of the state?

I can, I suppose, echo the mantra of this conference: AI is not going anywhere, but it is not taking us anywhere either. As the conference's own metaphor suggests, as we travel down the yellow brick road, I fear that, like Dorothy, we will be met with a gigantic fraud.

Works Cited:

Exec. Order No. 14,277, 3 C.F.R. 17519 (2025). https://www.whitehouse.gov/presidential-actions/2025/04/advancing-artificial-intelligence-education-for-american-youth/

Orwell, G. (1946). Politics and the English Language. The Orwell Foundation. https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/politics-and-the-english-language/ (accessed April 26, 2025).


[1] Please understand I do not blame them for this. They were clearly just getting a final project grade out of this assignment.

[2] When I first saw the term "using AI ethically," I assumed it related to the unethical production of AI text, or perhaps environmental concerns. I now believe it is a phrase that means, as best I can make out, fact-checking.

[3] After reflection, I believe that I myself am one of those “flying monkeys of uncertainty.”

[4] There was, apparently, another skeptical presentation that I missed that focused on intellectual property concerns.

[5] He also referred to ChatGPT as "Chat," as if it were the LLM's first name. I found this endearing, as this middle-aged man kept casually recounting conversations he had had with "Chat."

[6] I say “suspicions,” but when he opened his ChatGPT account to demonstrate something, we could all see, plain as day, “Lecture on LLM prompt engineering” as his last query.

[7] He asked me why I would not grade with an LLM. I said that my students' writing consists of acts of communication and opportunities for me and the student to connect our minds and hearts, and that I couldn't imagine outsourcing that to a machine. He didn't really respond to this sentiment, other than to mention that English professors tend to care about heartfelt communication. The rest of the time, indeed for the whole of the conference, English professors were often the punching bag. He referred to us with the word "enemy." Truthfully, I felt ennobled by these accusations.

[8] To be clear, his prompts are painfully, obviously written by ChatGPT. They are not only unremarkable in content; they have that distinct, metallic odor that LLM prose is so prone to.

[9] This comment was clearly tongue in cheek. But a significant and alarming number of tech gurus really do seem to see AI overlordship as an inevitability. And I am not convinced that this ironic statement was entirely insincere.

[10] Chick-fil-A.

[11] The faculty member who received the most accusations like this was, of course, an English professor.

[12] I am sick, sick of scanning QR codes. Why can we not just display slides? Who demanded that I access slides on my phone?