“Writing teachers and researchers should not fear the coming swarm.” BHD

The journey of reading and thinking and engaging with GenAI continues. I think, compared to when I started blogging about this, I am more squarely in the camp that we need to be writing about, researching, and studying GenAI, though it is no panacea (my friend Nick used this term, and I found it quite accurate).

Over these last few weeks, I’ve been reading more from scholars (Hart-Davidson et al., 2024; Pigg, 2024) who are more squarely in TPC, and that has had quite a large influence on my thinking. Pigg used recordings of folks using GenAI in their writing process, which I appreciated because then she wasn’t complicit in her own use of it (I’m still struggling with the environmental impact). Hart-Davidson et al. explored how the history of TPC can inform the future, namely with GenAI. Bill HD is quoted as saying the following in his prior work:

“Writing teachers and researchers should not fear the coming swarm. As we engage these bits of code to do what is operationally necessary, we will have an expanding scope of rhetorical action to investigate, embrace, and yes, teach” (p. 334).

and…

“[Y]ou should be building robots. Robots need direction. Someone who knows writing practices and how they work in social structures must be the brains that set them in motion, tell them how to listen and how to respond, tell them when they are going too far and when they could be doing something more” (p. 334).

Walker, J. R., Blair, K. L., Eyman, D., Hart-Davidson, B., McLeod, M., Grabill, J., & Vitanza, V. J. (2011). Computers and composition 20/20: A conversation piece, or what some very smart people have to say about the future. Computers and Composition, 28(4), 327-346.

This captures my sentiments really quite well. If it’s not us who will train the people who train the robots, then who? How can we give them the critical skills to engage these technologies if we refuse to engage those technologies ourselves? To this end, I really think students have to experiment and play, just as we do as their instructors. I really liked how Colby (2025) anchored her work to play in that way. Learning happens by doing, and so we must do. In this case, “doing” is engaging with GenAI so that we can see where it excels, where it falls short, and what other possibilities exist.

This idea of “we must do,” though, just brought up a question for me: can we adequately critique GenAI if we do not engage with GenAI itself? I do think the answer is yes, because we can draw on the studies and knowledge that already exist, and we can, like Pigg, look to recordings and accounts from users. Still, though, we have to be able to teach this knowledge to our students.

Over these last few weeks, I’ve been working on trainings related to integrating AI into the classroom. A few things stood out to me. I heard on multiple occasions that GenAI writing is “indistinguishable,” and I find it really horrifying that folks would say this. Though I do recognize that I am a writing teacher, and that is likely shaping my feelings (which are quite strong, I must say). Another piece that stood out to me is that we should be in the business of teaching, not trying to catch students cheating.

Whenever cheating comes up in my classrooms, whether that’s plagiarism or GenAI (typically those two since it is a writing class), I tell my students that I err on the side of trusting them. My first response is to request a conversation and ask questions like “how did this happen?” I make sure that they understand what the issue is, and then I give them the opportunity to make up for the mistake. I’ve been told this approach is too soft and that students could be lying to me. Yes, they could be lying to me. But I’m not in the business of catching students lying to me. They’ve made that choice, if they have made it at all, and I can’t make them do anything different. On subsequent occurrences, though, I become harsher because they’ve had the opportunity to learn from the mistake, and it begins to impact their grade.

I think, though, this is relevant to how I see GenAI and how we can engage it thoughtfully. Like Wikipedia or a simple Google search, students use the tools around them for help. We, as faculty, do the same. So, why not support students in using these tools in ways that will support them and not create issues down the road? To that end, while I do have control over my course content and strive to make the materials relevant, I don’t control whether students submit their own work or not. I am kind of resigned to the fact that if they’re going to cheat, they’ll find a way to cheat. And, from my experience, GenAI writing scores poorly anyway because it doesn’t meet the assignment expectations.

Similarly, I’ve been doing some workshops that are introducing me to new-to-me GenAI tools that I believe can be useful as additional research strategies. Can I still use the library and Google Scholar to find sources? Yes. Can I find additional sources through these new tools? Yes. Can I look at my source’s reference list for additional sources? Yes. Should I do only one of these things and call it done? No. And that’s where teaching the critical use of these tools is so important. It’s not going to remove the need for brainstorming, researching, drafting, revising, and editing; it’s facilitating the process. (I should think about whether I like the word facilitating here….) This goes to Knowles’s (2024) argument for machine-in-the-loop writing, where the author retains the rhetorical load. And, I believe, editing will be the same. (I need to remember this for my article!)

GenAI Biases and Homogenization of Language: Still a Cautious Integrationist

I am continuing my journey of reading through Sidney I. Dobrin’s book AI and Writing. I’ve reached the part where Dobrin suggests effective prompting and uses for visuals, and also addresses the biases and other issues that arise with using GenAI. Personally, I don’t like that the author leaves so much of the messaging that questions GenAI’s use to the end of the book. Additionally, I think it’s too soft of an approach, too forgiving of GenAI.

So, if best practice is that we need to fact-check and be alert for biases and hallucinations, I keep coming back to: why use it at all? The LLMs are trained on the data that gets input. We’ve seen enough of how the Internet works to know that it’s been fed a wide range of content that is explicitly racist and sexist, for instance, which means LLMs are being trained on that content and it may show up in their output. We also know that GenAI traces patterns and reproduces them, which means it’s reproducing mainstream voices and thinking. This has long been a topic of consideration in the field.

Specifically, there’s a body of work and a turn towards linguistic justice that pushes back against the homogenization of language (which is mentioned in the refusal-of-GenAI work happening in the field). We’ve long talked about how the books and texts that are popularized and reproduced, the voices that we hear in the mainstream narrative, are often the privileged voices of white, straight men. When our history books and libraries primarily share these kinds of narratives, the result is a skewed history and view of the world. Even if we get minoritized voices into the history books or libraries, they are overwhelmed by the sheer number of voices reiterating the mainstream narrative. I mention this because the problem of LLMs reproducing mainstream voices, thinking, styles of writing, and the like has long been a concern in the field. We are yet again faced with another instance where the dominant narratives are being pushed, whatever those might be, and where alternate viewpoints are difficult to find and access. The field’s turn to linguistic justice is an attempt to remedy that, though it is a subset of the work happening in the field and, from my perspective, isn’t widely accepted yet.

It’s also noteworthy that LLMs and technology have biases built into them, quite literally. Selfe and Selfe have talked about the ways designers of technology embed their own assumptions and biases about the world into what they build (arguably, I would say we all do this in our work), and as such LLMs fall prey to the same issue. Biases will appear in the literal coding and programming that make them work. I am not well-versed in coding and programming, though I know enough to know that the language used and the capabilities of that language shape the outcome. Often, there are multiple ways to reach the same outcome, and in choosing one the programmer is making assumptions about which pathway they think is best; with my again limited understanding, I would say it’s possible the logic of the program itself would show up as bias at some point in its output. In essence, rhetorical choices are happening there. This reminds me that I should look more into the rhetoric of code and programming, which I know some folks in the field are working on and exploring.

Dobrin also rightfully mentions that LLMs are limited to the data sets available to them, and one reason for this is copyright and intellectual property. However, this leaves me concerned and wondering how copyright and intellectual property are being defined to begin with. Again, I’ve got a limited understanding of how this works, but it sounds like developers have simply scraped massive segments of the Internet. So, if an LLM has access to ebooks and academic articles, would that be a violation of copyright and/or intellectual property? I suppose probably not, because when publishing we grant the publisher certain permissions. But again it calls into question the ethical use of other people’s work, and their knowledge thereof. It, personally, doesn’t feel good to think that anything I put out into the world can be repackaged and reused in whatever way. And yet, I come back to the fact that this is what it means to be a digital citizen. I sign over my rights to many things and become complicit in the larger system.

Relatedly, I spent time with the refusing GenAI materials I mentioned in my last blog. There really isn’t much I disagree with, and several of their points are mentioned here and in my previous blog (concerns about IP/copyright, homogenization of language, biases embedded into technologies and systems). Where I still end up, though, is that it’s not going away. I did hear (about a month late) a news report that GenAI (I believe ChatGPT) hasn’t found a way to be financially viable. So, I suppose it could still fall out from under us, though I think it’ll just make way for the next iteration and won’t truly go away.

Finally, as a part of my teaching and trying to give GenAI a “fair shake,” I’ve been taking workshops and have signed up for an online course that explores integrating GenAI into teaching. Yesterday, a workshop attendee shared how it’s still getting things wrong in her content area (I think it was something about the immune system? She asked it to create a study guide, and it missed critical parts), so we know it’s got problems. However, I personally learned more about the importance of correct prompting to get more effective output, and I found that to be helpful.

I’ve heard that GenAI is useful for looking at job ads and helping to prepare job materials, so I prompted GenAI to give me ideas for a lesson plan that would introduce how GenAI could be used in that way. I still feel that, while it gave me ideas and I used them as a starting place, it’s not producing content that someone can take wholesale. I have to combine it with my own content knowledge for the course, context of the class and students, and more to be able to pull something together. Certainly, I could’ve gone back and forth with GenAI or included all of those details in the initial prompting, though it still feels more like an efficient use of my time and like the end product is more “mine” if I use the output as a starting place but make heavy revisions. 

Anyway, the outcome was that several students shared with me that they found value in the task. I did not make it a requirement that they use it, so I had two in-class prompts (one for using GenAI and one for not using it). In looking over their work, most chose to use it so far. They shared that they felt it was helpful and informative, and that it helped them home in on the key details of a job ad without getting so lost in the minutiae. One student also shared that they felt more confident in their own assessment and analysis of what the job ad was asking for. Another student shared that they got the idea to use their hobby of 3D printing as a point to mention in their job materials. So, it was fruitful, which is both concerning (because GenAI!) and of value, as students who used it seemed to have generally positive responses. (I should note at least one person said they felt it would’ve been more effective and efficient to do the task on their own without GenAI’s help.)

So, I’m going to plug along as a cautious integrationist and continue to read, think, and write about it all.

A Cautious Integrationist of GenAI

I’m currently doing reading to help with developing and refining my own perspective of the intersections of GenAI and writing. Of course, I’m using this to work on an article that I intend to submit for publication sometime during these next few months.

Matt Davis and Kara Taczak wrote in a 2024 Composition Studies editorial about GenAI in writing, and specifically about the different positionalities folks are taking up. I would describe myself as a cautious integrationist. I say this because my perspective draws on scholars like Cindy Selfe, Gail Hawisher, Dennis Baron, and others who have explored the intersections of technologies and writing in their own work. What I mean by integrationist is that GenAI is here, and I don’t see it going away. I also mainly see resistance as futile. However, there’s a whole segment of the field that is focused on refusing GenAI. Admittedly, I need to read those perspectives more closely, and notably they are not advocating for a “head-in-the-sand” approach. And it’s for those reasons that I see myself as a cautious integrationist. I’ve got my concerns around ethics, loss of critical thinking, the environment, biases, the upholding of white supremacist narratives and voices, and more. My goal, though, is to articulate my own positionality on the subject.

The stances in the field are so strong that I’m finding it challenging to find my place in the conversation at the moment. Do I want to uphold technology that furthers white supremacy? Absolutely not. And how many technologies and systems already in place are doing that? I don’t know that GenAI refusal is going to address that particular problem. And yet, it’s hard to discuss with my colleagues because they become so impassioned (and I’m glad they are!), wherever they land on the subject.

In striving to inform my positionality, I am currently reading Sidney I. Dobrin’s AI and Writing. And wow, have I gotten mad at the text on multiple occasions so far. There are metaphors that are simply frightening, like the suggestion that working with GenAI is like working with the “smart kid” on a project as a “collaborator.” Certainly, if done well, both individuals in that scenario can grow and learn from the experience. However, the framing of it, choosing the “smart kid,” suggests the all-too-common scenario of the “smart kid” doing almost all of the labor for the project. And then where is the learning happening in that approach?

Dobrin cites GenAI expert Cath Ellis, who used a metaphor of climbing Mt. Everest versus taking a helicopter to the top to explain how GenAI functions. The helicopter can take us there, though what are we learning if we take that shortcut? We are missing the experience of climbing, the challenges and growth that happen as a result. We leave the mountain simply having used our capital to reach the end without taking the journey to get there. At the moment, I’m hard-pressed to think of an instance where taking the helicopter would be the ideal pathway forward. The journey can suck, quite frankly, though again that’s where learning and growth happen, and over time the individual becomes more proficient, efficient, skilled, etc.

Broadly speaking, I can appreciate and recognize some value in thinking of GenAI as a collaborator. For instance, it helps to identify and correct grammatical errors, though it is sometimes wrong. Additionally, it can serve as a faux interlocutor (it can’t be a real one because it doesn’t think), and it can provide feedback on a draft with correct prompting. However, all of its uses, in my mind, require a strong working knowledge at the start to discern when it is correct and when it is going off the rails. And, if that’s the case, what are we doing using it at all? Just like with peer reviews of our work, we have to take feedback, whether grammatical, stylistic, or content-related, with a grain of salt. And just like with peer reviews, having an exchange about our research and writing and thinking has merit, so the real question becomes whether a faux exchange has any merit. Right now, I only see it as an imitation (I’m not sure of my thoughts on the quality of the imitation, and it would likely change by context) of what a human would do. Sometimes, the right human to fill a need may not be available (e.g., if I’m stuck with writer’s block at a time when my network isn’t available to talk something through), so perhaps then?

Dobrin really advocates for more collaboration with GenAI, with caution given the hallucinations, biases, and the like. However, after having read the chapter on the writing process and having played with GenAI on my own, I can really only see it in the inventio stage of writing, where GenAI can function as a tool to bounce ideas off of, get different perspectives, get some starting names or concepts to look up for research, and the like. Beyond that, I don’t know how much of what it does holds water.

And a quick note about a pet peeve: the conflating of AI with GenAI. AI has been around a while and does important work (and even some GenAI is moving in that direction), such as with emergency weather detection and notices. So, I don’t think we can or should write it off wholesale. However, we absolutely need to examine and consider what is happening behind the scenes so we can properly assess the “how” of what we would potentially benefit from in such cases.


Note: I asked Microsoft Copilot to suggest a title and here’s the output:

Thanks for sharing your blog—it’s thoughtful, nuanced, and clearly grappling with the complexities of GenAI in writing. Based on its tone and themes, here are a few title suggestions that reflect your cautious yet critical engagement:

Reflective & Academic

  1. “Navigating the GenAI Terrain: A Cautious Integrationist’s Perspective on Writing and Technology”
  2. “Between Resistance and Integration: Positioning Myself in the GenAI and Writing Debate”
  3. “Writing with Machines: Ethical Tensions and the Role of GenAI in Composition”

Metaphorical & Thought-Provoking

  1. “Climbing the Mountain, Not Taking the Helicopter: Learning, Labor, and GenAI in Writing”
  2. “Collaborating with the ‘Smart Kid’: Rethinking GenAI’s Role in the Writing Process”
  3. “Echoes of White Supremacy in the Machine: A Writer’s Ethical Dilemma with GenAI”

Concise & Catchy

  1. “GenAI and Me: A Cautious Conversation”
  2. “Writing in the Age of GenAI: Ethics, Resistance, and Integration”
  3. “The Faux Interlocutor: GenAI’s Place in the Writing Process”

Would you like the title to lean more academic, poetic, provocative, or something else entirely? I can tailor more options based on your preferred tone or audience.


Back to me: I don’t like any of these titles. However, it did give me the idea to use “Cautious Integrationist” in my title.