“Writing teachers and researchers should not fear the coming swarm.” BHD

The journey of reading, thinking, and engaging with GenAI continues. Compared to when I started blogging about this, I am now more squarely in the camp that we need to be writing about, researching, and studying GenAI, though it is no panacea (my friend Nick used this term, and I found it quite accurate).

Over these last few weeks, I’ve been reading more from scholars (Hart-Davidson et al., 2024; Pigg, 2024) who are more squarely in TPC, and that has had quite a large influence on me. Pigg used recordings of other people using GenAI in their writing process, which I appreciated because it meant she wasn’t complicit in using it herself (I’m still struggling with the environmental impact). Hart-Davidson et al. explored how the history of TPC can inform its future, particularly with GenAI. Bill HD is quoted as saying the following in his prior work:

“Writing teachers and researchers should not fear the coming swarm. As we engage these bits of code to do what is operationally necessary, we will have an expanding scope of rhetorical action to investigate, embrace, and yes, teach” (p. 334).

and…

“[Y]ou should be building robots. Robots need direction. Someone who knows writing practices and how they work in social structures must be the brains that set them in motion, tell them how to listen and how to respond, tell them when they are going too far and when they could be doing something more” (p. 334).

Walker, J. R., Blair, K. L., Eyman, D., Hart-Davidson, B., McLeod, M., Grabill, J., & Vitanza, V. J. (2011). Computers and composition 20/20: A conversation piece, or what some very smart people have to say about the future. Computers and Composition, 28(4), 327–346.

This captures my sentiments quite well. If we are not the ones who will train the people who train the robots, then who? How can we give students the critical skills to engage these technologies if we refuse to engage them ourselves? To that end, I really think students have to experiment and play, just as we do as their instructors. I liked how Colby (2025) anchored her work to play in exactly that way. Learning happens by doing, and so we must do. In this case, “doing” means engaging with GenAI so that we can see where it excels, where it falls short, and everything in between.

This idea of “we must do,” though, raised a question for me: can we adequately critique GenAI without engaging with GenAI itself? I do think the answer is yes, because we can draw on the studies and knowledge that already exist, and we can, like Pigg, turn to recordings and accounts from users. Still, we have to be able to teach this knowledge to our students.

Over these last few weeks, I’ve been working on trainings related to integrating AI into the classroom. A few things stood out to me. I heard on multiple occasions that GenAI writing is “indistinguishable,” which I find horrifying, though I recognize that I am a writing teacher, and that is likely shaping my feelings (which are quite strong, I must say). Another piece that stood out, though, is that we should be in the business of teaching, not trying to catch students cheating.

Whenever cheating comes up in my classrooms, whether plagiarism or GenAI (typically those two, since it is a writing class), I tell my students that I err on the side of trusting them. My first response is to request a conversation and ask questions like “how did this happen?” I make sure they understand what the issue is, and then I give them the opportunity to make up for the mistake. I’ve been told this approach is too soft and that students could be lying to me. Yes, they could be lying to me. But I’m not in the business of catching students lying to me. They’ve made that choice, if indeed they have, and I can’t make them do anything differently. On subsequent occurrences, though, I become harsher, because they’ve had the opportunity to learn from the mistake, and it begins to affect their grade.

This is relevant, though, to how I see GenAI and how we can engage it thoughtfully. Like Wikipedia or a simple Google search, students use the tools around them for help. We, as faculty, do the same. So why not support students in using these tools in ways that help them rather than create issues down the road? To that end, while I do have control over my course content and strive to make the materials relevant, I don’t control whether students submit their own work. I am somewhat resigned to the fact that if they’re going to cheat, they’ll find a way to cheat. And, in my experience, GenAI writing scores poorly anyway because it doesn’t meet the assignment expectations.

Similarly, I’ve been doing some workshops that are introducing me to new-to-me GenAI tools that I believe can be useful as additional research strategies. Can I still use the library and Google Scholar to find sources? Yes. Can I find additional sources through these new tools? Yes. Can I look at a source’s reference list for still more sources? Yes. Should I do only one of these things and call it done? No. And that’s where teaching the critical use of these tools is so important. They won’t remove the need for brainstorming, researching, drafting, revising, and editing; they facilitate the process. (I should think about whether I like the word “facilitating” here….) This goes to Knowles’s (2004) argument for machine-in-the-loop writing, where the author retains the rhetorical load. And I believe editing will be the same. (I need to remember this for my article!)
