I’m currently doing some reading to help develop and refine my own perspective on the intersections of GenAI and writing. Of course, I’m using this to work on an article that I intend to submit for publication sometime in the next few months.
Matt Davis and Kara Taczak wrote in a 2024 Composition Studies editorial about GenAI in writing, and specifically about the different positionalities folks are taking up. I would describe myself as a cautious integrationist. I say this because my perspective draws on scholars like Cindy Selfe, Gail Hawisher, Dennis Baron, and others who have explored the intersections of technologies and writing in their own work. What I mean by integrationist is that GenAI is here, and I don’t see it going away; I also mainly see resistance as futile. However, there’s a whole segment of the field that is focused on refusing GenAI. Admittedly, I need to read those perspectives more closely, and notably they are not advocating for a “head-in-the-sand” approach. It’s for those reasons that I see myself as a cautious integrationist. I’ve got my concerns around ethics, loss of critical thinking, the environment, biases, the upholding of white supremacist narratives and voices, and more. My goal, though, is to articulate my own positionality on the subject.
The stances in the field are so strong on this that I’m finding it challenging to find my place in the conversation at the moment. Do I want to uphold technology that furthers white supremacy? Absolutely not. And yet, how many technologies and systems already in place are doing exactly that? I don’t know that GenAI refusal is going to address that particular problem. It’s also hard to discuss with my colleagues because they become so impassioned (and I’m glad they are!), wherever they land on the subject.
In striving to inform my positionality, I am currently reading Sidney I. Dobrin’s AI and Writing. And wow, have I gotten mad at the text on multiple occasions so far. Some of its metaphors are simply frightening, like the claim that working with GenAI is like working with the “smart kid” on a project as a “collaborator.” Certainly, if such a collaboration is done well, both individuals can grow and learn from the experience. However, the framing of choosing the “smart kid” suggests the all-too-common scenario in which the “smart kid” does nearly all of the labor for the project. And where is the learning happening in that approach?
Dobrin cites GenAI expert Cath Ellis, who used the metaphor of climbing Mt. Everest versus taking a helicopter to the top to explain how GenAI functions. The helicopter can take us there, but what are we learning if we take that shortcut? We miss the experience of climbing, the challenges and the growth that happen as a result. We leave the mountain simply having used our capital to reach the end without taking the journey to get there. At the moment, I’m hard-pressed to think of an instance where taking the helicopter would be the ideal pathway forward. The journey can suck, quite frankly, but again, that’s where learning and growth happen, and over time the individual becomes more proficient, efficient, skilled, and so on.
Broadly speaking, I can appreciate and recognize some value in thinking of GenAI as a collaborator. For instance, it helps to identify and correct grammatical errors, though it is sometimes wrong. Additionally, it can serve as a faux interlocutor (it can’t be a real one because it doesn’t think), and it can provide feedback on a draft with the right prompting. However, all of its uses, in my mind, require a strong working knowledge at the start to discern when it is correct and when it is going off the rails. And if that’s the case, what are we doing using it at all? Just as with peer reviews of our work, we have to take its feedback, whether grammatical, stylistic, or content-related, with a grain of salt. And just as with peer reviews, having an exchange about our research, writing, and thinking has merit, so the real question becomes whether a faux exchange has any merit. Right now, I only see it as an imitation of what a human would do (I’m not sure what I think of the quality of the imitation, and it would likely change by context). Sometimes the right human to fill a need may not be available (e.g., if I’d benefit from talking through writer’s block at a time when my network isn’t available), so perhaps then?
Dobrin really advocates for more collaboration with GenAI, with caution given the hallucinations, biases, and the like. However, after having read the chapter on the writing process and having played with GenAI on my own, I can really only see it in the inventio stage of writing, where GenAI can function as a tool to bounce ideas off of, get a different perspective, gather some starting names or concepts to look up for research, and the like. Beyond that, I don’t know how much of what it does holds any water.
And a quick note about a pet peeve: the conflating of AI with GenAI. AI has been around a while and does important work (and even some GenAI is moving in that direction), such as emergency weather detection and notices. So, I don’t think we can or should write it off wholesale. However, we absolutely need to examine and consider what is happening behind the scenes so we can properly assess the “how” of any benefit we might gain from its use in such cases.
Note: I asked Microsoft Copilot to suggest a title, and here’s the output:
Thanks for sharing your blog—it’s thoughtful, nuanced, and clearly grappling with the complexities of GenAI in writing. Based on its tone and themes, here are a few title suggestions that reflect your cautious yet critical engagement:
Reflective & Academic
- “Navigating the GenAI Terrain: A Cautious Integrationist’s Perspective on Writing and Technology”
- “Between Resistance and Integration: Positioning Myself in the GenAI and Writing Debate”
- “Writing with Machines: Ethical Tensions and the Role of GenAI in Composition”
Metaphorical & Thought-Provoking
- “Climbing the Mountain, Not Taking the Helicopter: Learning, Labor, and GenAI in Writing”
- “Collaborating with the ‘Smart Kid’: Rethinking GenAI’s Role in the Writing Process”
- “Echoes of White Supremacy in the Machine: A Writer’s Ethical Dilemma with GenAI”
Concise & Catchy
- “GenAI and Me: A Cautious Conversation”
- “Writing in the Age of GenAI: Ethics, Resistance, and Integration”
- “The Faux Interlocutor: GenAI’s Place in the Writing Process”
Would you like the title to lean more academic, poetic, provocative, or something else entirely? I can tailor more options based on your preferred tone or audience.
Back to me: I don’t like any of these titles. However, it did give me the idea to use “Cautious Integrationist” in my title.