
When I first wrote about ChatGPT in 2023, I was exploring what it could do and how I could use it in the writing process—and I encouraged readers to explore as well.
Two years later, the landscape has evolved significantly. We’ve seen lawsuits against AI companies for infringing copyright to train their models. The models themselves have become more sophisticated. And people are getting stronger in their opinions about how AI should—or should not—be used, especially in creative fields like writing.
Within the broader field of ethics, AI ethics is emerging as a distinct discipline. I want to explore some of the ethical questions I’ve seen bubbling up lately about the use of AI in the writing process.
(When I say AI, I mean generative AI and large language models like Claude and ChatGPT.)
The use of AI in writing
The question of whether AI should be used draws strong reactions: Some writers are adamantly opposed to using AI at all. Others see no reason to hold back on letting it write for them. Most probably fall somewhere between the two positions.
I view AI as a tool that needs to be used properly. Here’s how I think about my use of it (I usually use ChatGPT, occasionally Claude)…
Brainstorming? Yep.
I use ChatGPT to generate ideas for titles, headings, keywords, wording alternatives, and more. While I rarely use exactly what it suggests, its ideas get me out of my own head and point to new paths I could try.
I see this as an excellent use of AI tools, and from conversations with other writers, it seems to be a common use. As Bloomsbury CEO Nigel Newton recently noted, AI can help people get past creative hesitation such as writer’s block, possibly opening creative avenues to more people.
The key is that the human must assess the possibilities AI offers and make the decision about what is worth using.
Editing? Yep, but with caution.
When I work on client books, I do the editing: I make the editorial decisions and hands-on changes. Full stop.
However, I do use AI to get feedback. When I want a second opinion, I often copy in a sentence or paragraph and prompt, “Check for correctness,” “Check for clarity,” or “Check commas.” ChatGPT usually confirms what I thought or points me to a rule I need to check.
But sometimes ChatGPT is flat-out wrong. The thing is, I know enough to know when it may be wrong. I know the sources to check if I am unclear on the rules, and if I am still unsure, I have other editors to consult. Just as I would never tell you to rely solely on spellcheck or Grammarly, I would never tell you to rely solely on ChatGPT for editing.
This is probably a big gray area for many writers. Where do you draw the line on taking AI’s advice on making corrections to punctuation or grammar and “smoothing the language” or “enhancing clarity”? You must use judgment.
Writing from scratch? Nope.
I do not use AI to write from scratch, nor do I enjoy editing AI writing. I find most AI tools tend to flatten the language. For instance, in a talk at the 2025 Annual General Meeting of the Jane Austen Society of North America, Rachel Gevlin, assistant professor at Virginia Commonwealth University, described her college students’ reactions to ChatGPT’s attempt to write a short story in Jane Austen’s style. Even those who usually struggled with close reading could see the woodenness of the writing.
AI also turns legitimate craft devices into overused patterns. The biggest quibble I have right now is with negative parallelisms, the “not this but that” construction: “Her pizza is not just tasty food but a masterpiece of culinary art.” (My record so far: 14 instances on a single page.)
But setting aside the writing quality, the bigger issue for me is that in using AI to write for us, we forget how to think. We forget how to structure an argument. We can’t tackle tough, complex topics. We lose the unexpected leaps of logic that come with human creativity. We become intellectually dependent.
And putting on my futurist hat, I worry that as AI writing improves, cheap AI books will become the norm, with human-crafted books becoming available only to the affluent, thus furthering the class divisions we already see.
That said, there are some places where having AI draft a piece of writing could be useful. Think about people with dyslexia or language-processing differences, or those writing in a non-native language. AI tools can help them communicate more effectively; using them in these instances makes sense. There is not one right answer—nuance and judgment are needed.
What ChatGPT says about my use of it
I asked ChatGPT how it thought I used it for writing. Do I walk the talk? Using editorial discretion (it was repetitive!), here’s a slightly shortened version of what it said:
- You use AI as a thinking partner, not a ghostwriter. You tend to bring in drafts or concepts you’ve already written and ask for feedback on logic, tone, or structure. You don’t hand over creative control—you use AI to refine or test ideas.
- You maintain strong authorial voice and judgment. You’re clear about what kind of language you want. AI may generate alternatives, but you decide what fits your standards.
- You draw clear ethical lines. You often specify limits (“don’t rewrite unless incorrect,” “don’t use X phrasing”). This shows an awareness of authorship boundaries and intellectual integrity. You seem to treat AI as a tool within a human-led creative process, not a shortcut or substitute.
- You use AI iteratively. You rarely take one output as final. You refine, reframe, and revisit. This iterative dialogue mirrors traditional writing processes (draft–revise–edit), just with AI in the loop.
I’m pretty good with that approach for myself, and it is largely what I would suggest to other writers: Use AI tools for brainstorming and feedback, but do your own writing and trust your own voice.
Disclosing use of AI in writing
Let’s say you use AI, in some form or fashion, in your writing. Do you disclose it?
Transparency around AI use has become a public issue. For self-published authors, platforms like KDP and IngramSpark require disclosure of AI use in generating book text or cover art.
And traditional publishers are beginning to set AI policies for their authors. In book proposals, I’m seeing statements like “All writing and concept development was done by the author. ChatGPT was used for refinement of language and general editorial checking, but was not used to develop any content de novo.”
Can we all agree that claiming AI writing as your own is not ethical?
But how about editing and feedback? If AI is a tool like spellcheck or Grammarly, do you make a statement about using it? Seems ridiculous, doesn’t it?
To complicate matters, emerging research shows that disclosure about AI use can actually reduce trust. AI ethics consultant Felicity Wild summarized the pattern found across 13 experiments:
When people disclosed using AI for their work—whether grading student assignments, writing job applications, creating investment advertisements, drafting performance reviews or even composing routine emails—others trusted them significantly less than if they’d said nothing at all.
Does context matter? Nope. The effect persists across different tasks, readers, and professional situations.
Additionally, presenting use of AI as “just a tool” (as I did above) does not change the perception.
So… do we not disclose anything? Nope again. Wild says, “When AI use is exposed by third parties, trust crashes even harder than voluntary disclosure.”
Wild hypothesizes that these reactions come from fearing that AI is “displacing human judgment inappropriately.” As a result, she suggests that disclosures lead with value and judgment rather than methodology and tools. For example:
Rather than: “We’re transparent about our AI usage.”
Try: “We’re deliberate about where AI adds value and where human expertise is non-negotiable.”
What does all this research and speculation mean for you as a writer? I’m not entirely sure, and I expect my thinking to continue developing. What I do know is that ethics around AI use is something we all need to consider—even if we don’t all land on the same answers.
(Wild’s article is really thought-provoking. It is worth a read, especially if AI ethics is surfacing in your world.)
How do you use AI in your writing?
I think you can see where I land on the use of AI and transparency, but I’m very curious to hear about your experiences.
- How do you use AI in your writing? What value do you gain?
- If you don’t use AI, why not?
- How do you disclose your AI use, if you do? Do you think it matters?
- When you see someone disclose their use of AI, what’s your reaction?
Send me an email to let me know. If I get something interesting, I’ll share it in an update to this article (with permission, of course).
AI writing will certainly continue to improve. And it is disingenuous to pretend people won’t use it. The big question is: How do we maintain our humanity at the same time?
If you need help on your book project and don’t mind digging into philosophical ponderings once in a while, we might be a good fit. Get in touch at karin@clearsightbooks.com.

