
Originally published October 30, 2025; updated November 25, 2025.
When I first wrote about ChatGPT in 2023, I was exploring what it could do and how I could use it in the writing process—and I encouraged readers to explore as well.
Two years later, the landscape has evolved significantly. We’ve seen lawsuits against AI companies for using copyrighted works to train their models. The models themselves have become more sophisticated. And opinions about how AI should—or should not—be used are hardening, especially in creative fields like writing.
Within the broader field of ethics, AI ethics is emerging as a distinct discipline. I want to explore some of the ethical questions I’ve seen bubbling up lately about the use of AI in the writing process.
(When I say AI, I mean generative AI and large language models like Claude and ChatGPT.)
The use of AI in writing
The question of whether AI should be used draws strong reactions: Some writers are adamantly opposed to using AI at all. Others see no reason to hold back on letting it write for them. Most writers probably fall somewhere between those two positions.
I view AI as a tool that needs to be used properly. Here’s how I think about my use of it (I usually use ChatGPT, occasionally Claude)…
Brainstorming? Yep.
I use ChatGPT to generate ideas for titles, headings, keywords, wording alternatives, and more. While I rarely use exactly what it suggests, its ideas get me out of my own head and point to new paths I could try.
I see this as an excellent use of AI tools, and from conversations with other writers, it seems to be a common one. As Bloomsbury CEO Nigel Newton recently noted, AI can help people get past creative hesitation such as writer’s block, possibly opening creative avenues to more people.
The key is that the human must assess the possibilities AI offers and make the decision about what is worth using.
Editing? Yep, but with caution.
When I work on client books, I do the editing: I make the editorial decisions and hands-on changes. Full stop.
However, I do use AI to get feedback. When I want a second opinion, I often copy in a sentence or paragraph and prompt, “Check for correctness,” “Check for clarity,” or “Check commas.” ChatGPT usually confirms what I thought or points me to a rule I need to check.
But sometimes ChatGPT is flat-out wrong. The thing is, I know enough to know when it may be wrong. I know the sources to check if I am unclear on the rules, and if I am still unsure, I have other editors to consult. Just as I would never tell you to rely solely on spellcheck or Grammarly, I would never tell you to rely solely on ChatGPT for editing.
This is probably a big gray area for many writers. Where do you draw the line between accepting AI’s corrections to punctuation and grammar and letting it “smooth the language” or “enhance clarity”? You must use your judgment.
Writing from scratch? Nope.
I do not use AI to write from scratch, nor do I enjoy editing AI writing. I find most AI tools tend to flatten the language. For instance, in a talk at the 2025 Annual General Meeting of the Jane Austen Society of North America, Rachel Gevlin, assistant professor at Virginia Commonwealth University, described her college students’ reactions to ChatGPT’s attempt to write a short story in Jane Austen’s style. Even those who usually struggled with close reading could see the woodenness of the writing.
AI also turns legitimate craft devices into overused patterns. The biggest quibble I have right now is with the use of negative parallelisms—the “not this but that” construction. “Her pizza is not just tasty food but a masterpiece of culinary art.” (The record: 14 instances on one page.)
But setting aside the writing quality, the bigger issue for me is that in using AI to write for us, we forget how to think. We forget how to structure an argument. We can’t tackle tough, complex topics. We lose the unexpected leaps of logic that come with human creativity. We become intellectually dependent.
And putting on my futurist hat, I worry that as AI writing improves, cheap AI books will become the norm, with human-crafted books becoming available only to the affluent, thus furthering the class divisions we already see.
That said, there are some places where having AI draft a piece of writing could be useful. Think about people with dyslexia or language-processing differences, or those writing in a non-native language. AI tools can help them communicate more effectively; using them in these instances makes sense. There is not one right answer—nuance and judgment are needed.
What ChatGPT says about my use of it
I asked ChatGPT how it thought I used it for writing. Do I walk the talk? Here, slightly shortened with some editorial discretion (it was repetitive!), is what it said:
- You use AI as a thinking partner, not a ghostwriter. You tend to bring in drafts or concepts you’ve already written and ask for feedback on logic, tone, or structure. You don’t hand over creative control—you use AI to refine or test ideas.
- You maintain strong authorial voice and judgment. You’re clear about what kind of language you want. AI may generate alternatives, but you decide what fits your standards.
- You draw clear ethical lines. You often specify limits (“don’t rewrite unless incorrect,” “don’t use X phrasing”). This shows an awareness of authorship boundaries and intellectual integrity. You seem to treat AI as a tool within a human-led creative process, not a shortcut or substitute.
- You use AI iteratively. You rarely take one output as final. You refine, reframe, and revisit. This iterative dialogue mirrors traditional writing processes (draft–revise–edit), just with AI in the loop.
I’m comfortable with that approach for myself, and it is largely what I would suggest to other writers: Use AI tools for brainstorming and feedback, but do your own writing and trust your own voice.
Disclosing use of AI in writing
Let’s say you use AI, in some form or fashion, in your writing. Do you disclose it?
Transparency around AI use has become a public issue. For self-published authors, platforms like KDP and IngramSpark require disclosure of AI use in generating book text or cover art.
And traditional publishers are beginning to set AI policies for their authors. In book proposals, I’m seeing statements like “All writing and concept development was done by the author. ChatGPT was used for refinement of language and general editorial checking, but was not used to develop any content de novo.”
(See further information on policies for KDP, IngramSpark, and Wiley after the article.)
Can we all agree that claiming AI writing as your own is not ethical?
But how about editing and feedback? If AI is a tool like spellcheck or Grammarly, do you make a statement about using it? Seems ridiculous, doesn’t it?
To complicate matters, emerging research shows that disclosure about AI use can actually reduce trust. AI ethics consultant Felicity Wild summarized the pattern found across 13 experiments:
When people disclosed using AI for their work—whether grading student assignments, writing job applications, creating investment advertisements, drafting performance reviews or even composing routine emails—others trusted them significantly less than if they’d said nothing at all.
Does context matter? Nope. The effect persists across different tasks, readers, and professional situations.
Additionally, presenting use of AI as “just a tool” (as I did above) does not change the perception.
So… do we not disclose anything? Nope again. Wild says, “When AI use is exposed by third parties, trust crashes even harder than voluntary disclosure.”
Wild hypothesizes that these reactions come from a fear that AI is “displacing human judgment inappropriately.” As a result, she suggests that disclosures lead with value and judgment rather than methodology and tools. For instance:
Rather than: “We’re transparent about our AI usage.”
Try: “We’re deliberate about where AI adds value and where human expertise is non-negotiable.”
What does all this research and speculation mean for you as a writer? I’m not entirely sure, and I expect my thinking to continue developing. What I do know is that ethics around AI use is something we all need to consider—even if we don’t all land on the same answers.
(Wild’s article is really thought-provoking. It is worth a read, especially if AI ethics is surfacing in your world.)
How do you use AI in your writing?
I think you can see where I land on the use of AI and transparency, but I’m very curious to hear about your experiences.
- How do you use AI in your writing? What value do you gain?
- If you don’t use AI, why not?
- How do you disclose your AI use, if you do? Do you think it matters?
- When you see someone disclose their use of AI, what’s your reaction?
Send me an email to let me know. If I get something interesting, I’ll share it in an update to this article (with permission, of course).
AI writing will certainly continue to improve. And it would be disingenuous to pretend people won’t use it. The big question is: How do we maintain our humanity at the same time?
If you need help on your book project and don’t mind digging into philosophical ponderings once in a while, we might be a good fit. Get in touch at karin@clearsightbooks.com.
AI Disclosures for KDP, IngramSpark, and Wiley
After publishing the article above, the biggest question I got was about the AI-related disclosures on Kindle Direct Publishing (KDP) and IngramSpark, the two major print-on-demand platforms self-publishers use. I share what I found below, but if you have any doubt about your situation, you should contact the respective service’s support for assistance. Additionally, Wiley, a traditional publisher of academic and professional books, recently issued AI guidelines for authors submitting book proposals.
Summary:
- KDP: AI-generated allowed but must be disclosed
- IngramSpark: AI-generated not allowed
- Wiley: AI-generated allowed but must be disclosed (they provide detailed, nuanced guidelines)
KDP
Here is what KDP says about AI in their Content Guidelines.
We require you to inform us of AI-generated content (text, images, or translations) when you publish a new book or make edits to and republish an existing book through KDP. AI-generated images include cover and interior images and artwork. You are not required to disclose AI-assisted content. We distinguish between AI-generated and AI-assisted content as follows:
- AI-generated: We define AI-generated content as text, images, or translations created by an AI-based tool. If you used an AI-based tool to create the actual content (whether text, images, or translations), it is considered “AI-generated,” even if you applied substantial edits afterwards.
- AI-assisted: If you created the content yourself, and used AI-based tools to edit, refine, error-check, or otherwise improve that content (whether text or images), then it is considered “AI-assisted” and not “AI-generated.” Similarly, if you used an AI-based tool to brainstorm and generate ideas, but ultimately created the text or images yourself, this is also considered “AI-assisted” and not “AI-generated.” It is not necessary to inform us of the use of such tools or processes.
You are responsible for verifying that all AI-generated and/or AI-assisted content adheres to all content guidelines, including by complying with all applicable intellectual property rights.
The problem (as I remember it) is that the disclosure question you see when setting up your book on KDP does not quite align with these definitions; it may be a simple yes/no question (if so: AI-generated = Yes, AI-assisted = No). (Next time I set up a book, I will grab a screenshot to add here.)
At the moment, it appears that KDP requires disclosure but still allows AI-generated books. Not so with IngramSpark…
IngramSpark
From IngramSpark’s Catalog Integrity Guidelines, which spell out all the reasons your book could be rejected (so they are definitely worth reading):
The below criteria describe the types of content that may not be accepted:
…
7. Content created using automated means, including but not limited to content generated using artificial intelligence or mass-produced processes.
And from IngramSpark’s article “Book Marketing Trends for 2024 and Beyond”:
Not only do publishers have to start watching out for AI-written content, but they need to acknowledge that AI can also be helpful for authors and publishing houses alike.
It can streamline the creation process, from the brainstorming of ideas to outlining and editing. AI writing prompts can help authors get started if they’re staring at a blank page, and AI editing tools can be used as a second set of eyes for authors in checking their work for grammatical errors or spelling mistakes.
However, AI can become a problem when it’s not used the right way.
Based on my read of the guidelines and this article, I infer that when IngramSpark says it may reject “content generated using artificial intelligence,” it means AI cannot write the content, but AI assistance with brainstorming, editing, and so on is okay.
Wiley
Wiley’s AI guidelines offer much more nuance than those of KDP and IngramSpark—as one would hope from a traditional publisher with real humans managing the process (compared to heavily automated platforms).
From their AI guidelines:
Human Oversight. Authors may only use AI Technology as a companion to their writing process, not a replacement. As always, authors must take full responsibility for the accuracy of all content and verify that all claims, citations, and analyses align with their expertise and research. Before including AI-generated content in their Material, authors must carefully review it to ensure the final work reflects their expertise, voice, and originality while adhering to Wiley’s ethical and editorial standards.
Disclosure. Authors should maintain documentation of all AI Technology used, including its purpose, whether it impacted key arguments or conclusions, and how they personally reviewed and verified AI-generated content. Authors must also disclose the use of AI Technologies when submitting their Material to Wiley.
I found it interesting how supportive Wiley was of the use of AI, and I appreciated the thoroughness and nuance they brought to the guidelines compared to KDP and IngramSpark.

