ChatGPT & writing: Making friends with AI

Last month, in a reference to ChatGPT, I made a flippant comment about being a slow adopter of new technology. And generally I am. But I decided to give ChatGPT a whirl to see what all the hubbub was about. To my surprise, I’ve started keeping a browser tab open for ChatGPT, alongside my Chicago Manual of Style and Merriam-Webster.

I am using the basic free version, and the types of things I’m doing with it are pretty…basic. I’ve not added any plug-ins or paid tools. On LinkedIn I’m seeing loads of posts about these kinds of enhancements, but I gotta start simple.

Here’s how I’ve been using ChatGPT so far in my writing and editing work.

Gathering background information

When I’m writing about a less familiar topic, it can get tedious to run Google searches and visit numerous sites that may or may not have relevant information. So one of the first things I started exploring with ChatGPT is the ability to gather background information.

For example, I’ve asked (actual sloppy typing):

  • “what is interval training and how does it help heart? is it better than other types of training?”
  • “what type of heart rates are used in assessing heart health? what rate should i target during exercise?”
  • “describe atherosclerosis”

Having heard numerous warnings about ChatGPT accuracy, I’ve been limiting my queries to areas I have some general familiarity with. The answers provide enough to prompt my memory or give me a general understanding.

I’m thinking of ChatGPT as being a tool similar to Wikipedia—a good place to start for basic information, but not an authoritative source (i.e., don’t cite it as a source).

Finding statistics

Similarly, I’m using ChatGPT to find high-level statistics. For example, “provide the most important statistics about cardiovascular disease in the US.”

After my first request, I immediately learned to say “please include links to citations.”

Part of the problem I’ve found in asking about statistics is that the response can include many different sources; the wording may be similar but not the same, and numbers for seemingly the same thing can be different. Additionally, ChatGPT isn’t doing live searches; it’s working with a dataset it was trained on. The dataset stops in 2021, so anything after that is missing.

(Me: “what is the date chatGPT is trained through? sometime in 2021?” ChatGPT: Yes, that’s correct. ChatGPT is trained on a dataset with a knowledge cutoff in September 2021. Any events, discoveries, or other information that occurred after that date would be unknown to ChatGPT unless specifically mentioned during our conversation.)

If I’m just trying to understand the magnitude of the problem—cardiovascular disease is the leading cause of death in the US, with one in three deaths related to it—ChatGPT’s response is adequate. If I’m going to cite numbers in whatever I’m writing, I’m going to follow the links to find the latest authoritative data from the Centers for Disease Control and Prevention or the American Heart Association.

Finding comp titles

Comparable and competitive titles—“comps”—are used in book proposals. What books will your book be up against in the market? When I ran across this article about asking ChatGPT for help finding relevant titles, I thought it offered a brilliant idea.

My prompt: “Suggest 20 comp titles for nonfiction books about nutrition. Do not include any novels. Books should be published within the past three years.” As soon as I saw the list, I had a flurry of refinements: “mid-list books only, no bestsellers”; “include the publication year”; “published 2020 or later.”

For comps to be relevant, they need to be within roughly the past three years. And that is the big constraint right now: ChatGPT’s dataset ends in 2021.

Revising copy

Sometimes I get stuck in a writing rut. By feeding a draft into ChatGPT, I’ve been able to get new ideas about how to approach a piece.

For example, I’ve entered a book description draft and said, “help make this more compelling for a potential buyer.” On the plus side, ChatGPT shook things up, offered new possibilities, and got salesier than I usually do. On the minus side, it used way too many adjectives and clichés, made things way too hyperbolic, and was way too repetitive.

If you need to punch things up, give ChatGPT a try with your copy, but be aware that you’ll need to keep working on the results.

Creating outlines and summaries

I’ve seen some folks suggest using ChatGPT to create outlines and summaries of longer works. So far I’ve had limited success.

I’ve tried entering the chapter of a book and asking for a summary. It was sort of right, but not exactly, and I ended up having to do a lot of revision to make it work.

I tried asking for an outline of the same chapter. The outline laid out what was there, but ChatGPT did not improve it in any way. Perhaps I had too much structure already built in. I’ll keep playing with both of these.

Asking sloppy questions

One thing I love about ChatGPT, especially compared to Google, is that I can ask sloppy questions when I don’t really know what I want, and then iterate to get closer. ChatGPT remembers what I’ve been asking and helps me along in that refinement.

For example: “at the swimming pool when i was a kid there was a machine that we would crank that would wring out our towels. what was it called?”

The machine you’re referring to is called a towel wringer or a clothes wringer. It was a common device used in the past to remove excess water from clothes and towels before they were hung up to dry.

But I wasn’t quite certain that’s what I was looking for (seemed too obvious), so I asked, “what does a towel wringer look like?” After getting a description of it and examples of where it was used, I asked, “what is a mangle?” This answer clarified that in the US, mangles were larger and more industrial, whereas towel wringers or clothes wringers were smaller and more domestic. I’ll note that I asked for a picture of a mangle, but ChatGPT was unable to supply images.

I’ve asked similar series of questions trying to find other terms out of my brain’s grasp, including this fun one: “i am trying to find the word for fancy that is pronounced something like zhuzhi or jeuji. what is it?” I finally got to “jus-jus,” which is a slang term for something that is fancy or elaborate, often used in the fashion industry. It is pronounced “zhoo-zhoo” or “zhuzh.” … There are other spellings for “jus-jus” that you might come across, including “zhoozh,” “zhoosh,” “jooj,” or “juj.” No wonder I couldn’t remember how to spell it!

Generating word ideas

Some folks use ChatGPT for generating gift ideas or blog post ideas. I use it for generating word ideas! While I usually have a thesaurus handy, I’ve been finding ChatGPT helpful in finding synonyms when I don’t know exactly what I want a synonym for.

For instance: “what is a word that means ridiculously antiquated and illogical?”

One word that could convey the meaning of “ridiculously antiquated and illogical” is “archaic.” It refers to something that is outdated, old-fashioned, and no longer relevant or logical in the present context. Another word that captures this sense is “absurd,” which means wildly illogical, unreasonable, or contrary to common sense.

Again, I’m loving the ability to be sloppy in my original question and then iterate as needed.

Asking quick questions

Sometimes I am lazy. I just want a quick answer even when I know where to go for better info. My quick question to ChatGPT about the date through which it is trained was probably accurate (at least I hope it knows its own training dates), but other things are hit-and-miss.

I tried asking “who carries waterman pens in raleigh, nc?” Because of the date limitation, two of the four options ChatGPT provided no longer even existed.

The other day I tried this quick question: “on ingramspark how many characters can you use in the keyword field?”

As of my knowledge cutoff in September 2021, IngramSpark does not have a specific limit mentioned for the number of characters allowed in the keyword field.

Bzzt. I happen to know there is a limit, but I couldn’t remember what it was and was being lazy; I had to get unlazy.

I also asked “where is the field to indicate bleed or no bleed on ingramspark.”

When you are setting up your print file on IngramSpark, you will be asked to select whether your file has bleed or not. This option is available in the “Interior & Paper” section of the setup process.

This answer, I am pretty sure, is reasonably correct.

My point: If you are asking factual questions along these lines, you may need to discern whether ChatGPT is the place to go, or whether Google (with real-time info) is a better option.

“Computer, please analyze…”

In playing with ChatGPT I am starting to feel like Geordi La Forge in Star Trek asking the computer to analyze the latest interstellar anomaly—“Computer, please analyze the trajectory of the moon about to hit the planet…”—and then refining his query when he needs something more or different.

(Funny story: I was trying to find a real example from an episode about an asteroid hitting a planet. ChatGPT gave me not one incorrect answer but three. At least it apologized when I pointed out its errors.)

ChatGPT feels like the very beginnings of our Star Trek computer. It’s useful for gathering basic information, generating ideas, and offering alternatives, but it definitely has limits with respect to data and creativity. For now, I’d absolutely recommend spending some time exploring it as a general tool. In your writing, I’d suggest using it as a starting place but definitely not as an ending place.
