AI and Technical Writing: Can a Computer Write a Manual?

As Artificial Intelligence (AI) becomes widely used in the corporate world, it’s increasingly important for technical writers to understand exactly what it is and isn’t capable of in the technical writing field. Do technical writers need to worry about being replaced? Can an AI write a technical manual?

What do we mean by AI?

Artificial Intelligence research focuses on developing computer systems that can learn, solve problems, and make decisions. In the context of technical writing, and throughout this blog post, AI refers to generative AI systems, which create new content based on patterns learned from the data they were trained on.

Text-based generative AI systems such as ChatGPT, Google Gemini, and Claude use natural language processing (NLP) to answer questions and have realistic-seeming conversations with their human users.

The AIs that were used during the making of this post were ChatGPT-4o (the “o” stands for “omni”) and Google Gemini, two of the most advanced generative AI systems currently accessible to the public.

What can AI currently do?

Although the capabilities of AIs are increasing very quickly, we often overestimate their current abilities, due partly to the human tendency to anthropomorphize and to the influence of popular science fiction depictions of AIs as fully conscious beings.

Despite appearances, current generative AIs using NLP are not actually thinking. They work by predicting the next words in a sentence based on statistical relationships between words and phrases learned from their training data. An AI is following learned patterns rather than understanding context or meaning.
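This kind of pattern-following can be illustrated with a deliberately tiny sketch. The code below is a toy bigram frequency table, not a neural network, and the corpus is a made-up example; real systems learn from vastly larger corpora, but the basic idea of predicting the next word from learned patterns, without any understanding, is the same:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny made-up corpus, then always pick the most frequent
# follower. Real systems use neural networks trained on vast corpora,
# but they likewise predict from patterns rather than understanding.
corpus = "press the lever down and wait for the toast to pop up".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]
```

Note that this predictor will happily emit a word even when the question makes no sense, which is a crude analogue of why larger systems hallucinate.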

Still, the size and complexity of their training data mean that AIs perform quite well on some tasks, such as drafting outlines and content, brainstorming ideas, giving editing suggestions, handling routine content management, and summarizing analytics.

However, AIs also hallucinate (invent plausible-sounding but factually incorrect content) and often struggle to maintain consistency across complex or lengthy documents. They also sometimes produce content containing biased or offensive material, misinformation, conspiracy theories, or unrecognized satire, all of which were present in their vast training data. AI companies are working to fix these issues, but so far have not been as successful as they’d hoped.

For example, when asked a question like the one I asked ChatGPT-4o in the excerpt below, an AI would generally point out that it couldn’t know the answer because the event I’d asked about was still in the future (or, if applicable, it might state that the date in question falls after the cutoff date for the AI’s training material). However, in this case it hallucinated an answer instead.

A hallucination from ChatGPT-4o, generated on May 27, 2024 (after the Canucks had already been eliminated from the playoffs, and before it was determined which teams would play in the 2024 Stanley Cup final). The provided link did lead to Wikipedia, but the page linked to did not contain any information about who won the (still in the future) championship.

Where can AI be helpful in technical writing?

AI can be helpful when used as a tool to collaborate with human technical writers, taking on some of the more tedious tasks and freeing up the technical writer for more complex tasks that require human expertise and critical thinking.

Generating initial drafts. AI can produce a useful first pass at some common technical writing tasks:

  • Structuring a document based on a given format

  • Filling in the basic details of processes and procedures (especially for products that aren’t entirely new)

  • Entering repetitive text that a technical writer would normally copy and paste

  • Generating first drafts of alt text for images

The three images below show a portion of ChatGPT-4o’s initial draft of a manual for making toast, some feedback I gave it, and then its second draft. Although I provided the feedback in an unstructured, conversational format, it was able to extract the relevant information and revise the manual accordingly.

An excerpt from ChatGPT-4o’s first draft of a manual for making toast.

The feedback I provided, in a conversational, unstructured format.

The second draft.

Brainstorming. AI can act as a brainstorming aid:

  • Summarizing information to help a human technical writer quickly come up to speed before digging deeper

  • Producing multiple possible versions of descriptions, transitions, or summaries

  • Acting as a sounding board for ideas and questions

  • Suggesting questions to ask SMEs (even if it might not think to ask those questions on its own)

Conforming to a style guide. AI can act as a first-pass proofreader, pointing out style guide deviations and giving grammar and plain writing suggestions.
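A minimal sketch of what such a first-pass style check might look like, assuming the style guide can be expressed as simple pattern rules (the rules below are invented examples, not any real style guide):

```python
import re

# Hypothetical style rules mapping a pattern to a suggestion.
# A real style guide would have far more rules; these are illustrative.
STYLE_RULES = {
    r"\butilize\b": 'prefer "use" (plain language)',
    r"\bin order to\b": 'prefer "to"',
    r"\s{2,}": "multiple consecutive spaces",
}

def check_style(text):
    """Return (matched text, suggestion) pairs for each rule violation."""
    findings = []
    for pattern, suggestion in STYLE_RULES.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append((match.group(0), suggestion))
    return findings
```

A check like this catches mechanical deviations only; judging whether a sentence is actually clear still takes a human (or, as discussed below, careful review of an AI's suggestions).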

Audience-specific suggestions. AI can help tailor a manual to its intended audience, giving feedback in the persona of different audience members and making editing suggestions with specific audiences in mind.

The following two images show a portion of ChatGPT-4o’s feedback on its toast-making manual in the personas of first a senior citizen and then a 10-year-old child.

ChatGPT-4o’s response when I asked it to give feedback on the manual from the perspective of a senior citizen.

ChatGPT-4o’s response when I asked it to give feedback on the manual from the perspective of a 10-year-old child.

Content management. AIs can help with content management systems (CMS):

  • Identifying and removing duplicate content

  • Detecting and flagging broken links

  • Managing version control and templates

  • Updating standard sections of documents
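As a sketch of the first of these tasks, here is a minimal duplicate-paragraph check. It catches only exact matches after whitespace and case normalization; real CMS tooling would also catch near-duplicates:

```python
import hashlib

# Flag paragraphs whose normalized text exactly duplicates an earlier
# paragraph. Normalization collapses whitespace and ignores case.
def find_duplicates(paragraphs):
    seen = {}        # digest -> index of first occurrence
    duplicates = []  # (original index, duplicate index) pairs
    for i, para in enumerate(paragraphs):
        normalized = " ".join(para.split()).lower()
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        if digest in seen:
            duplicates.append((seen[digest], i))
        else:
            seen[digest] = i
    return duplicates
```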

Analytics. AI can efficiently collect and summarize analytics information.

Of course, it’s always important to double-check everything an AI produces, and to use common sense when reviewing its suggestions.

Where is AI less useful in technical writing?

Although AI can be a useful tool, it also has limitations and potential pitfalls.

Handling complex processes. Because AI doesn’t actually understand what it’s writing, it can have trouble describing complex, longer, or nuanced processes. It sometimes loses track of earlier steps or details, leading to inconsistencies or gaps in information, and it doesn’t always notice when it has contradicted itself.

The two images below show a potentially dangerous contradiction from the toast manual generated by Google Gemini. It thought there was a risk that the lever on the outside of the toaster might become electrified, so it recommended using metal tongs or a fork to press the lever down, despite knowing that touching an electrically charged item with a metal object can cause electric shock.

Google Gemini advising the user to use metal tongs to prevent electrical shock when contacting a lever that it (mistakenly) believes to be electrified. Red underlines added later.

Google Gemini showing that it did actually have access to the information that you should not touch an electrified lever with metal tongs.

Extrapolation. If an AI is required to extrapolate, it can often make bizarre assumptions that a human would never make, leading to factual errors or nonsensical content. It often has trouble grasping specific industry background knowledge, and generally has difficulty handling ambiguity or making judgement calls. Google Gemini’s assumption above, that users should avoid touching the lever on the toaster with their bare hands due to danger of electrical shock, is an example of AI extrapolation.

Missing information. Rather than flagging the areas where it’s missing information, an AI will often simply make things up (the shortcoming known as “hallucination”). Furthermore, an AI is in danger of hallucinating not only when it’s missing information; it will sometimes hallucinate even when it does have the correct information. This is one reason that its work must always be checked over by a human very carefully.

User research and information gathering. As implied by the sections on extrapolation and hallucination, an AI does not have the judgement needed to conduct effective, thorough user research, nor to draw out essential information that might exist only in someone’s head.

Tone. AI is generally good at mimicking human language, but sometimes has difficulty achieving the particular tone required for technical writing (clear, concise, yet friendly, using plain language principles).

The two images below show an example of a paragraph from Google Gemini’s first draft of the toast manual, followed by an example of a paragraph from its second attempt.

Google Gemini’s first attempt sounds more like a magazine article than a technical manual.

A paragraph from Google Gemini’s revised attempt, better but still not quite right.

Judgement and content review. AI can be useful for proofreading, but the final edit and review to ensure that the content is accurate, comprehensive, and user-friendly should always be done by a human.

Images and diagrams. Because current AIs struggle with text within images, AI can have difficulty creating diagrams or even flowcharts.

The two images below are from a long series of attempts by ChatGPT-4o (which primarily uses the image-generating AI DALL-E to create images) to produce a useful flowchart showing the steps of its toasting procedure.

ChatGPT-4o’s first attempt at a flowchart, containing surreal text and images.

When ChatGPT-4o used another program to create the chart, it did better, but even after many iterations it wasn’t able to label the decision points, nor to include clear arrows that did not obscure the text.

Interestingly, after 25 unsuccessful iterations I expressed my frustration using a pop culture reference instead of continuing to carefully explain the issues with the flowchart, and ChatGPT-4o not only understood my reference to the streaming show “The Good Place”, but finally produced a functioning flowchart, as shown in the three images below.

On one hand, this shows that AI has a fairly sophisticated comprehension of the ways humans express themselves using language. On the other hand, it shows how unpredictable AI can be: I certainly didn’t expect that an off-handed reference to an unrelated piece of entertainment would suddenly make ChatGPT-4o understand what I was looking for in a chart about making toast!

The conversation in which I expressed my frustration at ChatGPT-4o repeatedly telling me it had fixed the issues I pointed out, and then producing a chart that still had exactly the same issues.

The workable flowchart that ChatGPT-4o finally produced in response to my frustrated pop culture reference. Notably, this was the first chart out of 26 attempts that had everything spelled correctly, arrows that pointed to the right place (and didn’t obscure the text), and correctly labelled decision points. Given the previous attempts, I had not expected this to happen any time soon.

Out of curiosity, I asked ChatGPT-4o if it had understood my reference, and it not only correctly explained what had happened in the scene I’d been referring to, but also why I had referred to it in this context.



Tips for using AI in technical writing

It seems clear that while AI can be a useful tool for technical writers, it must be used very carefully. Here are some tips for how to effectively use AI as part of a technical writing workflow.

  • Use prompt engineering. Tailor your input to improve the chances of getting the output you’re looking for. Always be specific and detailed in your prompts, and try these techniques:

  • Give the AI a persona. This will set the context of how it answers you. For example, if you want it to write a technical manual, first tell it that you want it to act as if it were a technical writer during your interaction.

  • Use the “chain of thought” method. Build up its work step by step, rather than all at once. For example, first ask it to make an outline, then to write the first line of each paragraph, then to add in the full text of each paragraph, and finally to check its work and make improvements.

  • Use the “few-shot” method. Feed the AI an example of what you want it to write (for example, previous sections of the manual, or sections of a similar manual) before you ask it to create anything.
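Putting the persona and few-shot techniques together with the first step of a chain of thought, a chat-style prompt might be assembled like this. The message structure below mirrors common chat APIs, but field names and roles vary by vendor, so treat it as an illustrative sketch rather than code for any specific service:

```python
# Sketch of a prompt combining a persona, few-shot examples, and the
# first step of a chain of thought. The message format is illustrative;
# real chat APIs differ in their exact fields and role names.
def build_prompt(task, examples):
    messages = [
        # Persona: set the context for every answer that follows.
        {"role": "system",
         "content": "You are an experienced technical writer. "
                    "Write clearly and concisely, using plain language."},
    ]
    # Few-shot: show the AI samples of the desired style before asking
    # it to create anything.
    for sample in examples:
        messages.append({"role": "user",
                         "content": "Example of the desired style:"})
        messages.append({"role": "assistant", "content": sample})
    # Chain of thought: ask for the first step (an outline) rather than
    # the finished document all at once.
    messages.append({"role": "user",
                     "content": f"First, write only an outline for: {task}"})
    return messages
```

Later requests in the same conversation would then build on the outline step by step, as described above.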


  • Re-establish context. If you need to take a break while working with an AI, prime it to get back into context by pasting in your previous interaction, then asking it to summarize it and continue the conversation.

  • Use paid versions. Paid versions of AIs are generally more advanced than the free versions.

  • Always proofread and fact-check. Think of the AI as a collaborator who is prone to making serious mistakes and needs their work checked over very carefully.

Conclusion: Can an AI write a technical manual?

As shown in the examples above, asking ChatGPT-4o and Google Gemini to write a simple manual for making toast had mixed results.

  • Both AIs wrote decent first drafts and were able to refine them when given feedback. However, they also both made incorrect assumptions about how toasters work (despite showing they had access to the correct information when asked follow-up questions).

  • Google Gemini gave dangerous advice (using metal tongs to touch a lever that it believed to be potentially electrified) and didn’t notice that it had contradicted itself within the space of a few paragraphs.

  • Google Gemini, in particular, had difficulty achieving the proper tone.

  • ChatGPT-4o, although it seemed very confident, struggled with diagrams (Google Gemini can’t produce images at all).

I also asked both AIs to help with the writing of this blog post itself. They each produced a serviceable first draft when I gave them my outline, but I quickly discovered that it would be much easier for me to edit their drafts manually than to suggest repeated refinements. In the end, partly because it was a little overwhelming to have two separate first drafts (which would not have happened if I had chosen just one AI to work with, as would normally be the case), I didn’t use much of what they had written.

However, when creating my own first draft I did make use of a few of the phrases the AIs had used in their drafts. I also asked ChatGPT-4o to edit my own draft once I was finished with it, and I accepted some of its suggestions.

In conclusion, current AI can be a useful tool for technical writers when used strategically, but it seems clear that AI is a long way from being able to write a technical manual without significant human collaboration.

by Jenny Riecken




