AI as a Tool in Engineering Editing

Tim Green

I am an editor and a professional engineer. I teach other engineers how to write good engineering reports. In every session, students ask what role AI should play, if any, in the writing and editing process.

So what does the rise of AI mean for an editor editing engineering documents?

Why people use AI

AI is a tool. Why would an engineer or an editor use any tool?

One reason would be to extend our capabilities so we can do something that we are otherwise unable to do: translating a technical report into another language, for example. This is potentially dangerous for professionals because they hold responsibility for their work regardless of how it is done. Consider the lawyers who never checked the relevant-sounding but bogus precedents cited by their hallucinating AI. If we trust the tool more than it deserves, or lack the expertise to confirm the quality of its output, then we have a problem.

A second reason for using a tool would be to save time on an onerous task that we know very well how to do. Using a calculator is a non-AI example in engineering. Of course, the engineer still uses their professional judgment and does an order-of-magnitude check to validate the calculator’s results.

I have spoken with non-engineering editors about how they use AI. Many of the applications they describe fall into the second category above. An editor might use AI for a first pass, creating a summary or removing egregious errors in seconds. The output of that first pass then becomes the focus of the editor’s hands-on editing skills in the second.

The role of AI in editing engineering work

As a professional engineer, I have a nagging worry about using AI for tasks like composing the first draft of an executive summary or taking the first run through a draft paper written by an inexperienced engineer. 

The most important parts of any document are the ideas it contains. Yes, clear communication is important too, but only insofar as it expresses those ideas correctly and compellingly. If the ideas themselves are flawed, then tinkering with their expression only goes so far. In the extreme case for engineering writing, the well-edited presentation of poor ideas can incur professional liability and pose a danger to the public.

In my experience as both editor and engineer, the journey (the writing process) is much more important than the destination (the well-edited written report). Yes, the latter is essential, but it cannot take form until the former is done and done well. One danger, then, may be using AI too early in the writing or editing process, analogous to diving into copy editing of something that really needs developmental editing first.

A real person struggling to write an executive summary might uncover shortcomings in the body of the study (e.g., missing information, data errors or faulty reasoning). Trusting the first draft of the summary to AI may paper over those inconsistencies, or underplay salient ideas, so that they never get critically examined in the subsequent editing process.

How would we know whether AI had given us an advantage or had failed us? We could do the same task manually and compare the results, but that would take more time.

And if the AI worked well five times, would we bother checking the sixth? Skipping further checks might be tempting, yet we have no reason to expect consistently good output from AI across different inputs. If AI failed us the one time we did not check, the burden of that failure would be ours.

Assessing the value of AI

AI is not one homogeneous thing. And it is changing. So how do we know if using AI is a good idea?

I suggest that the decision to use AI — or any other tool — depends on the following principles.

  1. The tool must present some net advantage. (Otherwise, why bother?)
  2. We must be able to check the quality of the tool’s output. (Otherwise, we are shirking our responsibility to our client. And if checking the tool’s output takes longer than not using the tool, then principle 1 may fail.)
  3. We must actually understand the tool’s strengths and weaknesses in our specific application. (Otherwise, we risk failure on principles 1 and 2.)

The same tool-selection principles apply to both editors and engineers. Using AI in the production of engineering documents requires careful coordination among everyone involved. The impact of any failure along the way falls hardest on the professional engineer, however, regardless of who performs which function during the evolution of the documents and which tools they use in the process.

Furthermore, some engineering firms restrict the use of AI, often to keep their information out of AI training material. Editors must therefore discuss the scope of any AI tools they wish to use with the client before starting any editing for an engineering project.

See “The Effects of AI on Academic Editing” for a slightly different perspective.

___

The Editors’ Weekly is the official blog of Editors Canada.



About the author

Tim Green

Tim Green is an eclectic engineer, editor and storyteller who lives in Whitehorse and explores old copper mines. He’s also a shared recipient of the 1988 Nobel Prize for Peace. You can contact him through https://timmit.ca.

One Comment on “AI as a Tool in Engineering Editing”

  • Edit Or says:
    Thanks for this wonderful post! I particularly appreciate your approach to the use of these tools: their benefit must first be proven. We are inundated with hype that simply assumes these tools are useful and condemns anyone who suggests otherwise.

    I also fully agree with your point about process versus outcome. It is the writing and editing process that enables the thinking that is necessary to produce a viable text that is worth reading. Without the thinking, the end product will be flawed and, as you say, potentially damaging.

    Some kind of reckoning will eventually happen, I suppose, but the signs are not good: so many producers of good content have disappeared or abandoned their standards that the public and young people may simply get used to low-quality writing. In such a world, the output of these tools may be viewed as “good enough”. Human progress once included the idea of better quality, but now cost and quantity trump all else. We will see!

    Aside: I have a lot of respect for the engineering ethos and professional obligations, and I wish similar principles were applied as rigorously in other professions (I won’t name names, but there aren’t many, so…). We need this kind of accountability in society. A lot of the misrepresentation and outright lies we see today are the result of a collapse in accountability for those with decision-making power. So props to the engineers!
