I am an editor and professional engineer. I teach other engineers how to write good engineering reports. In every session, students ask about the role of AI, if any, in the writing and editing process.
So what does AI mean for an editor working on engineering documents?
Why people use AI
AI is a tool. Why would an engineer or an editor use any tool?
One reason would be to extend our capabilities so we can do something that we are otherwise unable to do — translating a technical report to a different language, for example. This is potentially dangerous for professionals because they hold responsibility for their work regardless of how it is done. Consider the lawyers who never checked the relevant but bogus precedents cited by their hallucinating AI. If we have misplaced trust or insufficient expertise to confirm the quality of the tool’s output, then that is a problem.
A second reason for using a tool would be to save time on an onerous task that we know very well how to do. Using a calculator is a non-AI example in engineering. Of course, the engineer still uses their professional judgment and does an order-of-magnitude check to validate the calculator’s results.
I have spoken with non-engineering editors about how they use AI. Many of the applications they describe fall into the second category above. An editor might use AI for a first pass to create a summary or remove egregious errors in seconds. The output of that first pass then becomes the focus of the editor’s hands-on editing skills in the second phase.
The role of AI in editing engineering work
As a professional engineer, I have a nagging worry about using AI for tasks like composing the first draft of an executive summary or taking the first run through a draft paper written by an inexperienced engineer.
The most important parts of any document are the ideas it contains. Yes, clear communication is important too, but only insofar as it expresses the ideas correctly and compellingly. If the ideas are fragmented, then tinkering with their expression only goes so far. In the extreme for engineering writing, a well-edited presentation of poor ideas can incur professional liability and pose a danger to the public.
In my experience as both editor and engineer, the journey (the writing process) is much more important than the destination (the well-edited written report). Yes, the latter is essential, but it cannot take form until the former is done and done well. One danger, then, may be using AI too early in the writing or editing process, analogous to diving into copy editing of something that really needs developmental editing first.
A real person struggling to write an executive summary might uncover shortcomings (e.g., missing information, data errors or faulty reasoning) in the body of the study. Trusting the first draft of the summary to AI may buff over those inconsistencies, or under-appreciate salient ideas, which then never get critically examined in the subsequent editing process.
How would we know whether AI had been an advantage or had failed us? We could do the same task manually and compare the results, but that would take more time.
And if the AI worked well five times, would we bother checking the sixth time? Skipping further checking might be tempting, yet we have no reason to expect consistent quality of output from AI given different inputs. If AI failed us the one time we did not check, then the burden of failure would be ours.
Assessing the value of AI
AI is not one homogeneous thing. And it is changing. So how do we know if using AI is a good idea?
I suggest that the decision to use AI — or any other tool — depends on the following principles.
1. The tool must present some net advantage. (Otherwise, why bother?)
2. We must be able to check the quality of the tool’s output. (Otherwise, we are shirking our responsibility to our client. And if checking the tool’s output takes longer than not using the tool, then principle 1 may fail.)
3. We must actually understand the tool’s strengths and weaknesses in our specific application. (Otherwise, we risk failure in principles 1 and 2.)
The same tool-selection principles apply to both editors and engineers. The use of AI in the production of engineering documents requires careful coordination among everyone involved. The impact of any failure along the way is greater for the professional engineer, however, regardless of who performs what function during the evolution of the documents and what tools they use in the process.
Furthermore, some engineering firms impose restrictions on the use of AI, often to prevent their information from being integrated into AI training material. Therefore, editors must specifically discuss the scope of the AI tools they wish to use, if any, with the client before starting any editing for an engineering project.
See “The Effects of AI on Academic Editing” for a slightly different perspective.
The Editors’ Weekly is the official blog of Editors Canada.