ChatGPT’s capabilities are undeniable. However, when asked to edit the language of scientific papers, it can make mistakes. Crucially, ChatGPT predicts the most likely next word; it cannot deduce the author’s intended meaning in the way an experienced subject-matter expert can, and it cannot leave comments asking for clarification as a human editor would. In this blog, we present the mistakes we found when we tested ChatGPT against our human editors.
ChatGPT can change the author’s intended meaning.
In the example above, ChatGPT changed “poor activity” to “limited mobility,” potentially changing the author’s meaning. Our editor was unsure of the expression “poor activity” in the context presented here and therefore added a comment asking the author to explain their intended meaning. This illustrates one of the key downsides of ChatGPT: it cannot engage in a back-and-forth with the author to ensure their intended meaning is conveyed; it simply presents its best guess.
In the example above, ChatGPT excluded the text “and the inferior vena cava,” potentially changing the author’s meaning.
In the example above, ChatGPT changed the text “after the second round of hospital consultation” to “after consulting the hospital for a second opinion,” slightly changing the author’s meaning. The difference here is subtle but potentially important. The unedited version, i.e., “after the second round of hospital consultation,” implies that the second round of consultation had been planned as part of the methodology; ChatGPT’s edit suggests that the “second opinion” was requested only after receiving the first consultation.
In the example above, ChatGPT missed the author’s intended meaning, i.e., a description of the study’s criteria. Instead, ChatGPT presented the information as a general truth.
In the example above, the authors were trying to give an overview of the MTT assay. ChatGPT’s edit, however, made it appear that the authors had used the MTT assay to measure cell survival and growth in the study, which is incorrect. ChatGPT’s rewording of the last sentence, “This process is not available in dead cells,” is also poorly phrased.
In the example above, the authors were trying to describe previously completed work. ChatGPT misinterpreted this and presented the information as if it were a description of the study’s aims. In addition, our editor left a comment suggesting the authors add a citation for the work mentioned.
ChatGPT can add words inappropriately.
In the example above, ChatGPT inappropriately added the word “initially.”
In the example above, ChatGPT inappropriately added the word “however,” thus implying that an increase in antenna size is a downside of the approach, which was not the author’s intended meaning.
In the example above, ChatGPT inappropriately added the word “initially.” In addition, “After unearthing 1-2 cm of the broad bean seedlings …” is poorly phrased, and ChatGPT failed to change “Aphis” to “aphids.”
In the example above, ChatGPT added a description of why the methodology, “qPCR screening,” was used, resulting in some redundancy.
ChatGPT can make poor word choices.
In the example above, ChatGPT inappropriately changed “allogenic” to “exogenous,” possibly because “allogenic” was misspelled in the original, unedited version.
In the example above, ChatGPT changed “single use” to “individually,” making the red text nonsensical.
In the example above, our editor improved the clarity of the text by changing “intercept” to “trap.” Also, ChatGPT failed to correct the typo, i.e., to change “NF elements” to “NM elements.” Our editor knew to correct this because they had access to the full paper; ChatGPT can only process a limited amount of text at a time.
In the example above, ChatGPT failed to correct the inappropriate use of the word “construction” and failed to fix the ungrammatical phrase “under challenging habitats.”
ChatGPT can fail to improve clarity.
In the example above, our editor improved the overall flow and clarity of the text far more effectively than ChatGPT did. In addition, ChatGPT failed to use numerals with units of measurement, e.g., “one year” should be presented as “1 year.”
In the example above, our editor improved the clarity of the text more effectively than ChatGPT did.
In the example above, our editor made it very clear what the authors did. By retaining the awkward phrasing “comparative experiment,” ChatGPT failed to improve the clarity of the text.
ChatGPT can use the wrong tense.
In the example above, ChatGPT changed the text to the present tense. Results are traditionally presented in the past tense.
ChatGPT can make subject-verb agreement errors.
In the example above, ChatGPT used the plural verb “were” with “1.8 grams of double-distilled water,” which is incorrect; a quantity treated as a single amount takes a singular verb.
In the example above, ChatGPT used the singular “undergoes” with the plural “viscera,” which is incorrect.
ChatGPT can delete information.
In the example above, ChatGPT deleted information, i.e., “softens berries.”
ChatGPT can inappropriately rearrange the text.
In the example above, ChatGPT rearranged the text to imply that the downside of the proposed method is that the profile of the antenna is increased. This does not convey the author’s intended meaning.
ChatGPT can make other grammatical and phrasing errors.
In the example above, “… has unclear toxicity and mechanism” is poorly phrased.
In the example above, “… operates on a similar mechanism” is awkward phrasing.
In the example above, ChatGPT failed to adhere to a typical journal formatting rule that specifies Latin species names should be written out in full when they begin a sentence.
A compound modifier in which the first word is an adverb ending in “-ly” should not be hyphenated. ChatGPT failed to correct this.
ChatGPT can do odd things with references.
In the example above, ChatGPT changed the in-text citations from numbers to author names; however, it cited the wrong references.
In the example above, ChatGPT added a reference that was not in the original, unedited version.
In the example above, ChatGPT removed the references “Lo et al., 2010” and “Tei et al., 2017,” possibly because the opening parenthesis was in the font SimSun in the original, unedited version.
In the example above, ChatGPT changed the numbered in-text citation to “Author, 2021.” This happened several times during our testing of ChatGPT.
In the example above, ChatGPT changed the in-text citation from “Linlin et al.” to “Chen et al.” It is understandable why ChatGPT may have done this; however, it is not clear what the authors meant here. Our editor left a comment asking the authors to clarify.
In the example above, ChatGPT changed “Lffler” to the more common spelling “Löffler”; however, this change was incorrect in this case.
In conclusion …
Large language models like ChatGPT will undoubtedly continue to improve. For now, however, human editors remain the best way to get your papers publication-ready.