Journal abstracts written with the help of artificial intelligence are perceived as more authentic, clear and compelling than those created solely by academics, a study suggests.
While many academics may scorn the idea of outsourcing article summaries to generative AI, a new investigation by researchers at Ontario’s University of Waterloo found peer reviewers rated abstracts written by humans—but paraphrased using generative AI—far more highly than those authored without algorithmic assistance.
Abstracts written entirely by AI, in which a large language model was asked to produce the summary itself, were rated slightly less favorably than human-written ones on qualities such as honesty, clarity, reliability and accuracy, although not significantly so, according to the study, published in the journal Computers in Human Behavior: Artificial Humans.
For instance, the mean honesty score for an entirely robot-written abstract was 3.32 on a five-point Likert scale (where 5 is the highest rating), compared with 3.38 for a human-written one.
For an AI-paraphrased abstract, it was 3.82, according to the paper, which asked 17 experienced peer reviewers in the field of computer game design to assess a range of abstracts for readability and to guess whether they were AI-written.
On some measures, such as perceived clarity and compellingness, entirely AI-written abstracts did better than entirely human-written summaries, although they were not seen as superior to AI-paraphrased work.
One of the study’s co-authors, Lennart Nacke, from Waterloo’s Stratford School of Interaction Design and Business, told Times Higher Education that the results showed “AI-paraphrased abstracts were well received,” but added that “researchers should view AI as an augmentation tool” rather than a “replacement for researcher expertise.”
“Although peer reviewers were not able to reliably distinguish between AI and human writing, they were able to clearly assess the quality of underlying research described in the manuscript,” he said.
“You could say that one key takeaway from our research is that researchers should use AI to enhance clarity and precision in their writing. They should not use it as an autonomous content producer. The human researcher should remain the intellectual driver of the work.”
Emphasizing that “researchers should be the primary drivers of their manuscript writing,” Nacke continued, “AI [can] polish language and improve readability, but it cannot replace the deep understanding that comes with years of experience in a research field.”
Stressing the importance of distinctive academic writing, a desire expressed by several reviewers, he added: “In our AI era, it’s perhaps more essential than ever to have some human touch or subjective expressions from human researchers in research writing.”
“Because this is really what makes academia a creative, curious and collaborative community,” Nacke said, adding that it would be a pity if scholars became “impersonal paper-producing machines.”
“Leave that last part to the Daleks,” he said.