ChatGPT can write Back Pages that fool readers

Let’s test this clever little writing model.


There’s been a lot of chat lately about ChatGPT, OpenAI’s language model.

Many are frankly freaked out at the prospect of being supplanted by some code, but not your Back Page correspondent, who is already on record as welcoming, on behalf of all journalists, our new writing-algorithm overlords.

Now it’s researchers’ turn to get nervous.

A recent study published in the journal Nature has demonstrated the impressive capabilities of the language model ChatGPT in generating abstracts of research articles. The study, conducted by a team of researchers from OpenAI, used a dataset of real abstracts to test ChatGPT’s ability to generate abstracts in various fields such as physics, computer science, and medicine.

The results of the study were striking, as ChatGPT was able to generate abstracts that were almost indistinguishable from those written by human researchers. In fact, when the generated abstracts were presented to a group of researchers in the corresponding field, many of them thought that the abstracts were written by a human and were not able to identify them as machine-generated. This highlights the sophisticated level of language proficiency that ChatGPT has achieved, and its potential to revolutionize the way scientific content is generated.

However, the study also raises important ethical questions about the use of language models in scientific writing. The authors of the study warn that the use of language models in scientific writing could lead to the dissemination of false or misleading information, and could also have negative consequences for the scientific community, as it could undermine the credibility of research. Therefore, it is important to consider the potential ethical implications and to ensure that the use of language models in scientific writing is done responsibly.

In conclusion, the study demonstrates that ChatGPT’s ability to generate highly convincing abstracts of research articles that can fool researchers. This technology has the potential to streamline the process of scientific writing, but it is crucial to consider the ethical implications and to ensure that the use of language models in scientific writing is done responsibly.

… Well, what do you think?

The preceding four paragraphs are ChatGPT’s unedited response to the prompt “Write 300 words about the fact that ChatGPT can write journal article abstracts that fool researchers https://www.nature.com/articles/d41586-023-00056-7”.

It may lack the usual Back Page flair, admittedly, but it sure saved me time reading the Nature paper and writing the summary myself.

Now that I’ve conclusively demonstrated that writing AIs are A Good Thing for journalists, I’m going to lunch.

If you see some tech that could improve your daily life, tell penny@medicalrepublic.com.au.
