Good Practices for Using Artificial Intelligence When Writing Scientific Manuscripts

Creative science depends on good tool use practices

April 2023

The arrival of artificial intelligence (AI)-powered language tools such as ChatGPT has generated an explosion of interest globally. Many scientific researchers and universities around the world have expressed concern about ChatGPT’s potential to transform scientific communication before we have had time to consider the ramifications of such a tool or to verify that the text it generates is actually correct.

The human-like quality of the text produced by ChatGPT may mislead readers into believing that it is of human origin. It is now evident, however, that the generated text may be riddled with errors, may be superficial, and may include false references and inferences. More importantly, ChatGPT sometimes makes connections that are meaningless and false.

We have prepared a brief summary of some of the strengths and weaknesses of ChatGPT (and future AI language bots) and conclude with a set of best practice recommendations for scientists using such tools at any stage of their research, particularly during manuscript writing. In its current incarnation, ChatGPT is simply an efficient language bot that generates text using linguistic connections. It is, at present, "just a giant autocomplete machine". It should be acknowledged that ChatGPT draws on its existing database and content and, at the time of writing this editorial, does not include information published after 2021, which restricts its usefulness for writing up-to-date reviews, perspectives, and introductions. ChatGPT is therefore a poor fit for reviews and perspectives, because it lacks the analytical capabilities and experience that scientists are expected to bring.

The most important concern is that these AI language bots are unable to understand new information, generate insights, or provide deep analysis, which limits the discussion within a scientific article.

While the results may appear well formulated, they are nevertheless superficial, and excessive confidence in them could stifle creativity across the entire scientific enterprise. AI tools are suitable for regurgitating conventional wisdom, but not for identifying or generating unique results.

Recommendations for the use of AI in scientific communication

  1. Acknowledge the use of an AI/GPT bot in preparing the manuscript. Clearly indicate which parts of the manuscript used the language bot’s output, and provide the prompts and questions and/or transcripts in the Supporting Information.
     
  2. Remind your co-authors and yourself that, at best, the output of a chat/GPT bot is simply a very early draft. The output is incomplete and may contain incorrect information, and each sentence and statement must be considered critically. Check, check, and check again. And then… check again.
     
  3. Don’t use word-for-word text from ChatGPT. They are not your own words. The bot may also have reused text from other sources, leading to unintentional plagiarism.
     
  4. Verify any citation recommended by an AI chat/GPT bot against the original literature, as the bot is known to fabricate citations.
     
  5. Do not include ChatGPT, or any other AI-based bot, as a co-author. It cannot generate new ideas or compose a discussion based on new results; that is our proper domain as humans. It is simply a tool, like many other programs that help with the formulation and writing of manuscripts.
     
  6. ChatGPT cannot be held responsible for any ethical statements or breaches; as it stands, all authors of a manuscript share that responsibility.
     
  7. And most importantly, don’t let ChatGPT stifle your creativity and deep thinking. Use it only to expand your horizons and generate new ideas!