Guidelines for Generative AI Usage
Developments in generative artificial intelligence (AI) tools, such as large language models (LLMs), are transforming the way publications are produced. We encourage the use of these emerging technologies in a responsible manner. Our aim is that such AI tools primarily strengthen researchers’ own capacity to create high-quality scientific work. For instance, AI tools can help researchers arrive at new ideas and improve self-written texts, especially for non-native speakers of English. At the same time, AI tools raise questions about what exactly constitutes their responsible use.
This page will be updated with new information as the landscape evolves.
Authors must comply with the guidelines on the use and disclosure of content generated by artificial intelligence (AI) specified in the IEEE Publication Services and Products Board Operations Manual:
The use of AI systems for editing and grammar enhancement is common practice and, as such, is generally outside the intent of the above policy. In this case, disclosure as noted above is recommended.
With this document we want to provide more insight into these rules and offer more concrete interpretations, from a RAS perspective, for reviewers and authors.
For the task of reviewing submitted manuscripts, the above rules are very clear: using AI is NOT allowed. Violations will be reported to the IEEE RAS Publication Ethics Committee and the IEEE Publication Committee.
Under no circumstances is it permitted to use AI tools to fabricate or manipulate data, code, figures, or other research output with the intention of falsifying or distorting research results.
Still, the use of AI tools is allowed and even encouraged in the process of writing and revising a paper, within limits. Researchers remain fully responsible for AI outputs, as well as for the appropriateness of the research process in which AI is used; responsibility cannot be passed on to AI tools. In practice, this means that researchers can be held accountable for inappropriate use of AI tools, such as failure to comply with the rules of scientific integrity (plagiarism, citing sources, etc.). This rule is in accordance with the position of the Committee on Publication Ethics (COPE) that “authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.” (COPE: Committee on Publication Ethics [Internet]. [cited 29 June 2023]. Authorship and AI tools. Available from: https://publicationethics.org/cope-position-statements/ai-author)
ALLEA’s 2023 European Code of Conduct for Research Integrity requires researchers to report the use of AI tools in accordance with the standards of the scientific field to which they belong (ALLEA. The European Code of Conduct for Research Integrity, Revised Edition 2023. Available from: https://allea.org/wp-content/uploads/2023/06/European-Code-of-Conduct-Revised-Edition-2023.pdf). For RAS publications, AI may not be listed, or cited, as an author. Instead, its use must be described in the dedicated section of the manuscript and/or in the acknowledgements.
Figures intended purely as illustration (for example, an impression of the future use of a robotic device) may be generated, but the AI tool employed must be named and the prompt used to generate the figure must be provided, both in the figure caption. Here too, the authors remain responsible for any copyright infringement by the AI tool. The authors must also ensure, e.g. by prominent mention in the captions, that readers cannot accidentally mistake the generated impression for real contributions, such as actually developed technology or photos of experiments that were not executed. Authors must further ensure that the generated impression is in line with realistic expectations for any future contribution.
AI-enhanced editing of photos must also not give readers an incorrect impression, even accidentally. For example, it is allowed to remove distracting clutter in the background that is unrelated to the presented technology. Any editing of photos (performed by AI or not) must be stated in the figure caption in such a way that it is clear to readers.
The use of AI to help authors develop an overview of the state of the art, as a basis for the introduction of a manuscript, is allowed. However, researchers must implement controls on the output of AI tools, including explicit verification steps. One should always locate the original sources and read them to verify the information for correctness, especially to avoid so-called hallucinations. Original ideas must also be properly attributed to their creators, not to secondary literature.
Especially for non-native speakers, AI tools can improve the readability of a text: for example, using AI as a spelling corrector, to reduce the length of a text (e.g. to comply with page constraints), or to translate text. Here too, the rule remains that the authors are responsible for the final result.