AI in research and publishing workflows: a new paradigm


Dave Flanagan outlines how artificial intelligence is being used to enhance productivity and innovation

Artificial intelligence, particularly generative AI (GenAI), has become a game-changer in research and publishing workflows.

These systems, trained on vast datasets, make predictions and assist in decision-making. GenAI models like GPT-4 generate human-like text by predicting the next word in a sequence, aiding in tasks such as writing papers, answering questions and holding dialogues that mimic human creativity and communication.
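To make that "predicting the next word" idea concrete, here is a minimal sketch using the small, openly available GPT-2 model via the Hugging Face transformers library. It is purely illustrative, not one of the systems discussed in this article, and the prompt is invented.

    # Minimal illustration of next-token text generation.
    # Assumes the Hugging Face "transformers" package is installed;
    # GPT-2 is used only as a small, openly available stand-in.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Peer review is essential to scholarly publishing because"
    # The model extends the prompt one predicted token at a time.
    result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    print(result[0]["generated_text"])

Scaled up to far larger models and datasets, this same next-token mechanism underpins the drafting, question-answering and dialogue capabilities described above.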

I see publishers embracing AI in a safe and ethical way, and we of course recognize that AI assists people; it does not replace them. This is reflected in Wiley's framework and philosophy: our AI tools are designed to be human-centric and transparent about their responsible use (and limitations), while being adaptable and easy to integrate into existing workflows. This approach ensures that AI's outputs are critically assessed and verified by humans, maintaining the integrity of the publishing process.

Publishers are also working with industry bodies such as the STM Association and the Committee on Publication Ethics (COPE) to adopt best practices and standards for the safe and ethical use of AI tools.

What does this mean for researchers in practice?

The publishing industry is at the very beginning of its journey into GenAI, so right now we are being vigilant. Perhaps, in five years, we'll look back and think we were being too cautious. Only time will tell.

For now, STM has published general guidance, Generative AI in Scholarly Communications, which addresses the role of generative AI technologies for all participants in scholarly publishing. Publishers like Wiley also have best-practice guidelines that cover these topics.

For authors, any use of a GenAI tool must be described in their paper in detail (e.g. in the methods section, the acknowledgements or via a disclosure statement). The author also remains fully responsible for the accuracy of any information produced by an AI tool and for correctly referencing any supporting work, as we know that GenAI can hallucinate and make things up. Tools used simply to improve spelling and grammar are fine. Of course, GenAI tools must not be used to create, alter or manipulate original research data and results.

Tools cannot take responsibility for content. GenAI cannot initiate original research without human direction, be accountable for what is produced, or have legal standing or the ability to hold or assign copyright. Therefore, in accordance with COPE's position statement on Authorship and AI tools, these tools cannot fulfil the role of an author of an article.

What does this mean for peer review?

It’s vital to uphold confidentiality in the peer review process; only the editor, the peer reviewer and the author know what is in the paper. Editors or peer reviewers should not upload manuscripts (or any parts of manuscripts including figures and tables) into GenAI tools or services. GenAI tools may use input data for training or other purposes, which could breach the confidentiality of the peer review process, privacy of authors and reviewers, and the copyright of the manuscript under review. However, a GenAI tool can be used by an editor or peer reviewer to improve the quality of the written feedback in a peer review report, and this should be declared in the report.

How can AI support peer review?

One way we are exploring how technology can support peer review is through our new AI-powered Papermill Detection Service. The tool scans submissions for signs of fabrication, comparing them with known papermill papers and checking for irregular patterns. It serves as an early-warning system, flagging potentially fraudulent papers for further scrutiny.
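Wiley's service itself is proprietary, so as a purely illustrative sketch of the underlying idea, a simple similarity check against a corpus of known papermill papers could look like the following. The TF-IDF approach, the threshold and the example texts are assumptions for illustration, not the actual implementation.

    # Illustrative sketch only: flag submissions that are unusually similar
    # to a reference corpus of known papermill papers. A production system
    # would combine many more signals than raw text similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def flag_suspect(submission_text, known_papermill_texts, threshold=0.8):
        """Return (is_suspect, best_score) for a new submission."""
        vectorizer = TfidfVectorizer(stop_words="english")
        # Fit on the known corpus plus the new submission (last row).
        matrix = vectorizer.fit_transform(known_papermill_texts + [submission_text])
        # Cosine similarity between the submission and each known papermill paper.
        scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
        best = float(scores.max())
        return best >= threshold, best

    # Example with invented placeholder texts.
    suspect, score = flag_suspect(
        "The long non-coding RNA XYZ promotes proliferation via miR-123.",
        [
            "The long non-coding RNA ABC promotes proliferation via miR-456.",
            "Circular RNA DEF promotes migration and invasion via miR-789.",
        ],
    )
    print(suspect, round(score, 2))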

Another AI-powered tool that we're developing at Wiley will support editors in identifying appropriate peer reviewers for a particular paper. This benefits editors and peer reviewers alike, making reviewer selection less time-consuming and cutting down on misguided review requests.

Similarly, when authors submit a paper that is not a good fit for the scope of the journal at hand, we can use AI to automatically suggest other potential outlets for publishing their research.
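Both reviewer matching and journal suggestion boil down to comparing a manuscript's text with candidate profiles, whether those profiles are a reviewer's previous publications or a journal's aims and scope. Wiley's tools are not public, but a hedged sketch of that general idea, using sentence embeddings from the open sentence-transformers library (the model name and the toy data below are illustrative assumptions), might look like this:

    # Illustrative sketch only: rank candidate reviewers (or journals) by
    # semantic similarity between a manuscript abstract and candidate profiles.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small open model, for illustration

    abstract = "We study transformer-based methods for detecting fabricated research papers."
    candidates = {
        "Reviewer A": "Machine learning for scientific text mining and research integrity.",
        "Reviewer B": "Total synthesis of natural products.",
        "Reviewer C": "Large language models and document classification.",
    }

    abstract_emb = model.encode(abstract, convert_to_tensor=True)
    profile_embs = model.encode(list(candidates.values()), convert_to_tensor=True)

    # Higher cosine similarity means a closer topical match.
    scores = util.cos_sim(abstract_emb, profile_embs)[0]
    ranked = sorted(zip(candidates, scores.tolist()), key=lambda item: item[1], reverse=True)
    print(ranked)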

Looking ahead, there are various ways that publishers can use AI to support peer review and the wider publishing process. We're exploring AI applications to:

  • Assist authors in improving the clarity and quality of their manuscripts;

  • Help editors identify the most suitable reviewers for each paper;

  • Streamline the formatting and reference checking process; and

  • Enhance the discoverability of published research.

Are there future opportunities for AI in peer review?

Yes! AI's potential in research publishing is vast. GenAI can support peer review in various ways, including by summarizing revision notes, comparing original and revised manuscripts, and verifying whether authors have adequately addressed reviewers' comments. This would significantly reduce the administrative burden on editors, allowing them to focus on areas that require human expertise, such as content development.
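As a very simple illustration of the "comparing original and revised manuscripts" step, even a plain text diff surfaces what changed between versions; a GenAI assistant would go further and relate each change back to the reviewers' comments. A minimal sketch, with invented example text:

    # Illustrative sketch: show what changed between an original and a revised
    # passage. A GenAI assistant could then summarize whether each reviewer
    # comment has been addressed; this sketch only surfaces the raw differences.
    import difflib

    original = [
        "We analysed 120 samples.",
        "Statistical significance was not assessed.",
    ]
    revised = [
        "We analysed 240 samples.",
        "Statistical significance was assessed with a two-sided t-test (p < 0.05).",
    ]

    for line in difflib.unified_diff(original, revised,
                                     fromfile="original", tofile="revised", lineterm=""):
        print(line)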

Similarly, an AI reviewer ‘coach’ could provide real-time feedback to reviewers while they are completing their review, helping them improve their reports. This would help ensure that the feedback authors receive is constructive and thorough, ultimately improving the quality of the research published.

How publishers are working to advance AI

Publishers need to do more than just set restrictions; we need to proactively partner with researchers to shape the responsible use of AI in academia. At Wiley, we are investing in building AI tools specifically designed for academic workflows, with transparency and integrity built in.

As we celebrate Peer Review Week 2024, it's evident that AI and technology are integral to the future of peer review. Publishers must empower researchers to leverage these powerful tools ethically and effectively, keeping the advancement of knowledge as our primary goal. 

Dave Flanagan is Senior Director, Data Science, at Wiley