"Ethical framework" to guide use of AI in research


Researchers from a group of prestigious universities have devised guidelines for using large language models (LLMs) in academic writing. 

As LLMs become more common and easier to use, researchers around the world are turning to them for help with writing academic papers, especially for brainstorming and drafting content. However, this raises concerns about plagiarism, authorship attribution, and the overall trustworthiness of academic work.

The new research, from authors at the University of Oxford, the University of Cambridge, the University of Copenhagen, the National University of Singapore, and other leading institutions – and published in Nature Machine Intelligence – highlights three criteria that maximise the beneficial impacts of LLMs on scientific advancement and academic equity:

  • Human vetting and guaranteeing of accuracy and integrity;
  • Ensuring substantial human contribution to the work; and
  • Appropriate acknowledgment and transparency of LLM use.

The authors also created a standard template for acknowledging the use of LLMs, which researchers can use when submitting their papers. This template will make it easier for researchers to follow ethical guidelines for AI-assisted writing and improve transparency about when and how LLMs have been used.

Speaking about the guidelines, co-author Professor Julian Savulescu, of The Uehiro Oxford Institute, said: ‘Large Language Models are the Pandora's Box for academic research. They could eliminate academic independence, creativity, originality and thought itself. But they could also facilitate unimaginable co-creation and productivity. These guidelines are the first steps to using LLMs responsibly and ethically in academic writing and research.’

Co-author Dr Brian Earp, of The Uehiro Oxford Institute, said: ‘It's appropriate and necessary to be extremely cautious when faced with new technological possibilities, including the ability for human writers to co-create academic material using generative AI. This is especially true when things are scaling up and moving quickly. But ethical guidelines are not only about reducing risk; they are also about maximizing potential benefits.’

Professor Timo Minssen from the University of Copenhagen said: ‘Guidance is essential in shaping the ethical use of AI in academic research, and in particular concerning the co-creation of academic articles with LLMs. Appropriate acknowledgment based on the principles of research ethics should ensure transparency, ethical integrity, and proper attribution. Ideally, this will promote a collaborative and more inclusive environment where human ingenuity and machine intelligence can enhance scholarly discourse.’

This new research presents opportunities for academic communities worldwide and can be applied across all academic disciplines.

The paper ‘Guidelines for ethical use and acknowledgement of large language models in academic writing’ has been published in Nature Machine Intelligence.
