From fatigue to fulfilment

Ashutosh Ghildiyal discusses combating reviewer burnout in scholarly publishing

We all know what reviewer burnout looks like: reviewers become overwhelmed by the number of review requests, decline invitations, or take too long to return reviews. However, those behaviours are just symptoms of burnout.

Burnout – which manifests as a lack of interest, procrastination, or sub-par reviews – stems from a lack of motivation. It is not only physical but also psychological and cognitive, and it often arises from engaging in activities that feel devoid of meaning. Fatigue or burnout is generally caused not by workload alone, but by a lack of enjoyment in the work.

This lack of pleasure can result from the absence of perceived rewards, both intrinsic and extrinsic. When the brain senses "there’s too much tedious work with no gain," burnout ensues. Ultimately, burnout is about a lack of meaning and fulfilment.

Addressing burnout requires reducing cognitive friction and creating space for experiential fulfilment. This happens when our attention is freed, allowing us to explore meaning, significance, and new discoveries in our work. It also involves the equitable distribution of work. Expanding the reviewer pool to include more contributors from non-Western countries, such as India and China, can help. Currently, most active reviewers are from Western countries, leaving a large untapped capacity in other regions.

Key challenges

Beyond expanding the reviewer pool, what else can be done to address reviewer burnout? I see two key challenges:

  1. Getting reviewers to accept review invitations.

  2. Getting reviewers to start and complete reviews on time.

Let’s address the first challenge: How can we make reviewers want to participate in peer review?

According to an article published by Springer Nature: “Peer review is increasingly perceived as a distraction, a detour from the path to recognition and career progression.” This shift in perception has led to a gradual decline in enthusiasm for participating in the peer review process, as the potential rewards seem to pale compared to the effort and time invested.

Why is peer review seen as a distraction? Reviewing is hard work – it burns mental energy, attention, and cognitive resources. In today’s world, where attention is scarce, asking for rigorous focus without making the effort worthwhile doesn’t get far. If researchers don’t see value in reviewing – if the effort-to-reward ratio doesn’t add up – they may view it as a distraction and avoid it.

So, we need to answer the question: “What’s in it for them?” Good karma just doesn’t cut it anymore. Reviewers need to want to do it. Being a researcher is a career with its own struggles for recognition and success. If peer review doesn’t help reviewers with their careers, they may not be motivated to do it. Peer review needs to be more rewarding – both intrinsically and extrinsically.

Monetary incentives are secondary; recognition is more important. The fundamental need of human beings is recognition – the desire to feel important and appreciated. Currently, all the glory goes to the authors, and the profits to the publishers. But what do reviewers get? Recognition, such as linking their name to the review, can be a good incentive, but we need to do more. Reviewers should feel that reviewing enhances their self-worth and social standing. An invitation to review should evoke the same sense of importance as an invitation to write an editorial or opinion piece.

If we don’t find an impactful way to recognise reviewers, we may have no option but to move toward mostly AI-based peer review in the near future. However, whether authors will accept AI as peer reviewers is another question – one we would need to explore through low-risk experiments. If authors reject AI review, we’ll need to double down on efforts to expand the reviewer pool, train more researchers, and offer strong incentives to motivate peer reviewers.

Marketing peer review better could also make it more attractive. Targeted campaigns could engage researchers and make peer review an aspirational goal. Thoughtful campaigns, similar to iconic ones like "Got Milk?" or "Think Different," could create excitement and pride around peer review. Global initiatives like Peer Review Week should be expanded to increase participation, and the perception that reviewers are exploited must be addressed by making the process more rewarding – a true win-win.

Time is of the essence

Now, to the second challenge: getting reviewers to start and complete reviews on time. This can be partly solved by providing stronger incentives. But to reduce review delays, we must help reviewers spend less time on peer review.

Some argue that scientists spend too much time reviewing papers when they could be focusing on research. I agree – we need to cut down that time to a few hours, and AI can help achieve this.

AI has arrived just in time, as the scale of scholarly publishing has become overwhelming. What’s truly taxing is not the cognitive effort, but the cognitive friction caused by meaninglessness – by tasks that provide no satisfaction. If we remove the tedious, repetitive, cognitively meaningless tasks from peer review, we can alleviate burnout and create space for more experiential fulfilment.

Using AI before and during the review process – at the editorial and review stages – can significantly reduce both cognitive friction and the time it takes to complete peer reviews. AI can improve the matching of manuscripts with reviewers, ensuring reviewers receive papers that align with their expertise and interests. This makes the process more engaging and intellectually rewarding, leading to faster turnarounds. Reducing review times from months to days, or even hours, should be the goal.
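To make that concrete, here is a minimal sketch of expertise-based matching, assuming each reviewer's profile is text drawn from their recent abstracts. All names and data are hypothetical, and it uses simple TF-IDF similarity where a production system would likely use dense embeddings plus conflict-of-interest and workload checks.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reviewer profiles: text drawn from recent abstracts.
reviewers = {
    "Reviewer A": "graph neural networks for molecular property prediction",
    "Reviewer B": "randomised controlled trials and clinical epidemiology",
    "Reviewer C": "large language models for scholarly text and peer review",
}

manuscript = "using language models to support peer review of scholarly text"

# Project reviewer profiles and the manuscript into one TF-IDF space.
vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform([*reviewers.values(), manuscript])

# Similarity of the manuscript (last row) to each reviewer profile.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Invite the best-matched reviewers first.
for name, score in sorted(zip(reviewers, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")
```

Matching of this kind also speaks to the engagement point above: reviewers who receive papers squarely within their interests are more likely to accept the invitation and to enjoy the work.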

How do we use AI wisely, balancing its usefulness with potential unintended consequences? AI is proficient at data processing and verbally dexterous, but it lacks true intelligence – the kind that includes knowing when not to cooperate, understanding nuance, being sceptical, and showing empathy. While AI can assist with pattern recognition based on existing knowledge, it cannot discover the unknown in the way humans can through insight.

AI can be a highly capable tool for improving manuscript screening and reviewer matching. It can assist reviewers by reducing cognitive load, allowing them to focus on the intellectual aspects of reviewing. However, we need to ensure that reviewers don’t become overly reliant on AI. AI-generated comments can sometimes resemble those of human experts, but the absence of genuine human attention could be detrimental. Reviewers must still critically engage with papers themselves.
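On the screening side, here is a sketch of the kind of tedious check that automation can take off reviewers’ plates before a paper ever reaches them. The checklist items are illustrative assumptions; real editorial screens are journal-specific and far more extensive.

```python
import re

# Hypothetical checklist of statements a journal requires.
REQUIRED_STATEMENTS = {
    "data availability": r"data availability",
    "ethics approval": r"ethic(s|al)",
    "competing interests": r"conflict of interest|competing interests",
}

def screen(manuscript_text: str) -> list[str]:
    """Return the names of required statements that appear to be missing."""
    text = manuscript_text.lower()
    return [
        name
        for name, pattern in REQUIRED_STATEMENTS.items()
        if not re.search(pattern, text)
    ]

sample = "We report a trial... Data availability: on request. Ethics approval was obtained."
print("Flag for editorial follow-up:", screen(sample) or "none")
```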

Finally, the question remains: How can we ensure quality in peer review? Several solutions emerge, including reviewer training, reviewer ratings, and AI tools to assist with writing reviews. Some reviewers, both native and non-native English speakers, may struggle to articulate their thoughts clearly. AI can help structure and edit their notes into coherent reports.
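As one hedged illustration of that drafting assistance, the sketch below turns a reviewer's rough notes into a structured report skeleton. The use of the OpenAI Python client, the prompt wording, and the model name are placeholders rather than a prescribed tool, and – as argued below – publishers would want to run this inside their own secure systems rather than have reviewers paste notes into public services.

```python
from openai import OpenAI  # illustrative choice; any capable LLM could stand in

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical rough notes a reviewer might jot down while reading.
notes = """
- sample size justification missing from methods
- figure 3 axis labels unreadable
- abstract overstates what the results show
- related-work section omits two recent key papers
"""

prompt = (
    "Rewrite these rough reviewer notes as a structured peer-review report "
    "with Summary, Major Comments, and Minor Comments sections. Preserve the "
    "reviewer's judgements exactly; do not invent new criticisms.\n" + notes
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The crucial design constraint sits in the prompt: the tool should polish articulation, not substitute its own judgement for the reviewer’s.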

Reviewer fatigue is already driving some researchers to use AI for peer review. Instead of letting reviewers use AI independently, publishers should offer secure, customised tools specifically designed for peer review. But ultimately, peer review is about trust – the trust that an expert has vetted the work. Ensuring research integrity and reproducibility is vital, and AI can assist with this. However, peer reviewers must still focus on novelty and quality, helping improve the paper.

AI is no longer optional but necessary due to the sheer volume of global publications and increasing reviewer fatigue. From manuscript screening to reviewer support and technical assessment, AI can reduce the time and effort involved in peer review. However, whether 100% AI-based peer review will be accepted remains uncertain, as researchers still prefer human feedback.

In conclusion, AI should assist with peer review, but human reviewers are essential for ensuring quality. AI-generated reports should not be labelled "peer review" but could instead be termed "AI-based technical assessments" or "developmental reports." We need to strike a balance where AI supports the process and reduces administrative burdens, while human reviewers bring the critical insights that only they can provide.

Ashutosh Ghildiyal is VP, Strategy and Growth at Integra Software Services