
Academics Are Seeking Feedback From ChatGPT Because of a Lack of Peer Reviewers

A 'growing interest' in leveraging AI chatbots for peer review prompted one researcher to seek out ChatGPT's feedback on nearly 5,000 papers.
By Adrianna Nine

The dwindling number of scientific peer reviewers is having unexpected consequences for researchers. Faced with insufficient input from fellow scientists, some researchers are turning to ChatGPT for critiques of their work—and many report finding its feedback at least as helpful as conventional human review.

Academics can’t have their work published in respected scientific journals without first subjecting it to peer review. During this process, researchers from the same field examine the submitted work, comparing it with previously published papers and principles to add depth to—or completely invalidate—the authors’ findings. Peer review is widely thought to strengthen researchers’ work and promote originality, making it a longstanding standard for academic publishing. 

But there’s one little problem: Most journals rely on volunteer reviewers. This means only researchers capable of performing free labor can contribute to the peer review process, thus excluding those who don’t have the means (or incentive) to help. Whether it’s because journals run on thin margins or because they’re attempting to protect the integrity of each review, this model has resulted in a relatively small pool of peer reviewers—which naturally becomes even smaller when researchers require someone from their own field. As a resource, peer reviewers are in scant supply. 


James Zou, assistant professor of biomedical data science at Stanford University, wanted to test whether AI chatbots could feasibly replace humans during the peer review process. He and his colleagues gathered nearly 5,000 pre-reviewed research papers from the Nature publishing family and the International Conference on Learning Representations (ICLR). After feeding the PDFs to ChatGPT, Zou’s team compared the chatbot’s feedback with that previously provided by humans. Roughly a third of the points raised by ChatGPT matched those raised by human reviewers, with a slightly higher overlap on the ICLR papers. 

Zou made a point of reaching out to hundreds of researchers whose papers had been involved in his experiment. In a survey devised by Zou’s team, more than half (57.4%) of these researchers found ChatGPT’s feedback “helpful” or “very helpful.” A wide majority (82.4%) even found ChatGPT’s feedback more useful than that of “at least some” human reviewers. Unfortunately, the survey did not account for whether ChatGPT’s feedback was sometimes erroneous—a significant caveat in the age of AI chatbot misinformation.

Zou’s research—ironically, available only on arXiv because it’s still awaiting peer review—reveals an interesting dilemma. Is ChatGPT actually better at providing researchers with feedback than those researchers’ own colleagues are? Or is the prevailing unpaid peer review model preventing skilled reviewers from providing high-quality input?
