From its earliest days, Publons™ has championed transparency in peer review. We believe that it is only by fully understanding the peer review process that we can support and recognise underappreciated peer reviewers, identify the faults that plague the system, and improve it.
One such fault is that predatory publishers and journals often claim legitimacy because they carry out peer review. While we know quite a bit about the academics who publish in predatory journals, surprisingly little research has been done on the scholars who review for them. It could be argued that by carrying out peer review, these reviewers give each article (and, in turn, the journal) a “stamp” of legitimacy.
To plug this gap in our collective knowledge, Publons has been working with the Swiss National Science Foundation on a paper that aims to find out whether there are patterns to reviewer characteristics for predatory journals, and how reviews for potentially predatory journals are distributed globally. The paper was originally due to be presented at the PEERE conference, but as that was cancelled it has been posted on bioRxiv.
We were able to provide valuable data on a previously unseen part of the scholarly ecosystem because researchers sometimes provide Publons with evidence of reviews they performed for journals that are considered predatory. Of course, Publons does not endorse these journals, but displays the reviews to provide greater transparency into their peer review practices and the communities surrounding them.
We provide clear guidance to academics on how to choose a reputable journal, but the line between predatory and legitimate publishers isn’t always black and white – sometimes academics thoughtfully and painstakingly carry out reviews without any idea that the journal is predatory, and we believe their work should be recognised regardless.
The investigation found:
- In a sample of 183,743 unique Publons reviews claimed by 19,598 reviewers:
  - 6,077 reviews (3.31% of all reviews) were conducted for 1,160 predatory journals.
  - 177,666 reviews (96.69% of all reviews) were claimed for 6,403 legitimate journals.
- The vast majority of scholars either never or only occasionally submitted reviews for predatory journals to Publons (89.96% and 7.55% of all reviewers, respectively). For the occasional group, predatory reviews made up between 1% and 25% of their claimed reviews.
- Smaller numbers of scholars claimed reviews predominantly (between 76% and 99% of their reviews) or exclusively for predatory journals (0.26% and 0.35% of all reviewers, respectively).
- These latter two groups of scholars were less experienced: they had a younger academic age, fewer publications and fewer reviews than the first two groups.
- Developing regions feature larger shares of reviews for predatory journals than developed regions.
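As a quick sanity check, the headline shares follow directly from the review counts reported above. This minimal sketch recomputes them (the figures are from the post; the variable names are ours):

```python
# Figures reported in the findings above
total_reviews = 183_743      # unique Publons reviews in the sample
predatory_reviews = 6_077    # reviews for 1,160 predatory journals
legitimate_reviews = 177_666 # reviews for 6,403 legitimate journals

# The two groups together account for every review in the sample
assert predatory_reviews + legitimate_reviews == total_reviews

predatory_share = 100 * predatory_reviews / total_reviews
legitimate_share = 100 * legitimate_reviews / total_reviews
print(f"{predatory_share:.2f}% predatory, {legitimate_share:.2f}% legitimate")
# → 3.31% predatory, 96.69% legitimate
```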
These findings reflect what we already know about the authors of papers published in predatory journals, who are also more likely to be inexperienced and from developing regions. While they don’t tell us anything about reviewers’ motivations, we believe they show a strong need for more education on what makes a journal predatory, and on why researchers need to carefully examine every review request they receive. This is a job for everyone involved in research – researchers, institutions, funders, publishers and peer review platforms alike.
For now, we’re proud to do our bit by shedding the first light on a very important topic – but watch this space for more research from Publons on review quality and the review infrastructure.
Andrew Preston is co-founder of Publons