The current system
The essential drawbacks of the current system of scientific publishing all stem from the particular way that peer review is used to evaluate papers. Specifically, the current system suffers from a lack of quality and transparency in the peer review process, a lack of publicly available evaluative information about papers, and excessive costs incurred by a system in which private publishers administer peer review. These problems can all be addressed by open post-publication peer review (OPR). Together with open access (OA), which is generally accepted as desirable, OPR will revolutionize scientific publishing.
Open post-publication peer review (OPR)
Open: Any scientist can instantly publish a peer review on any published paper. The scientist will submit the review to a public repository. Reviews can include written text, figures, and numerical quality ratings. The repository will link each paper to all its reviews, such that readers are automatically presented with the evaluative meta-information. In addition, the repository allows anyone to rank papers according to a personal objective function computed on the basis of the public reviews and their numerical quality ratings. Peer review is open in both directions: (1) Any scientist can freely submit a review on any paper. (2) Anyone can freely access any review.
Post-publication: Reviews are submitted after publication, because the paper needs to be publicly accessible in order for any scientist to be able to review it. Post-publication reviews can add evaluative information to papers published in the current system (which have already been secretly reviewed before publication). For example, a highly controversial paper appearing in Science may motivate a number of supportive and critical post-publication reviews. The overall evaluation from these public reviews will affect the attention given to the paper by potential readers. The actual text of the reviews may help readers understand and judge the details of the paper.
Peer review: Like the current system of pre-publication evaluation, the new system relies on peer review. For all of its faults, peer review is the best mechanism available for evaluation of scientific papers.
In the future system, peer review is more similar to getting up to comment on a talk presented at a conference. Because these reviews do not decide about publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where uncompelling arguments can prevent publication: editors largely rely on reviewers’ judgments, because there is too little time and no formal mechanism for judging the reviewers’ judgments.
Signed or anonymous: The open peer reviews can be signed or anonymous. Reviews will be digitally authenticated by public-key cryptography. In analyzing the review information to rank papers, signed reviews can be given greater weight if there is evidence that they are more reliable.
Paper selection by arbitrary paper evaluation functions (PEFs): The necessary selection of papers for reading can be based on the reviews and their associated numerical judgments. Any reader can define a PEF based on content and quality criteria and will automatically be informed about papers best conforming to his or her criteria. The PEF could, for example, exclude anonymous reviews, exclude certain reviewers, or weight evidence for central claims over the potential impact of the results.
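To make the idea of a PEF concrete, here is a minimal sketch in Python. The field names, the weighting scheme, and the function signatures are illustrative assumptions, not part of the proposal itself; a real repository would expose richer review metadata.

```python
# Sketch of a paper evaluation function (PEF).
# All field names and weights are illustrative assumptions.

def pef_score(reviews, include_anonymous=False, signed_weight=2.0):
    """Aggregate a paper's reviews into a single priority score.

    Each review is a dict with keys:
      'rating' - numerical quality rating (e.g. 0-10)
      'signed' - True if the reviewer signed the review
    """
    total, weight_sum = 0.0, 0.0
    for r in reviews:
        if not r['signed'] and not include_anonymous:
            continue  # this particular PEF excludes anonymous reviews
        w = signed_weight if r['signed'] else 1.0
        total += w * r['rating']
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

def rank_papers(papers, **pef_options):
    """Return paper ids ordered by descending PEF score.

    papers maps a paper id to its list of reviews.
    """
    return sorted(papers,
                  key=lambda pid: pef_score(papers[pid], **pef_options),
                  reverse=True)
```

A reader who distrusts anonymous reviews would call `rank_papers(papers)` with the defaults above; another reader could pass `include_anonymous=True` and obtain a different ranking of the same literature, which is exactly the plurality of perspectives the proposal envisions.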
Multiple lenses on the literature
The literature can be accessed through web portals that prioritize papers according to different PEFs. Organizations and individuals will define PEFs according to their own priorities. The free definability of PEFs will create a plurality of perspectives on the literature. The continual evolution of multiple PEFs renders the evaluation system “ungameable”, because PEFs can be adjusted in response to attempts to game the system.
The nature of a review in the current and future systems
I think there are some really innovative ideas here, but I also think there is some unfounded optimism: “Because these reviews do not decide about publication, they are less affected by politics.”
One might also argue “because these reviews decide whether a paper sinks or swims in the vastly enlarged ocean of publications produced under this system, they are much more prone to politics”.
As a specific example, if anonymous reviews are allowed, and Dr Bloggs sees that Dr Smith has published something very similar to his own recent publication, there will be a temptation for Bloggs to rubbish Smith’s work. One might argue that reviews too will be judged on how compelling their arguments are, but, at least in my experience of writing and reviewing, it is much easier to pick holes in something than it is to write something perfect, so there will often be scope to write well-argued and compelling yet unnecessarily negative reviews, if there is motivation to do so. Similarly, it would be relatively easy to “talk-up” the work of friends and colleagues, even while seeming to give a neutral and justified opinion.
That’s not to say that OPR couldn’t work, just that if it is to work, it needs issues like this to be carefully considered, not brushed aside or buried under overwhelmingly positive hype.
i agree with your comments. no system can eliminate the role of politics completely. and as you say, anonymous reviews could suffer from political motivations.
while anonymous reviews should be allowed, signed reviews will be key: if our reputation is on the line, we have a strong motivation to be fair and balanced. anonymous reviews can be downweighted or disregarded when enough signed reviews are available.
in signed reviews, the temptation to attack competitors and the temptation to praise friends are both strongly reduced. the reviewer’s reputation would suffer if it appeared that he or she were motivated by either.
moreover, social network analyses could be used to assess the independence of the reviews (e.g. by analyzing the degrees of separation of authors and reviewers in the co-publication graph).
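the independence check could be as simple as a breadth-first search over the co-publication graph. a rough sketch (the graph representation and any threshold applied to the result are my assumptions):

```python
from collections import deque

def degrees_of_separation(coauthor_graph, author, reviewer):
    """Shortest co-publication path between two scientists, via BFS.

    coauthor_graph maps each scientist to the set of people they
    have published with. Returns the path length, or None if the
    author and reviewer are unconnected.
    """
    if author == reviewer:
        return 0
    seen = {author}
    queue = deque([(author, 0)])
    while queue:
        person, dist = queue.popleft()
        for coauthor in coauthor_graph.get(person, ()):
            if coauthor == reviewer:
                return dist + 1
            if coauthor not in seen:
                seen.add(coauthor)
                queue.append((coauthor, dist + 1))
    return None
```

a PEF could then downweight reviews whose authors are only one or two steps from the paper's authors.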
After receiving some negative reviews on a paper without the reviewer giving any arguments whatsoever, I fully agree with your assessment and I really like your idea. I guess PLOS is starting with something similar by including a comments section with each paper.
The only problem I see with your approach is that it might only work with high-impact papers. I’m not quite sure about the distribution of downloads and citations for papers, but a large number of publications will probably be read by very few people. If that is the case, the first reviewer might have a huge influence on the publicity of the paper – if he trashes it, the paper’s PEF score might fall, thereby keeping the paper from ever coming to my attention. And as I would not see it, I couldn’t counter that first review.
Any ideas on how to tweak and improve the current system? Double-blind peer review, a comments section with every paper, and a scrapping of the numbers-for-tenure scheme (the distribution of one project over as many papers as possible is my second pet peeve) would be a start.
sven, good points. most papers will not be read and reviewed extensively. this is why we will still need peer-to-peer editors to assign 3 (not 1) initial reviewers. if the ratings of the initial reviewers are not favorable enough to get the paper read by a few more people (who may then join in and provide further ratings), a paper might get forgotten about — as you say. however, this is a problem that ultimately has no solution: our resources are limited. so for non-high-impact papers, my position is: let’s make the standard 3 reviews public reviews.
ps: plos allows commenting, a small step in the right direction. however, their review process is still prepublication and not open. papers do not come with their reviews and ratings attached to help us prioritize our reading. i like plos for being open access and high-quality in many respects. but open evaluation has yet to be implemented.