Peer review debate in London

April 3, 2014

There was a fun debate on peer review and its future at the City University, London yesterday. As usual at such events, most of the speakers were from the publishing industry, representing Nature, Elsevier, BMJ, and BMC. This illustrates how we are still largely looking to professionals from the industry to lead us into the future of scientific publishing — despite the fact that web technology gives us all the tools we need to ditch the lame game of secret peer review and define new rules, a better game, for science.

Although most of the speakers were from the industry, Sylvia Tippmann and the other organisers from the Science Journalism programme at City University, London invited two scientists, Peter Ayton and myself.

Here’s the video of the debate and here’s a 3-minute interview they did with me afterwards.




Scientist meets publisher, Episode 2: Open evaluation

February 12, 2013

Scientist meets publisher, Episode 1: Open access

February 12, 2013

An emerging consensus for open evaluation: 18 visions for the future of scientific publishing

October 29, 2012

Diana Deca and I edited a collection of 18 visions for open evaluation and the future of scientific publishing. Our Editorial summarising the whole collection is here. The 18 individual papers are here, including my own vision, which is an elaborated and updated representation of the ideas I started to develop on this blog.


The future of scientific publishing: Open post-publication peer review

November 12, 2009

The current system

The essential drawbacks of the current system of scientific publishing all stem from the particular way peer review is used to evaluate papers. In particular, the current system suffers from a lack of quality and transparency in the peer review process, a lack of availability of evaluative information about papers to the public, and excessive costs incurred by a system in which private publishers administer peer review. These problems can all be addressed by open post-publication peer review (OPR). Together with open access (OA), which is generally accepted as desirable, OPR will revolutionize scientific publishing.

[Figure 1: the current system]

Open post-publication peer review (OPR)

Open: Any scientist can instantly publish a peer review on any published paper. The scientist will submit the review to a public repository. Reviews can include written text, figures, and numerical quality ratings. The repository will link each paper to all its reviews, so that readers are automatically presented with the evaluative meta-information. In addition, the repository allows anyone to rank papers according to a personal objective function computed on the basis of the public reviews and their numerical quality ratings. Peer review is open in both directions: (1) any scientist can freely submit a review on any paper; (2) anyone can freely access any review.

[Figure 2: the future system]
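A minimal sketch of what such a repository might look like (all names and data structures here are my own hypothetical illustrations, not an existing system): each review carries free text and numerical ratings, the repository links each paper to all its reviews, and any reader can rank papers by any function of the public review information.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Review:
    reviewer: str   # empty string if anonymous
    text: str
    ratings: dict   # e.g. {"soundness": 9, "importance": 4}, 0-10 scales

@dataclass
class Paper:
    title: str
    reviews: list = field(default_factory=list)  # paper linked to all its reviews

# toy repository with two papers and their public reviews
repo = [
    Paper("Paper A", [Review("Dr. X", "solid methods", {"soundness": 9, "importance": 4})]),
    Paper("Paper B", [Review("", "bold claim, weak data", {"soundness": 3, "importance": 9})]),
]

def mean_rating(paper, scale):
    """Average one rating scale over all of a paper's public reviews."""
    vals = [r.ratings[scale] for r in paper.reviews if scale in r.ratings]
    return mean(vals) if vals else 0.0

# rank by a personal objective function, here simply mean soundness
for p in sorted(repo, key=lambda p: mean_rating(p, "soundness"), reverse=True):
    print(p.title, mean_rating(p, "soundness"))
```

The point of the sketch is only that, once reviews and ratings are public, ranking becomes a computation any reader can define for themselves.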

Post-publication: Reviews are submitted after publication, because the paper needs to be publicly accessible in order for any scientist to be able to review it. Post-publication reviews can add evaluative information to papers published in the current system (which have already been secretly reviewed before publication). For example, a highly controversial paper appearing in Science may motivate a number of supportive and critical post-publication reviews. The overall evaluation from these public reviews will affect the attention given to the paper by potential readers. The actual text of the reviews may help readers understand and judge the details of the paper.

[Figure 3: peer-to-peer editing]

Peer review: Like the current system of pre-publication evaluation, the new system relies on peer review. For all of its faults, peer review is the best mechanism available for evaluation of scientific papers.

[Figure 4: incoming reviews]

[Figure 5: reviews backing up a paper]

In the future system, peer review is more like getting up to comment on a talk at a conference. Because these reviews do not decide about publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This contrasts with secret peer review, where uncompelling arguments can prevent publication: editors largely rely on reviewers’ judgments, since there is too little time and no formal mechanism for judging the reviewers’ judgments.

[Figure 6: paper ratings with error bars]

Signed or anonymous: The open peer reviews can be signed or anonymous. Reviews will be digitally authenticated by public-key cryptography. In analyzing the review information to rank papers, signed reviews can be given greater weight if there is evidence that they are more reliable.
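To illustrate the idea of authenticating a review by public-key cryptography, here is a toy sketch using textbook RSA with tiny primes. This is utterly insecure and for illustration only; a real repository would use an established signature scheme and library. The mechanism shown, however, is the one the post assumes: the reviewer signs with a private key, and anyone can verify authorship with the public key.

```python
from hashlib import sha256

# textbook RSA key pair from tiny primes -- insecure, illustration only
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept secret by reviewer)

def sign(review_text, d, n):
    """Sign the hash of a review with the reviewer's private key."""
    h = int(sha256(review_text.encode()).hexdigest(), 16) % n
    return pow(h, d, n)

def verify(review_text, signature, e, n):
    """Anyone holding the public key (e, n) can check authorship."""
    h = int(sha256(review_text.encode()).hexdigest(), 16) % n
    return pow(signature, e, n) == h

review = "The control analyses in Fig. 2 are convincing."
sig = sign(review, d, n)
print(verify(review, sig, e, n))        # True: review is authentic
print(verify(review + "!", sig, e, n))  # False: any tampering breaks the signature
```

Authenticated signatures are what would allow signed reviews to earn, and carry, greater weight in the ranking.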


Paper selection by arbitrary paper evaluation functions (PEFs): The necessary selection of papers for reading can be based on the reviews and their associated numerical judgments. Any reader can define a PEF based on content and quality criteria and will automatically be informed about the papers best conforming to those criteria. The PEF could, for example, exclude anonymous reviews, exclude certain reviewers, or weight evidence for central claims over the potential impact of the results.
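A PEF of this kind might be sketched as follows. The interface and review format are hypothetical illustrations of the idea, implementing exactly the example criteria above: excluding anonymous reviews, excluding particular reviewers, and weighting evidence over impact.

```python
def pef(reviews, exclude_reviewers=(), allow_anonymous=True, weights=None):
    """Combine the public reviews of one paper into a single priority score.

    Each review is a dict: {"reviewer": str or None, "ratings": {scale: value}}.
    """
    if weights is None:
        # this reader weights evidence for central claims over potential impact
        weights = {"evidence": 0.7, "impact": 0.3}
    kept = [r for r in reviews
            if (allow_anonymous or r["reviewer"])        # drop anonymous if asked
            and r["reviewer"] not in exclude_reviewers]  # drop distrusted reviewers
    if not kept:
        return 0.0
    # weighted sum of the per-scale mean ratings
    return sum(w * sum(r["ratings"].get(k, 0) for r in kept) / len(kept)
               for k, w in weights.items())

reviews = [
    {"reviewer": "A. Smith", "ratings": {"evidence": 8, "impact": 2}},
    {"reviewer": None,       "ratings": {"evidence": 2, "impact": 9}},
]
print(pef(reviews))                         # all reviews, evidence-weighted
print(pef(reviews, allow_anonymous=False))  # signed reviews only
```

Two readers with different `weights` or exclusion lists would see the same literature ranked differently, which is the whole point of personal PEFs.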

[Figure 8: ready for high visibility]

Multiple lenses on the literature

The literature can be accessed through web portals that prioritize papers according to different PEFs. Organizations and individuals will define PEFs according to their own priorities. The free definability of PEFs will create a plurality of perspectives on the literature. The continual evolution of multiple PEFs renders the evaluation system “ungamable”, because PEFs can be adjusted in response to attempts to game the system.

[Figure 9: PEF lenses onto the literature]

The nature of a review in the current and future systems

[Figure: the nature of a review]

Brief argument for open post-publication peer review

Full argument for open post-publication peer review

Q: Can’t research blogging serve the function of open post-publication peer review?

March 8, 2009

Short answer: Research blogging is important, but we also need a crystallized scientific record of post-publication reviews.

Research blogging fills an important gap: between informal discussions and formal publications. Unlike a private informal discussion, a blog is publicly accessible. Unlike a scientific paper, a blog post can be altered or removed from public access. Blog posts are also often anonymous, whereas papers are signed and author-authenticated.

These more fluid properties of blogs make for their unique contribution to scientific culture. However, the very fluidity of blogs also makes them inadequate as the sole vessel of scientific publishing. In particular, blogging lacks the quality of “scholarly crystallization”.

A scientific publication needs to be crystallized in the sense that it is a permanent historical record that can be accessed indefinitely and therefore cited.

[Figure: scholarly crystallization]

Crystallized scientific publications include papers and reviews. Reviews are crystallized publications that serve mainly to evaluate one or several other crystallized publications. Crystallized publications are typically digitally authenticated documents that reference other scientific publications.

Crystallization does not mean that the work cannot be revised.

Revisions can be made, and a revision can “take precedence” over the previous version of a publication, meaning the revision will be the first thing seen by the reader. However, the author cannot edit a published paper. Instead, a revision is a separately published document linked to the previous version of the paper and accompanied by a “justification statement” that addresses the changes (typically made in response to reviewers’ comments). The justification statement is required for the revision to take precedence over the previous version and inherit its references. The authors of signed reviews are automatically informed about revisions and need to either reiterate or revise their supportive or critical reviews. Revisions may also be limited to two per year.

As a consequence, each publication and each revision requires a substantial commitment of its authors. The entire history of original publications and revisions remains permanently publicly accessible and the authors have no right or ability to remove this record.
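The revision mechanism can be pictured as an append-only chain of immutable versions, where a revision is shown first only if it carries a justification statement. The model below is my own hypothetical sketch of that rule, not an existing system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen: a published version can never be edited in place
class Version:
    text: str
    previous: Optional["Version"] = None  # link to the version this one revises
    justification: str = ""               # statement addressing the changes

def takes_precedence(v: Version) -> bool:
    """A revision is presented first only if it carries a justification statement;
    an original publication (no previous version) always stands on its own."""
    return v.previous is None or bool(v.justification)

original = Version("We report effect X.")
silent = Version("We report effect X (silently changed).", previous=original)
revised = Version("We report effect X; control analysis added.",
                  previous=original,
                  justification="Added control analysis in response to signed reviews.")

print(takes_precedence(silent))   # False: no justification, original stays on top
print(takes_precedence(revised))  # True
```

Because every `Version` keeps its `previous` link and cannot be mutated, the entire history of publications and revisions remains permanently accessible, as the post requires.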

Q: Why are there so few post-publication comments in PLoS ONE?

February 13, 2009

PLoS ONE is a lower-tier journal, and most scientists have a backlog of potentially highly important papers to catch up on. It is thus to be expected that most PLoS ONE papers will get few reviews.

If we ever get the ultimate solution, i.e. the ideal free publication system with open post-publication peer review, most papers will get few reviews, just as is now the case for PLoS ONE.

On the one hand, this serves a positive function: The community allocates its resources so as to pay more attention to papers with evidence of high quality. On the other hand, there is the danger that good or great work goes completely ignored.

To avoid the zero-reviews scenario, traditional editor solicitation of reviews remains an important mechanism in the new system: it helps establish a basic quality estimate for each serious scientific paper, so as to get the broader reception of the paper on its way.

An additional mechanism for the new system is that of author-solicited reviews. This would work similarly to recommendations and could help extremely controversial work acquire some support. Solicited reviews will need to be marked as author-solicited or editor-solicited, so that this information can be taken into account by any automatic paper assessment function.
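A paper assessment function could use the provenance marks like this. The weights are made-up illustrations of the idea that author-solicited reviews might be discounted, not recommendations.

```python
# toy weighting of review provenance in an automatic assessment function
# (weights are hypothetical illustrations, not recommendations)
PROVENANCE_WEIGHT = {
    "unsolicited": 1.0,        # spontaneous community review
    "editor-solicited": 1.0,   # baseline quality estimate
    "author-solicited": 0.5,   # discounted: authors may pick sympathetic reviewers
}

def weighted_rating(reviews):
    """Provenance-weighted mean rating; each review is a (provenance, rating) pair."""
    total = sum(PROVENANCE_WEIGHT[p] * r for p, r in reviews)
    norm = sum(PROVENANCE_WEIGHT[p] for p, _ in reviews)
    return total / norm if norm else 0.0

reviews = [("editor-solicited", 6), ("author-solicited", 9)]
print(round(weighted_rating(reviews), 2))  # 7.0: the glowing solicited review counts half
```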

Open post-publication peer review is distinct from the PLoS ONE system. PLoS ONE combines traditional pre-publication review with post-publication commenting. While it promotes publishing of the pre-publication reviews alongside the paper, it allows reviewers to opt for their reviews to remain secret.

In open post-publication peer review, every review is public, including its numerical ratings. In such a system, each PLoS ONE paper would have two to three editor-solicited reviews alongside it (PLoS ONE states that the average number of pre-publication reviews is 2.6), and the papers could be ranked according to their ratings.

In addition, the reviews would assess and rate importance along with technical soundness, whereas in PLoS ONE the reviews assess only technical soundness.

Technical soundness is a low bar, so acceptance in PLoS ONE will not place a random paper high on anyone’s list of reading priorities. The best papers in PLoS ONE probably deserve continued evaluation through peer review. The editor-solicited review process should therefore be used to obtain initial numerical estimates of paper quality on multiple scales, including importance. This would allow PLoS ONE fans to prioritize their reading of the journal and comment on the best papers.