Peer review debate in London

April 3, 2014

There was a fun debate on peer review and its future at the City University, London yesterday. As usual at such events, most of the speakers were from the publishing industry, representing Nature, Elsevier, BMJ, and BMC. This illustrates how we are still largely looking to professionals from the industry to lead us into the future of scientific publishing — despite the fact that web technology gives us all the tools we need to ditch the lame game of secret peer review and define new rules, a better game, for science.

Although most of the speakers were from the industry, Sylvia Tippmann and the other organisers from the Science Journalism programme at City University, London invited two scientists, Peter Ayton and myself.

Here’s the video of the debate and here’s a 3-minute interview they did with me afterwards.

Scientist meets publisher, Episode 2: Open evaluation

February 12, 2013


Scientist meets publisher, Episode 1: Open access

February 12, 2013


An emerging consensus for open evaluation: 18 visions for the future of scientific publishing

October 29, 2012

Diana Deca and I edited a collection of 18 visions for open evaluation and the future of scientific publishing. Our Editorial summarising the whole collection is here. The 18 individual papers are here, including my own vision, which is an elaborated and updated representation of the ideas I started to develop on this blog.



The future of scientific publishing: Open post-publication peer review

November 12, 2009

The current system

The essential drawbacks of the current system of scientific publishing are all connected to the particular way that peer review is used to evaluate papers. In particular, the current system suffers from a lack of quality and transparency of the peer review process, a lack of availability of evaluative information about papers to the public, and excessive costs incurred by a system in which private publishers are the administrators of peer review. These problems can all be addressed by open post-publication peer review (OPR). Together with open access (OA), which is generally accepted as desirable, OPR will revolutionize scientific publishing.

[Figure 1: The current system]

Open post-publication peer review (OPR)

Open: Any scientist can instantly publish a peer review on any published paper. The scientist will submit the review to a public repository. Reviews can include written text, figures, and numerical quality ratings. The repository will link each paper to all its reviews, such that readers are automatically presented with the evaluative meta-information. In addition, the repository allows anyone to rank papers according to a personal objective function computed on the basis of the public reviews and their numerical quality ratings. Peer review is open in both directions: (1) Any scientist can freely submit a review on any paper. (2) Anyone can freely access any review.

[Figure 2: The future system]

Post-publication: Reviews are submitted after publication, because the paper needs to be publicly accessible in order for any scientist to be able to review it. Post-publication reviews can add evaluative information to papers published in the current system (which have already been secretly reviewed before publication). For example, a highly controversial paper appearing in Science may motivate a number of supportive and critical post-publication reviews. The overall evaluation from these public reviews will affect the attention given to the paper by potential readers. The actual text of the reviews may help readers understand and judge the details of the paper.

[Figure 3: Peer-to-peer editing]

Peer review: Like the current system of pre-publication evaluation, the new system relies on peer review. For all of its faults, peer review is the best mechanism available for evaluation of scientific papers.

[Figure 4: Incoming reviews]

[Figure 5: Reviews backing up a paper]

In the future system, peer review is more similar to getting up to comment on a talk presented at a conference. Because these reviews do not decide about publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where uncompelling arguments can prevent publication: editors largely rely on the reviewers’ judgments because there is too little time and no formal mechanism for assessing those judgments.

[Figure 6: Paper ratings with error bars]

Signed or anonymous: The open peer reviews can be signed or anonymous. Reviews will be digitally authenticated by public-key cryptography. In analyzing the review information to rank papers, signed reviews can be given greater weight if there is evidence that they are more reliable.

[Figure 7: Paper evaluation function (PEF)]

Paper selection by arbitrary paper evaluation functions (PEFs): The necessary selection of papers for reading can be based on the reviews and their associated numerical judgments. Any reader can define a PEF based on content and quality criteria and will automatically be informed about papers best conforming to his or her criteria. The PEF could, for example, exclude anonymous reviews, exclude certain reviewers, or weight evidence for central claims over the potential impact of the results.
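To make this concrete, here is a minimal sketch (in Python) of what a PEF and the resulting ranking might look like. Nothing here reflects an existing system: the review fields, rating scales, and weights are hypothetical assumptions chosen only to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str      # public reviewer identifier, or "anonymous"
    signed: bool       # True if the review is signed
    evidence: float    # rating of evidential support for central claims (0-10, hypothetical scale)
    impact: float      # rating of potential impact (0-10, hypothetical scale)

def my_pef(reviews, excluded_reviewers=()):
    """A personal paper evaluation function: ignore anonymous reviews,
    skip excluded reviewers, and weight evidence twice as heavily as impact."""
    usable = [r for r in reviews
              if r.signed and r.reviewer not in excluded_reviewers]
    if not usable:
        return 0.0  # no usable evaluative information yet
    return sum(2 * r.evidence + r.impact for r in usable) / (3 * len(usable))

def rank_papers(papers, pef):
    """Rank papers (a dict mapping paper id -> list of Reviews) by a given PEF."""
    return sorted(papers, key=lambda pid: pef(papers[pid]), reverse=True)
```

A reader with different priorities would simply plug a different function into the same repository of public reviews.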

[Figure 8: Ready for high visibility]

Multiple lenses on the literature

The literature can be accessed through webportals that prioritize papers according to different PEFs. Organizations and individuals will define PEFs according to their own priorities. The free definability of PEFs will create a plurality of perspectives on the literature. The continual evolution of multiple PEFs renders the evaluation system “ungamable”, because PEFs can be adjusted in response to attempts to game the system.
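Continuing the hypothetical sketch above, the “lenses” are nothing more than different PEFs applied to the same public reviews; for example:

```python
# Two hypothetical portal "lenses" over the same literature:
def conservative_pef(reviews):
    """Trust only signed reviews and care only about evidential support."""
    signed = [r for r in reviews if r.signed]
    return sum(r.evidence for r in signed) / len(signed) if signed else 0.0

def novelty_pef(reviews):
    """Weight potential impact, taking all reviews into account."""
    return sum(r.impact for r in reviews) / len(reviews) if reviews else 0.0

# The same papers, ranked differently by the two portals:
# rank_papers(papers, conservative_pef)  vs.  rank_papers(papers, novelty_pef)
```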

[Figure 9: PEF lenses onto the literature]

The nature of a review in the current and future systems

[Figure: The nature of a review in the current and future systems]

Brief argument for open post-publication peer review

Full argument for open post-publication peer review


Q: Can’t research blogging serve the function of open post-publication peer review?

March 8, 2009

Short answer: Research blogging is important, but we also need a crystallized scientific record of post-publication reviews.

Research blogging fills an important gap: between informal discussions and formal publications. Unlike a private informal discussion, a blog is publicly accessible. Unlike a scientific paper, a blog post can be altered or removed from public access. Blog posts are also often anonymous, whereas papers are signed and author-authenticated.

These more fluid properties of blogs make for their unique contribution to scientific culture. However, the very fluidity of blogs also makes them inadequate as the sole vessel of scientific publishing. In particular, blogging lacks the quality of “scholarly crystallization”.

A scientific publication needs to be crystallized in the sense that it is a permanent historical record that can always be accessed and therefore cited.

[Figure: Scholarly crystallization]


Crystallized scientific publications include papers and reviews. Reviews are crystallized publications that serve mainly to evaluate one or several other crystallized publications. Crystallized publications are typically digitally authenticated documents that reference other scientific publications.

Crystallization does not mean that the work cannot be revised.

Revisions can be made and a revision can “take precedence” over the previous version of a publication. This means the revision will be the first thing seen by the user. However, the author cannot edit a published paper. Instead, a revision is a separately published document linked to the previous version of the paper and accompanied by a “justification statement” that addresses the changes (typically in response to reviewers’ comments). The justification statement is needed for the revision to take precedence over the previous version and inherit its references: the authors of signed reviews are automatically informed about revisions and need to either reiterate or revise their supportive or critical reviews. Revisions may also be limited to two per year.

As a consequence, each publication and each revision requires a substantial commitment of its authors. The entire history of original publications and revisions remains permanently publicly accessible and the authors have no right or ability to remove this record.
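As a minimal sketch of how such a crystallized, revision-linked record might be represented (all field names are hypothetical, not a specification of any existing repository):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)  # frozen: a crystallized record cannot be edited after publication
class CrystallizedPublication:
    doc_id: str                               # permanent identifier (hypothetical)
    authors: tuple
    published: date
    content_hash: str                         # digest of the digitally authenticated document
    previous_version: Optional[str] = None    # doc_id of the version this revision supersedes
    justification: Optional[str] = None       # statement addressing the changes

def takes_precedence(revision: CrystallizedPublication) -> bool:
    """A revision is presented first only if it links to a previous version and
    carries a justification statement; the full history remains accessible."""
    return revision.previous_version is not None and bool(revision.justification)
```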


Q: Why are there so few post-publication comments in PLoS ONE?

February 13, 2009

PLoS ONE is a lower-tier journal, and most scientists have a backlog of potentially highly important papers to catch up on. It is thus to be expected that most of its papers will get few reviews.

If we ever get the ultimate solution, i.e. the ideal free publication system with open post-publication peer review, most papers will get few reviews just as is the case now for PLoS ONE.

On the one hand, this serves a positive function: The community allocates its resources so as to pay more attention to papers with evidence of high quality. On the other hand, there is the danger that good or great work goes completely ignored.

To avoid the zero-reviews scenario, traditional editor solicitation of reviews remains an important mechanism in the new system: it helps establish a basic quality estimate for each serious scientific paper and gets the broader reception of the paper on its way.

An additional mechanism for the new system is that of author-solicited reviews. This would work similarly to recommendations and could help extremely controversial work to acquire some support. Solicited reviews will need to be marked as author-solicited or editor-solicited, so this information can be taken into account by any automatic paper assessment function.

Open post-publication peer review is distinct from the PLoS ONE system. PLoS ONE combines traditional pre-publication review with post-publication commenting. While it promotes publishing of the pre-publication reviews alongside the paper, it allows reviewers to opt for their reviews to remain secret.

In open post-publication peer review, every review is public, including its numerical ratings. In such a system, each PLoS ONE paper would have two to three editor-solicited reviews alongside it (PLoS ONE states that the average number of pre-publication reviews is 2.6) and the papers could be ranked according to their ratings.

In addition, the reviews would assess and rate importance along with technical soundness, whereas in PLoS ONE the reviews assess only technical soundness.

Technical soundness is a low bar, so acceptance in PLoS ONE will not place a random paper high on anyone’s list of reading priorities. The best papers in PLoS ONE probably deserve continued evaluation through peer review. The editor-solicited review process should therefore be used to get initial numerical estimates of paper quality on multiple scales, including importance. This would allow PLoS ONE fans to prioritize their reading of the journal and comment on the best papers.


Q: How can scientists be motivated to submit reviews in an open peer review system?

February 13, 2009

Scientists accept requests to review papers in the current system – this will not change. In the current system, scientists are approached by editors and asked to review new papers. They usually comply. In the new system, they will be approached similarly often with the same request – only the reviews will be public.

The motivation to review a paper is greater if the review is an open letter to the community. The fact that reviews are public makes reviewing a more meaningful and motivating activity.

In terms of power, the reviewer loses and gains: The reviewer loses the power to prevent or promote the publication of a paper by means of a secret review. The reviewer gains the power to speak to the whole community about the merits and shortcomings of the paper. The power lost is the kind of power that corrupts, the power gained is the kind of power that challenges us in a positive way.

With power comes responsibility. Again, the reviewer loses and gains: The reviewer loses the responsibility to decide the fate of the paper. This kind of responsibility is an ethical burden and creates political conflicts of interest. (The ethical burden is exacerbated by the fact that the judgment of two to four reviewers often turns out to be mistaken: it takes more minds and time to assess scientific advances.) The reviewer gains the responsibility to contribute to the community’s understanding of the new paper. This kind of responsibility is an honor and part of the collective cognitive process of the scientific community.

Reviewers may choose to post the pre-publication reviews they wrote for the current system. Once an open post-publication peer review system is in place, this becomes an option for every scientist: reviews of papers that appeared in the past can be posted if the arguments are still noteworthy and the papers sufficiently important.

Papers will still be submitted to journals and reviewed before publication for some time to come. However, every scientist reviewing a paper will ponder a new option:

After writing this review and submitting it to the journal, I could publish my ideas on this paper alongside the paper itself. So should I write this review as an open letter to the community?

The publication of a subset of the carefully written pre-publication reviews that are currently seen only by editors and authors will greatly add to the depth of the scientific literature. The reviews chosen by their authors for entry into the open system are likely to be of above-average quality. Unlike secret reviews, open reviews derive their power from compelling argument. This provides a strong scientific (rather than political) incentive.

Reviews will be citable publications in their own right. This will motivate reviewers in terms of both quality and quantity. Initially, reviews will not themselves be peer-reviewed publications. As the system develops, however, reviews will receive excitatory and inhibitory references from other publications.

Reviewing will gain status, because it is critical to the hierarchical organization of an exploding body of knowledge. Reviewing will be a more important and more formally acknowledged component of scientific activity than it currently is. This is needed in order to evaluate and integrate our exploding body of knowledge.

Highly visible and controversial papers will generate motivation to publish reviews from the beginning. Controversial studies, such as the upcoming paper on “voodoo correlations” in social neuroscience by Vul et al., already motivate many careful responses. Currently, these responses are published by email and on blogs. Some of them will later appear in journals. If a general system for post-publication review existed, these responses would have been published in that system – in addition to being featured on blogs.

It is interesting to note that the Vul et al. paper has been intensely discussed on the web and covered in Nature, the New Scientist, Scientific American, and Newsweek, but will only “appear” – an almost irrelevant concept – in September 2009. This provides an extreme illustration of one of the shortcomings of the current system of scientific publishing: the substantial publication delays. The journal’s making the paper pretty and physically printing a few copies is really an afterthought in this example. Note, however, that the journal was key in selecting the paper for publication after review, thus justifying the attention it got. The initial three reviews will therefore often be solicited by editors in the new system – as they are in the current system.

The number and content of signed reviews written by a given scientist are public pieces of information, thus motivating scientists to contribute. A general open peer review system allows reviewing activity to be analyzed with the same methods used to analyze other publication activity. For anonymous reviews, each scientist’s number and distribution of ratings should also be made publicly available. This information will enable reviewer-specific rating normalizations to be used in computing an overall value index for a scientific paper. A scientist’s reviewing rate and proportion of signed reviews will be public and might, in extreme cases, affect other scientists’ willingness to review his or her papers.
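As a sketch of how reviewer-specific rating normalization could feed into an overall value index (the data layout and rating scale are assumptions for illustration only):

```python
from statistics import mean, pstdev

def normalize(rating, reviewer_ratings):
    """Z-score a rating against the reviewer's own public rating distribution,
    so that habitually harsh and habitually generous reviewers become comparable."""
    mu = mean(reviewer_ratings)
    sigma = pstdev(reviewer_ratings) or 1.0  # guard against a reviewer who always gives the same rating
    return (rating - mu) / sigma

def value_index(paper_reviews, rating_history):
    """Overall value index of a paper: the mean of reviewer-normalized ratings.
    paper_reviews: list of (reviewer_id, rating) pairs;
    rating_history: dict mapping reviewer_id -> list of all ratings that reviewer has given."""
    normalized = [normalize(r, rating_history[rev]) for rev, r in paper_reviews]
    return mean(normalized) if normalized else 0.0
```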


Open post-publication peer review (full argument)

February 12, 2009

(1) Drawbacks of the current system of scientific publishing

Not generally open access

Scientific papers benefit society only to the extent that they are accessible. If the public pays for scientific research it should demand that the results be openly accessible.

Impoverished evaluative signal for choosing papers

The main evaluative signal provided to readers for prioritizing their reading of scientific papers is journal prestige. We are more likely to attend to a paper published in Nature than to a similar paper published in a specialized journal. While journal prestige is clearly correlated with the quality of scientific papers, it provides merely a unidimensional, thus greatly impoverished, evaluative signal. The detailed reviews and multidimensional ratings provided to the journal by the reviewers are kept secret.

Moreover, journal prestige as an evaluative signal is compromised by circularity: Prestige derives from journal impact factors, which in turn depend on citation frequencies. Since a paper published in Nature will be cited more than the same paper published in a specialized journal, prestige – once acquired – creates its own reality. Journal impact factors (especially the short-term journal impact factors that are commonly used) therefore give us a quality index distorted to an unknown degree by the self-fulfilling prophecy of citation frequency.

Intransparent and unsatisfactory evaluation process

The current system of publishing is based on an intransparent evaluation process that includes secret reviews visible only to editors and authors. For high-impact publications, the process is additionally compromised by informal comments from influential people. (Such informal additional sources of evaluation may often improve the quality of the decisions made – this is why they are used. However, this practice compromises the transparency and objectivity of the system.)

The selection of a paper for publication is typically based on two to four peer reviews. The quality of an original and challenging scientific paper cannot reliably be assessed by such a small number of reviewers – even if the reviewers are experts and have no conflict of interest (i.e. they are not competitors). In reality, the reviewers who are experts in the subfield of a paper often have some personal stake in the paper’s publication. They may be invested in the theory supported or in another theory. More generally, they may have competitive feelings that compromise their objectivity.

For high-impact publications, this political dynamic is exacerbated because the stakes are higher and more scientists are competing for a smaller stage. To make matters worse, high-impact publications require their reviewers to judge the expected future consequences of a paper – a necessarily somewhat subjective projection of where the field will move and how it will be affected by the paper under review. Despite these additional sources of noise in the value signal provided by the reviews, high-impact journals – more than specialized journals – need precise quality assessments if they are to realize their claim of selecting only the very best papers.

Long publication delays

The current system of journal-controlled pre-publication review delays publication of papers by months, often even by more than a year. Scientific papers are the major mode of scientific communication. The months-long delay in the crucial communication line slows the progress of science.

Excessive costs

Scientific publishers are predominantly for-profit organizations that charge a lot of money for their services. We need to assess whether the benefit to science of the services provided justifies the cost of the system.

Private and intransparent control

In the current system, the key function of evaluating and selecting papers is supervised by private publishing companies. Although papers are reviewed by scientists, the selection of reviewers and the decisions about publication are largely in the hands of private publishers. The publishers are professional at what they do, draw from a large amount of experience, and have a reputation to defend. However, profit maximization can be in conflict with what is best for science. The arguments favoring public funding for other aspects of science (such as the research activity itself) also apply to scientific publishing.

(2) Positive functions of the current system of scientific publishing

Providing an evaluative signal that helps select papers to read

The current system serves the function of administering peer review and providing an evaluative signal, namely journal prestige. This function is critical to scientific progress. However, the arguments above suggest that the current system does not serve this function satisfactorily.

Providing a beautiful layout for papers

The current system also provides an appealing layout for scientific papers. This function is desirable, but not critical to scientific progress.

(3) Some recent positive developments

PLoS and other open-access journals

The Public Library of Science (PLoS) journals (http://www.plos.org/) and other open-access publications make scientific papers freely accessible. However, they do not address the other drawbacks of the current system.

PLoS ONE (http://www.plosone.org/home.action) takes a further step forward by using pre-publication review only to establish that a paper is “technically sound”, not to assess its importance. This is likely to render peer review more objective. However, it does not help readers choose what to read. PLoS ONE also offers a system for adding comments to papers. This is yet another step forward: toward post-publication peer review.

However, the PLoS journals are classical journals in that quality control relies on pre-publication review, tolerating the evaluation inaccuracies and delays and failing to provide detailed evaluative information, such as public reviews.

The Frontiers journals

The Frontiers journals (http://frontiersin.org/), starting with Frontiers in Neuroscience, combine open access, a new system for constructive and interactive pre-publication peer review, web-based community tools, and post-publication quasi-democratic evaluation of papers. Moreover, Frontiers provides a hierarchy of journals from specialized (e.g. Frontiers in Systems Neuroscience) to general (Frontiers in Neuroscience). The hierarchy may be extended upward in the future.

Importantly, papers are first published in the specialized journals. Based on the additional evaluative information accumulated in the reception of the papers by the community, a subset of projects is selected for wider publication in a higher-tier journal. This has several advantages over conventional approaches: Selection for greater visibility is based on more evidence than is available to traditional high-impact publications (which rely only on the few reviews and informal opinions they solicit). The higher tier thus responds more slowly and, ideally, more wisely, avoiding drawing attention to findings that do not survive confrontation with a larger group of peer scientists than can initially be asked to review a paper.

The Frontiers system is visionary and represents a substantial step in the right direction. As with the PLoS journals, however, quality control for the lowest tier still relies on pre-publication review, tolerating the evaluation inaccuracies and delays and failing to provide detailed evaluative information, such as public reviews.

Faculty of 1000

Faculty of 1000 (http://www.facultyof1000.com/) provides very brief post-publication recommendations of papers with a simple rating (“Recommended”, “Must-read”, “Exceptional”). The post-publication review idea is a step forward. However, the reviewing is limited to a select group of highly distinguished scientists – a potential source of bias. Evaluations are recommendations – there is no mechanism for negative reviews. Numerical evaluations are unidimensional, thus providing only a very limited signal. Finally, the recommendation text is a brief statement, not a detailed review.

ResearchBlogging.org

ResearchBlogging.org collects blog-based responses to peer-reviewed papers. This is a big advance, as it allows anyone to participate and provide evaluative information, which can be accessed through the ResearchBlogging.org website. However, as yet these responses lack numerical ratings that could be automatically analyzed for paper evaluations, blog responses are not digitally signed for author identification, and the responses are not visible when viewing the target paper itself.

(4) The crucial innovation: open post-publication peer review

Beyond open access, which is generally considered desirable, the essential drawbacks of the current system of scientific publishing are all connected to the particular way that peer review is used to evaluate papers. In particular, the current system suffers from a lack of quality and transparency of the peer review process, a lack of availability of evaluative information about papers to the public, and excessive costs incurred by a system in which private publishers are the administrators of peer review. These problems can all be addressed by open post-publication peer review.

Open: Any scientist can instantly publish a peer review on any published paper. The scientist will submit the review to a public repository. Reviews can include written text, figures, and numerical quality ratings. The repository will link each paper to all its reviews, such that readers are automatically presented with the evaluative meta-information. In addition, the repository allows anyone to rank papers according to a personal objective function computed on the basis of the public reviews and their numerical quality ratings. Peer review is open in both directions: (1) Any scientist can freely submit a review on any paper. (2) Anyone can freely access any review.

Post-publication: Reviews are submitted after publication, because the paper needs to be publicly accessible in order for any scientist to be able to review it. Post-publication reviews can add evaluative information to papers published in the current system (which have already been secretly reviewed before publication). For example, a highly controversial paper appearing in Science may motivate a number of supportive and critical post-publication reviews. The overall evaluation from these public reviews will affect the attention given to the paper by potential readers. The actual text of the reviews may help readers understand and judge the details of the paper.

Peer review: Like the current system of pre-publication evaluation, the new system relies on peer review. For all of its faults, peer review is the best mechanism available for evaluation of scientific papers. Note however, that public post-publication reviews differ in two crucial respects:

(1) They do not decide about publication – as the papers reviewed are already published.

(2) They are public communications to the community at large, not secret communications to editors and authors.

This makes the peer reviews the equivalent of getting up to comment on a talk presented at a conference. Because these reviews do not decide about publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where uncompelling arguments can prevent publication: editors largely rely on the reviewers’ judgments because there is too little time and no formal mechanism for assessing those judgments.

Signed or anonymous: The open peer reviews can be signed or anonymous. In analyzing the review information to rank papers, signed reviews can be given greater weight if there is evidence that they are more reliable.

Digitally authenticated: Reviewers can digitally sign their reviews by public-key cryptography (http://en.wikipedia.org/wiki/Public-key_cryptography). The idea of digitally signed public reviews has been developed here (http://code.google.com/p/gpeerreview/), where further discussion and a basic software tool that implements this function can be found.
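As a rough illustration of what such digital authentication might look like in practice, here is a sketch using Ed25519 signatures from the Python cryptography package; the review format and key handling are illustrative assumptions, not the gpeerreview implementation.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The reviewer holds a private key; the matching public key would be published
# with his or her reviewer profile (an assumption of this sketch).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

review = b"paper: <identifier> | rating: 7/10 | The controls convincingly rule out ..."
signature = private_key.sign(review)  # the reviewer signs the exact review text

# Anyone can verify that the review text is unaltered and was produced
# by the holder of the corresponding private key.
try:
    public_key.verify(signature, review)
    print("review is authentic")
except InvalidSignature:
    print("review was altered or not signed by this reviewer")
```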

Paper selection by arbitrary evaluation functions: The necessary selection of papers for reading can be based on the reviews and their associated numerical judgments. Any reader can define a paper selection function based on content and quality criteria and will automatically be informed about papers best conforming to his or her criteria. The evaluative function could, for example, exclude anonymous reviews, exclude certain reviewers, or weight evidence for central claims over the potential impact of the results.

Webportals as entry points to the literature: Webportals can define such evaluation functions for subcommunities – for scientists too busy (or too lazy) to define their own. Such webportals would provide generalized access to the literature that transcends all journals. A webportal can be established cheaply by individuals or larger organizations that share a common set of criteria for paper prioritization.

(5) What will open post-publication peer review achieve?

Open post-publication peer review provides a general, transparent, community-controlled, and publicly available mechanism for review, evaluation, and prioritization of the scientific literature.

Reviews as open letters to the community: Reviews will no longer be secretive communications deciding about publication. They will be open letters to the community with numerical quality ratings that will influence paper visibility on webportals. Open post-publication review will build on the current system by providing a forum for comments and evaluations of papers.

Open posting of private pre-publication reviews: It will allow the original pre-publication reviewers of a paper to make their reviews public, so that their work in reviewing the paper can be of benefit to the readers of the paper and to the community at large.

Community control of the critical function of paper evaluation: Open post-publication peer review allows the scientific community to organize the evaluation of papers, thus taking control of this critical function, which is currently administered by publishers.

Improving evaluation quality: The quality of the evaluative signals will be improved by post-publication review for a number of reasons:

(1) Since reviews are open letters to the community, their power depends on how compelling they are to the community. (In the present system, a scientist can reject a paper with no good arguments at all – for a high-impact journal, perhaps he’ll say that the paper is good, but claim that the finding is not sufficiently surprising.)

(2) Many reviews will be signed, so the reviewer’s reputation is on the line: he or she will want to look smart and reasonable. (Anonymous reviews can be down-weighted in assessment functions if they are thought to be less reliable.)

(3) Important papers will accumulate more reviews over time as the review phase is open ended, thus providing an increasingly reliable evaluative signal.

Eventually, journal prestige will no longer be needed as an evaluative signal.

Merging review and reception: Currently review is a time-limited pre-publication process and reception of a paper by the community occurs later and over a much longer period, providing a very delayed – but ultimately important – evaluative signal: the number of citations a paper receives. Open post-publication peer review will remove the artificial and unnecessary separation of review and reception. It will provide for a single integrated process of open-ended reception and review of each paper by the community.

(6) Transition to a completely open system for scientific publishing

Free publishing: Once open post-publication peer review provides the critical evaluation function, papers themselves will no longer strictly need journals in order to become part of the scientific literature. They can be published like the reviews: as digitally signed documents that are instantly publicly available. Post-publication review will provide evaluative information for any sufficiently important publication.

Instant publishing: With post-publication review in place, there is no strong argument for pre-publication review. Publication on the internet can, thus, be instant and reviews will follow as part of the integrated post-publication process of reception and evaluation.

Peer-to-peer editor choice: After publication, the author asks a senior scientist in his or her field to edit the paper. If the senior scientist accepts, an acknowledgment of his or her role as editor will be added to the paper. The editor’s job is to select two to four reviewers and to email them with the request to publicly review the paper. If they decline, the editor has to find replacements.

However, anyone else is allowed to review the paper as well. In particular, the author may also inform other scientists of the publication and ask them to review the paper. Author- and editor-requested reviews will be marked as such. Requested as well as unrequested reviews can be signed or unsigned.

Editors must not have been at the same institution as the authors or on any paper with them. Reciprocal or within-clique review editing is monitored and discouraged. Such information will in any case be publicly available after the fact and may be used to weight the reviews in any automatic quality assessment.

Reviewing-activity statistics: Reviewers must be registered as professional scientists, whether they sign a given review or not. The level of reviewing activity of a scientist is a public piece of information. Similarly, it is public information how many reviews each scientist has written anonymously. (However, it is not public information, of course, which papers a scientist reviewed anonymously.) In general, each scientist is expected to write about as many reviews as he or she receives.

The process of reception and review: Good papers will accumulate many positive judgments over time and bubble up in the process – some after four reviews and two weeks past publication, others after years.
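A minimal sketch of how a repository or webportal might summarize a paper’s accumulating ratings, with an error bar that tightens as more reviews come in (the 0–10 rating scale is a hypothetical assumption):

```python
from statistics import mean, stdev
from math import sqrt

def rating_summary(ratings):
    """Mean rating with its standard error: the error bar shrinks as a paper
    accumulates reviews, so the evaluative signal becomes more reliable."""
    n = len(ratings)
    m = mean(ratings)
    se = stdev(ratings) / sqrt(n) if n > 1 else float("inf")
    return m, se

print(rating_summary([7, 8, 6, 9]))      # after 4 reviews: mean 7.5, SE ~0.6
print(rating_summary([7, 8, 6, 9] * 5))  # after 20 similar reviews: mean 7.5, SE ~0.3
```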

High-prestige publications such as Nature and Science could take note of independently published studies that have turned out to be important. Based on the broader and more reliable evidence of public review, they could decide to showcase, i.e. to republish, these projects – perhaps in a modified version suited for their more general audience. These high-prestige journals would, thus, benefit from the greater quality of the broader and deeper public evaluation of the papers, which would contribute to the quality of their product.

Access to the literature: Webportals will serve as entry points to the literature, analyzing the numerical judgments by different criteria of quality and content (including the use of meta-information about the scientists that submitted the reviews). There will be many competing definitions of quality – a unique one for each webportal or each individual defining his or her own paper evaluation function.

Revisions: If the weight of the criticism in the accumulated reviews and the importance of the paper justify it, the authors have the option to revise their paper. The revision will then be the first thing the reader sees when accessing the paper and the authors’ response to the reviews may render the criticism obsolete. However, the history of revisions of the paper, starting with the originally published version, will always be available – along with the complete history of reviews and author responses.


Open post-publication peer review (brief argument)

February 10, 2009

The first step toward change is to imagine an alternative. This is a synoptic version of a more extended argument presented here.

The current system of scientific publishing

The current system of scientific publishing is not generally open access, provides only journal prestige as an indication of the quality of new papers, relies on a secretive, intransparent, and inherently noisy paper evaluation process, delays paper publication by many months on average, incurs excessive costs, and is privately and intransparently controlled.

The current system serves two positive functions: (1) It administers peer review and provides an evaluative signal that helps readers choose papers, namely journal prestige. This function is critical to scientific progress. However, the drawbacks listed above suggest that the current system does not serve this function satisfactorily. (2) The current system provides an appealing layout for scientific papers. This function is desirable, but not critical to scientific progress.

Recent positive developments in scientific publishing include the Public Library of Science (PLoS) and other open-access journals, the Frontiers journals, and Faculty of 1000, which provides very brief post-publication recommendations of papers with a simple rating (“Recommended”, “Must-read”, “Exceptional”). ResearchBlogging.org collects blog-based responses to peer-reviewed papers – a big advance, as it allows anyone to participate and provide evaluative information. However, as yet these responses lack numerical ratings, are not digitally signed, and are not visible when viewing the target paper itself.

While these developments represent important steps in the right direction, they do not go far enough to fully address all problems related to the way the current system utilizes peer review, namely the lack of quality and transparency of the peer review process, the substantial publication delays, the lack of availability of evaluative information about papers to the public, and the excessive costs incurred by a system in which private publishers are the sole administrators of peer review.

The crucial innovation: open post-publication peer review

These problems can all be addressed by open post-publication peer review.

Open: Any scientist can instantly publish a peer review on any published paper. The scientist will submit the review to a public repository. Reviews can include written text, figures, and numerical quality ratings on multiple scales. The repository will link each paper to all its reviews, such that readers are automatically presented with the evaluative meta-information. In addition, the repository allows anyone to rank papers according to a personal objective function computed on the basis of the public reviews and their numerical quality ratings. Peer review is open in both directions: (1) Any scientist can freely submit a review on any paper. (2) Anyone can freely access any review.

Post-publication: Reviews are submitted after publication, because the paper needs to be publicly accessible in order for any scientist to be able to review it. Post-publication reviews can add evaluative information to papers published in the current system (which have already been secretly reviewed before publication). For example, a highly controversial paper appearing in Science may motivate a number of supportive and critical post-publication reviews. The overall evaluation from these public reviews will affect the attention given to the paper by potential readers. The actual text of the reviews may help readers understand and judge the details of the paper.

Peer review: Like the current system of pre-publication evaluation, the new system relies on peer review. For all of its faults, peer review is the best mechanism available for evaluation of scientific papers. Note however, that public post-publication reviews differ in two crucial respects:

(1) They do not decide about publication – as the papers reviewed are already published.

(2) They are public communications to the community at large, not secret communications to editors and authors.

This makes the peer reviews the equivalent of getting up to comment on a talk presented at a conference. Because these reviews do not decide about publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where uncompelling arguments can prevent publication: editors largely rely on the reviewers’ judgments because there is too little time and no formal mechanism for assessing those judgments.

Merging review and reception: Currently review is a time-limited pre-publication process and reception of a paper by the community occurs later and over a much longer period, providing a very delayed – but ultimately important – evaluative signal: the number of citations a paper receives. Open post-publication peer review will remove the artificial and unnecessary separation of review and reception. It will provide for a single integrated process of open-ended reception and review of each paper by the community.

Signed or anonymous reviews: The open peer reviews can be signed or anonymous. In analyzing the review information to rank papers, signed reviews can be given greater weight if there is evidence that they are more reliable.

Digitally authenticated: Reviewers can digitally sign their reviews by public-key cryptography (http://en.wikipedia.org/wiki/Public-key_cryptography). The idea of digitally signed public reviews has been developed here (http://code.google.com/p/gpeerreview/), where further discussion and a basic software tool that implements this function can be found.

Reviews as open letters to the community: Reviews will no longer be secretive communications deciding about publication. They will be open letters to the community with numerical quality ratings that will influence paper visibility on webportals (see below). Open post-publication review will build on the current system by providing a forum for comments and evaluations of papers.

Open posting of private pre-publication reviews: The new system will allow the original pre-publication reviewers of a paper to make their reviews public, so that their work in reviewing the paper can be of benefit to the readers of the paper and to the community at large.

Community control of the critical function of paper evaluation: Open post-publication peer review allows the scientific community to organize the evaluation of papers, thus taking control of this critical function, which is currently administered by publishers.

Improving evaluation quality: The quality of the evaluative signals will be improved by post-publication review for a number of reasons:

(1) Since reviews are open letters to the community, their power depends on how compelling they are to the community. (In the present system, a scientist can reject a paper with no good arguments at all – for a high-impact journal, perhaps he’ll say that the paper is good, but claim that the finding is not sufficiently surprising.)

(2) Many reviews will be signed, so the reviewer’s reputation is on the line: he or she will want to look smart and reasonable. (Anonymous reviews can be down-weighted in assessment functions if they are thought to be less reliable.)

(3) Important papers will accumulate more reviews over time as the review phase is open ended, thus providing an increasingly reliable evaluative signal.

Eventually, journal prestige will no longer be needed as an evaluative signal.

Paper selection by arbitrary evaluation functions: The necessary selection of papers for reading can be based on the reviews and their associated numerical judgments. Any reader can define a paper selection function based on content and quality criteria and will automatically be informed about papers best conforming to his or her criteria. The evaluative function could, for example, exclude anonymous reviews, exclude certain reviewers, or weight evidence for central claims over the potential impact of the results.

Webportals as entry points to the literature: Webportals will serve as entry points to the literature, analyzing the numerical judgments in the reviews by different criteria of quality and content (including the use of meta-information about the scientists that submitted the reviews). There will be many competing definitions of quality – a unique one for each webportal or each individual defining his or her own paper evaluation function. Webportals can define such evaluation functions for subcommunities – for scientists too busy (or too lazy) to define their own. A webportal can be established cheaply by individuals or groups whose members share a common set of criteria for paper prioritization.

The ultimate goal: free, instant scientific publishing

Free instant publishing: Once open post-publication peer review provides the critical evaluation function, papers themselves will no longer strictly need journals in order to become part of the scientific literature. They can be published like the reviews: as digitally signed documents that are instantly publicly available. Post-publication review will provide evaluative information for any sufficiently important publication. With post-publication review in place, there is no strong argument for pre-publication review. Publication on the internet can, thus, be instant and reviews will follow as part of the integrated post-publication process of reception and evaluation.

Peer-to-peer editor choice: After publication, the author asks a senior scientist in his or her field to edit the paper. If the senior scientist accepts, an acknowledgment of his or her role as editor will be added to the paper. The editor’s job is to select two to four reviewers and to email them with the request to publicly review the paper. If they decline, the editor has to find replacements.

However, anyone else is allowed to review the paper as well. In particular, the author may also inform other scientists of the publication and ask them to review the paper. Author- and editor-requested reviews will be marked as such. Requested as well as unrequested reviews can be signed or unsigned.

Editors must not have been at the same institution as the authors or on any paper with them. Reciprocal or within-clique review editing is monitored and discouraged. Such information will in any case be publicly available after the fact and may be used to weight the reviews in any automatic quality assessment.

Revisions: If the weight of the criticism in the accumulated reviews and the importance of the paper justify it, the authors have the option to revise their paper. The revision will then be the first thing the reader sees when accessing the paper and the authors’ response to the reviews may render the criticism obsolete. However, the history of revisions of the paper, starting with the originally published version, will always be available – along with the complete history of reviews and author responses.

Reviewing-activity statistics: Reviewers must be registered as professional scientists, whether they sign a given review or not. The level of reviewing activity of a scientist is a public piece of information. Similarly, it is public information how many reviews each scientist has written anonymously. (However, it is not public information, of course, which papers a scientist reviewed anonymously.) In general, each scientist is expected to write about as many reviews as he or she receives.

The process of reception and review: Good papers will accumulate many positive judgments over time and bubble up in the process – some after four reviews and two weeks past publication, others after years.

High-prestige publications such as Nature and Science could take note of independently published studies that have turned out to be important. Based on the broader and more reliable evidence of public review, they could decide to showcase, i.e. to republish, these projects – perhaps in a modified version suited for their more general audience. These high-prestige journals would, thus, benefit from the greater quality of the broader and deeper public evaluation of the papers, which would contribute to the quality of their product.