(1) Drawbacks of the current system of scientific publishing
Not generally open access
Scientific papers benefit society only to the extent that they are accessible. If the public pays for scientific research, it should demand that the results be openly accessible.
Impoverished evaluative signal for choosing papers
The main evaluative signal provided to readers for prioritizing their reading of scientific papers is journal prestige. We are more likely to attend to a paper published in Nature than to a similar paper published in a specialized journal. While journal prestige is clearly correlated with the quality of scientific papers, it provides merely a unidimensional, thus greatly impoverished, evaluative signal. The detailed reviews and multidimensional ratings provided to the journal by the reviewers are kept secret.
Moreover, journal prestige as an evaluative signal is compromised by circularity: Prestige derives from journal impact factors, which in turn depend on citation frequencies. Since a paper published in Nature will be cited more than the same paper published in a specialized journal, prestige – once acquired – creates its own reality. Journal impact factors (especially the short-term journal impact factors commonly used) therefore give us a quality index distorted to an unknown degree by the self-fulfilling prophecy of citation frequency.
Opaque and unsatisfactory evaluation process
The current system of publishing is based on an opaque evaluation process that includes secret reviews visible only to editors and authors. For high-impact publications, the process is additionally compromised by informal comments from influential people. (Such informal additional sources of evaluation may often improve the quality of the decisions made – this is why they are used. However, this practice compromises the transparency and objectivity of the system.)
The selection of a paper for publication is typically based on two to four peer reviews. The quality of an original and challenging scientific paper cannot reliably be assessed by such a small number of reviewers – even if the reviewers are experts and have no conflict of interest (i.e. they are not competitors). In reality, the reviewers who are experts in a paper's subfield often have some personal stake in its publication. They may be invested in the theory the paper supports or in a rival theory. More generally, they may have competitive feelings that compromise their objectivity.
For high-impact publications, this political dynamic is exacerbated because the stakes are higher and more scientists are competing for a smaller stage. To make matters worse, high-impact publications require their reviewers to judge the expected future consequences of a paper, a necessarily subjective projection of where the field will move and how the paper under review will affect it. Despite these additional sources of noise in the value signal provided by the reviews, high-impact journals – more than specialized journals – need precise quality assessments if they are to realize their claim of selecting only the very best papers.
Long publication delays
The current system of journal-controlled pre-publication review delays publication of papers by months, often even by more than a year. Scientific papers are the major mode of scientific communication. The months-long delay in this crucial line of communication slows the progress of science.
High costs
Scientific publishers are predominantly for-profit organizations that charge a lot of money for their services. We need to assess whether the benefit their services provide to science justifies the cost of the system.
Private and opaque control
In the current system, the key function of evaluating and selecting papers is supervised by private publishing companies. Although papers are reviewed by scientists, the selection of reviewers and the decisions about publication are largely in the hands of private publishers. The publishers are professional at what they do, draw from a large amount of experience, and have a reputation to defend. However, profit maximization can be in conflict with what is best for science. The arguments favoring public funding for other aspects of science (such as the research activity itself) also apply to scientific publishing.
(2) Positive functions of the current system of scientific publishing
Providing an evaluative signal that helps select papers to read
The current system serves the function of administering peer review and providing an evaluative signal, namely journal prestige. This function is critical to scientific progress. However, the arguments above suggest that the current system does not serve this function satisfactorily.
Providing a beautiful layout for papers
Another function of the current system is to provide an appealing layout for scientific papers. This function is desirable, but not critical to scientific progress.
(3) Some recent positive developments
PLoS and other open-access journals
The Public Library of Science (PLoS) journals (http://www.plos.org/) and other open-access publications make scientific papers freely accessible. However, they do not address the other drawbacks of the current system.
PLoS ONE (http://www.plosone.org/home.action) takes a further step forward by using pre-publication review only to establish that a paper is “technically sound”, not to assess its importance. This is likely to render peer review more objective. However, it does not help readers choose what to read. PLoS ONE also offers a system for adding comments to papers. This is yet another step forward: toward post-publication peer review.
However, the PLoS journals are classical journals in that quality control relies on pre-publication review, tolerating the evaluation inaccuracies and delays and failing to provide detailed evaluative information, such as public reviews.
The Frontiers journals
The Frontiers journals (http://frontiersin.org/), starting with Frontiers in Neuroscience, combine open access, a new system for constructive and interactive pre-publication peer review, web-based community tools, and post-publication quasi-democratic evaluation of papers. Moreover, Frontiers provides a hierarchy of journals from specialized (e.g. Frontiers in Systems Neuroscience) to general (Frontiers in Neuroscience). The hierarchy may be extended upward in the future.
Importantly, papers are first published in the specialized journals. Based on the additional evaluative information accumulated as the community receives the papers, a subset of papers is selected for wider publication in a higher-tier journal. This has several advantages over conventional approaches: Selection for greater visibility is based on more evidence than is available to traditional high-impact publications (which rely only on the few reviews and informal opinions they solicit). The higher tier thus responds more slowly and, ideally, more wisely, avoiding drawing attention to findings that do not survive confrontation with a larger group of peer scientists than can initially be asked to review a paper.
The Frontiers system is visionary and represents a substantial step in the right direction. As with the PLoS journals, however, quality control for the lowest tier still relies on pre-publication review, tolerating the evaluation inaccuracies and delays and failing to provide detailed evaluative information, such as public reviews.
Faculty of 1000
Faculty of 1000 (http://www.facultyof1000.com/) provides very brief post-publication recommendations of papers with a simple rating (“Recommended”, “Must-read”, “Exceptional”). The post-publication review idea is a step forward. However, the reviewing is limited to a select group of highly distinguished scientists – a potential source of bias. Evaluations are recommendations – there is no mechanism for negative reviews. Numerical evaluations are unidimensional, thus providing only a very limited signal. Finally, the recommendation text is a brief statement, not a detailed review.
ResearchBlogging.org
ResearchBlogging.org collects blog-based responses to peer-reviewed papers. This is a big advance, as it allows anyone to participate and provide evaluative information, which can be accessed through the ResearchBlogging.org website. However, as yet these responses lack numerical ratings that could be automatically analyzed for paper evaluations, blog responses are not digitally signed for author identification, and the responses are not visible when viewing the target paper itself.
(4) The crucial innovation: open post-publication peer review
Beyond open access, which is generally considered desirable, the essential drawbacks of the current system of scientific publishing are all connected to the particular way that peer review is used to evaluate papers. In particular, the current system suffers from a lack of quality and transparency in the peer review process, a lack of publicly available evaluative information about papers, and excessive costs incurred by a system in which private publishers are the administrators of peer review. These problems can all be addressed by open post-publication peer review.
Open: Any scientist can instantly publish a peer review on any published paper. The scientist will submit the review to a public repository. Reviews can include written text, figures, and numerical quality ratings. The repository will link each paper to all its reviews, such that readers are automatically presented with the evaluative meta-information. In addition, the repository will allow anyone to rank papers according to a personal objective function computed on the basis of the public reviews and their numerical quality ratings. Peer review is open in both directions: (1) Any scientist can freely submit a review on any paper. (2) Anyone can freely access any review.
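As a rough illustration of the repository described above, the core linkage of papers to their public reviews could be sketched as follows. This is a minimal hypothetical sketch, not an existing system; all names (Review, ReviewRepository, the rating dimensions) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    """A public post-publication review stored in the repository (hypothetical schema)."""
    paper_doi: str                  # identifier of the reviewed paper
    text: str                       # the written review text
    ratings: dict[str, float]       # numerical quality ratings, e.g. {"soundness": 8.0}
    reviewer: Optional[str] = None  # signed reviews carry the reviewer's name; None = anonymous

class ReviewRepository:
    """Links each paper to all of its reviews, as the repository is meant to do."""
    def __init__(self):
        self._reviews: dict[str, list[Review]] = {}

    def submit(self, review: Review) -> None:
        # Any scientist can instantly publish a review on any published paper.
        self._reviews.setdefault(review.paper_doi, []).append(review)

    def reviews_for(self, paper_doi: str) -> list[Review]:
        # Readers are automatically presented with the evaluative meta-information.
        return self._reviews.get(paper_doi, [])
```

A reader fetching a paper would then see its accumulated reviews alongside it, e.g. `repo.reviews_for("10.1000/xyz")` after reviews have been submitted for that (hypothetical) identifier.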
Post-publication: Reviews are submitted after publication, because the paper needs to be publicly accessible in order for any scientist to be able to review it. Post-publication reviews can add evaluative information to papers published in the current system (which have already been secretly reviewed before publication). For example, a highly controversial paper appearing in Science may motivate a number of supportive and critical post-publication reviews. The overall evaluation from these public reviews will affect the attention given to the paper by potential readers. The actual text of the reviews may help readers understand and judge the details of the paper.
Peer review: Like the current system of pre-publication evaluation, the new system relies on peer review. For all its faults, peer review is the best mechanism available for the evaluation of scientific papers. Note, however, that public post-publication reviews differ in two crucial respects:
(1) They do not decide about publication – as the papers reviewed are already published.
(2) They are public communications to the community at large, not secret communications to editors and authors.
This makes a peer review the equivalent of getting up to comment on a talk presented at a conference. Because these reviews do not decide about publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where uncompelling arguments can prevent publication: editors largely rely on reviewers’ judgments, and there is too little time and no formal mechanism for judging those judgments.
Signed or anonymous: The open peer reviews can be signed or anonymous. In analyzing the review information to rank papers, signed reviews can be given greater weight if there is evidence that they are more reliable.
Digitally authenticated: Reviewers can digitally sign their reviews using public-key cryptography (http://en.wikipedia.org/wiki/Public-key_cryptography). The idea of digitally signed public reviews has been developed in the gpeerreview project (http://code.google.com/p/gpeerreview/), where further discussion and a basic software tool implementing this function can be found.
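To make the public-key signing idea concrete, here is a toy textbook-RSA sketch: the reviewer signs a hash of the review text with a private exponent, and anyone can verify the signature with the public key. This is for illustration only; it is not the gpeerreview tool's implementation, and a real system would use a vetted cryptographic library and much larger keys.

```python
import hashlib

# Toy textbook-RSA key pair -- illustration only, far too small for real use.
p, q = 61, 53
n = p * q   # public modulus (3233)
e = 17      # public exponent
d = 2753    # private exponent: e * d == 1 (mod (p-1)*(q-1))

def review_digest(review_text: str) -> int:
    """Hash the review text and reduce the digest into the toy key's range."""
    return int.from_bytes(hashlib.sha256(review_text.encode()).digest(), "big") % n

def sign(review_text: str) -> int:
    """The reviewer signs the digest with the private exponent."""
    return pow(review_digest(review_text), d, n)

def verify(review_text: str, signature: int) -> bool:
    """Anyone can check the signature using only the public key (n, e)."""
    return pow(signature, e, n) == review_digest(review_text)
```

A tampered review text would (with overwhelming probability) change the digest and cause `verify` to fail, which is what makes the signature an authentication of both author and content.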
Paper selection by arbitrary evaluation functions: The necessary selection of papers for reading can be based on the reviews and their associated numerical judgments. Any reader can define a paper selection function based on content and quality criteria and will automatically be informed about the papers best conforming to his or her criteria. The evaluation function could, for example, exclude anonymous reviews, exclude certain reviewers, or weight evidence for central claims over the potential impact of the results.
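Such a personal evaluation function might be sketched as follows. The function names, rating dimensions, and default weights here are hypothetical assumptions chosen to mirror the examples in the text (excluding anonymous reviews, excluding certain reviewers, weighting evidence over impact), not part of any existing system.

```python
def personal_score(reviews, weights=None, include_anonymous=False, excluded_reviewers=()):
    """Score one paper from its public reviews under a reader's own criteria.

    Each review is a dict such as
    {"reviewer": "A. Smith" or None, "ratings": {"evidence": 8, "impact": 3}}.
    """
    # Default: weight evidence for central claims over potential impact (hypothetical).
    weights = weights or {"evidence": 2.0, "impact": 0.5}
    total, n = 0.0, 0
    for r in reviews:
        if r["reviewer"] is None and not include_anonymous:
            continue  # this reader chose to exclude anonymous reviews
        if r["reviewer"] in excluded_reviewers:
            continue  # this reader chose to exclude certain reviewers
        total += sum(w * r["ratings"].get(dim, 0.0) for dim, w in weights.items())
        n += 1
    return total / n if n else 0.0

def rank_papers(papers_to_reviews, **criteria):
    """Return paper identifiers sorted best-first by the reader's personal score."""
    return sorted(papers_to_reviews,
                  key=lambda p: personal_score(papers_to_reviews[p], **criteria),
                  reverse=True)
```

A webportal could run `rank_papers` over the whole repository with its community's chosen weights, automatically surfacing the papers best conforming to those criteria.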
Webportals as entry points to literature: Webportals can define such evaluation functions for subcommunities – for scientists too busy (or too lazy) to define their own. Such webportals would provide generalized access to the literature that transcends all journals. A webportal can be established cheaply by individuals or larger organizations that share a common set of criteria for paper prioritization.
(5) What will open post-publication peer review achieve?
Open post-publication peer review provides a general, transparent, community-controlled, and publicly available mechanism for review, evaluation, and prioritization of the scientific literature.
Reviews as open letters to the community: Reviews will no longer be secretive communications deciding about publication. They will be open letters to the community with numerical quality ratings that will influence paper visibility on webportals. Open post-publication review will build on the current system by providing a forum for comments and evaluations of papers.
Open posting of private pre-publication reviews: It will allow the original pre-publication reviewers of a paper to make their reviews public, so that their work in reviewing the paper can be of benefit to the readers of the paper and to the community at large.
Community control of the critical function of paper evaluation: Open post-publication peer review allows the scientific community to organize the evaluation of papers, thus taking control of this critical function, which is currently administered by publishers.
Improving evaluation quality: The quality of the evaluative signals will be improved by post-publication review for a number of reasons:
(1) Since reviews are open letters to the community, their power depends on how compelling they are to the community. (In the present system, a scientist can reject a paper with no good arguments at all – for a high-impact journal, perhaps by conceding that the paper is good but claiming that the finding is not sufficiently surprising.)
(2) Many reviews will be signed, so the reviewer’s reputation is on the line: he or she will want to look smart and reasonable. (Anonymous reviews can be down-weighted in assessment functions if they are thought to be less reliable.)
(3) Important papers will accumulate more reviews over time as the review phase is open ended, thus providing an increasingly reliable evaluative signal.
Eventually, journal prestige will no longer be needed as an evaluative signal.
Merging review and reception: Currently review is a time-limited pre-publication process and reception of a paper by the community occurs later and over a much longer period, providing a very delayed – but ultimately important – evaluative signal: the number of citations a paper receives. Open post-publication peer review will remove the artificial and unnecessary separation of review and reception. It will provide for a single integrated process of open-ended reception and review of each paper by the community.
(6) Transition to a completely open system for scientific publishing
Free publishing: Once open post-publication peer review provides the critical evaluation function, papers themselves will no longer strictly need journals in order to become part of the scientific literature. They can be published like the reviews: as digitally signed documents that are instantly publicly available. Post-publication review will provide evaluative information for any sufficiently important publication.
Instant publishing: With post-publication review in place, there is no strong argument for pre-publication review. Publication on the internet can, thus, be instant and reviews will follow as part of the integrated post-publication process of reception and evaluation.
Peer-to-peer editor choice: After publication, the author asks a senior scientist in his or her field to edit the paper. If the senior scientist accepts, an acknowledgment of his or her role as editor will be added to the paper. The editor’s job is to select two to four reviewers and to email them with the request to publicly review the paper. If they decline, the editor has to find replacements.
However, anyone else is allowed to review the paper as well. In particular, the author may also inform other scientists of the publication and ask them to review the paper. Author- and editor-requested reviews will be marked as such. Requested as well as unrequested reviews can be signed or unsigned.
Editors must not have been at the same institution as, or on any paper with, the authors. Reciprocal or within-clique review editing is monitored and discouraged. Such information will, in any case, be publicly available after the fact and may be used to weight the reviews in any automatic quality assessment.
Reviewing-activity statistics: Reviewers must be registered as professional scientists, whether they sign a given review or not. The level of reviewing activity of a scientist is public information. Similarly, how many reviews each scientist has written anonymously is public information. (Which papers a scientist reviewed anonymously is, of course, not public.) In general, each scientist is expected to write about as many reviews as he or she receives.
The process of reception and review: Good papers will accumulate many positive judgments over time and bubble up in the process – some after four reviews and two weeks past publication, others after years.
High-prestige publications such as Nature and Science could take note of independently published studies that have turned out to be important. Based on the broader and more reliable evidence of public review, they could decide to showcase, i.e. to republish, these papers – perhaps in a modified version suited to their more general audience. These high-prestige journals would thus benefit from the greater quality of the broader and deeper public evaluation of the papers, which would contribute to the quality of their product.
Access to the literature: Webportals will serve as entry points to the literature, analyzing the numerical judgments by different criteria of quality and content (including the use of meta-information about the scientists that submitted the reviews). There will be many competing definitions of quality – a unique one for each webportal or each individual defining his or her own paper evaluation function.
Revisions: If the weight of the criticism in the accumulated reviews and the importance of the paper justify it, the authors have the option to revise their paper. The revision will then be the first thing the reader sees when accessing the paper and the authors’ response to the reviews may render the criticism obsolete. However, the history of revisions of the paper, starting with the originally published version, will always be available – along with the complete history of reviews and author responses.