ePrint archives and open peer review:

lessons from the biomedical journals for the research outputs database of the NHS R&D programme

Douglas Carnall, Research fellow,

Jeremy Wyatt, Senior fellow

Health Knowledge Management Programme,
School of Public Policy, University College London
29 Tavistock Square, London, WC1H 9EZ
Tel 0171 504 4988
Fax 0171 504 4998
 
 

Introduction

Our original brief was to produce a prototype database of NHS R&D results in the form of summaries, which could be used by everyone in South Thames interested in disseminating the results.

When we started the project we were uncertain about the preferences and information-seeking behaviour of senior NHS decision makers, and we focussed on determining these. Now that we are clear that internet technologies are the way forward,1 we have been able to focus on a single coherent technical solution: a relational database with a web interface.
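By way of illustration, a minimal sketch of the kind of structure we have in mind follows; the table and column names are purely illustrative assumptions, not a final schema.

    # A relational store behind a web front end: one table registers
    # projects, another holds the summaries of their results.
    import sqlite3

    conn = sqlite3.connect("nhs_rd_outputs.db")   # hypothetical filename
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS project (
        project_id   INTEGER PRIMARY KEY,
        title        TEXT NOT NULL,
        investigator TEXT NOT NULL,    -- principal investigator
        email        TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS summary (
        summary_id   INTEGER PRIMARY KEY,
        project_id   INTEGER REFERENCES project(project_id),
        body         TEXT NOT NULL,    -- the structured summary of results
        mesh_terms   TEXT,             -- keywords to aid retrieval
        entered_on   DATE
    );
    """)
    conn.commit()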

Our original brief demanded a succession strategy that would leave an information system capable of being run by a single graduate administrator with no specialist knowledge. We continue to work to this brief.

Registering studies is one thing, publishing results another

Publishing the results of research is a distinct activity from publishing a register of work in progress.2 Once complete, the study itself may be flawed, or a good study may have been inadequately reported. Sorting this out is a higher-order intellectual task which requires both specialist knowledge of the field and considerable skill in written communication. The journals that claim such expertise do so using a process known as peer review.

Peer review of results prior to publication is a well-worn tradition dating back to the 17th century,3 but it increasingly has a scientific basis for practice.4 However, all of the studies currently extant reflect practice in existing journals. In essence ours is a scientific publishing project, and although there are many similarities with the activity of publishing a journal, there are also important differences.

Perhaps the most important difference is that our database is new: a tabula rasa that enables us to adopt the best practices of others, and to learn from their mistakes.

Like other publications, our database will serve important institutional needs for its owner. The NHS spends £75m annually on 'Budget II' research and development projects, and while principal investigators have an ethical and contractual obligation to write up the results of their work for publication in a peer-reviewed journal, there is no corresponding compulsion on the editors of journals to accept it. One important function of the database will be to act as an index of this "grey literature"--work that is hard to access because it exists only in the form of poorly disseminated departmental reports, or is published in a wide range of possibly obscure journals. Because NHS R&D projects are funded on the basis that they will answer questions of specific relevance to the NHS, the database should also prove a valuable information resource for professionals practising in the NHS. We have conceived the database as a brand on the desktop, alongside such other sources as Medline, Embase, and the Cochrane database of reviews, accessed through the National Electronic Library for Health,5 the NHSnet,1 or the internet.

We are keen to base the processes that will develop the database on the best evidence available; a review of the literature on peer review and electronic publishing of medical research results is a good place to start.

Lessons from the journalologists

The scientific study of journals' own processes is a rather new activity, and most of the medical journals where such work has been done have been working to a rather different agenda from our own. Owned either by commercial publishing houses or by established professional organisations, they compete for readership, influence, and citation ratings. To achieve this, journal editors are charged with constructing systems that screen, filter, evaluate, and rank reports so that the "best" may be published. Existing journals, for better or worse, have historical processes which determine their character and content. The research done so far has examined elements of these established processes,4 though the advent of electronic publishing has encouraged fresh initiatives.6

Authors' careers depend both on the quantity of publications they can boast on their curricula vitae and on their quality, crudely judged by how often the author succeeds in placing articles in journals with a high impact factor.

This method of assessing scientific performance introduces perverse incentives into the communication process. It is partly responsible for the explosion of biomedical information (ref Altman), and for a continuum of peccadillos ranging through gift authorship, salami publishing, and data torture and bending, to elaborate falsification of studies and patient results. Peer review plainly does not prevent any of these undesirable behaviours, though the editors of journals have an important role in creating the conditions in which such problems are likely to be detected and taken seriously.7 Systematic audit of raw data has been proposed but not implemented (Rennie).

Any electronic publishing project must address the concern that authors may jeopardise their chances of publishing a full paper by prior publication on the web. Recent debate in the BMJ's correspondence pages suggests that although journals might like to control "ePrints,"8 such a strategy will be impossible to police.9 In any case, conference proceedings and abstracts published before a full paper have never been held to jeopardise this process.

It is perhaps no coincidence that the journal with the most restrictive policy on preprints is also the journal with the highest impact factor in medicine: the New England Journal of Medicine. This may be because it feels such restrictive strategies are necessary to defend its number one position; or perhaps, because it is number one, its editors reason that authors will put up with more in exchange for getting into its pages. The NEJM editor who coined the eponymous "Ingelfinger rule" was quite clear at its outset that it was being declared for commercial reasons.10 The old argument that "pre-publication release of articles must be avoided to insure that the public is not misled" might be seen by the cynic as post hoc expediency in the increasingly competitive scramble for media attention.

 

Peer review is also expensive. In a piece of work for the Leicester Primary Communications Research Centre, Gordon estimated that, if each referee took two hours to review an article, the cost per article would have been £5 in 1978, and double that if salary costs to the host institution were accounted for. Relman estimated that peer reviewing the New England Journal of Medicine's 2,500 articles cost two person-years of editorial time and seven person-years of reviewer time each year, pricing the process at $100,000, or $40/article, in 1979. Lock's 1984 estimate of the BMJ's cost of processing 4,431 articles, using two full-time secretaries and two full-time editors, plus fees for advisers, was £211,300, or £48/article. (The latter figure certainly does not include the sub-editing of accepted papers for the page.)

Source   Year   Cost/paper   RPI adjusted to 1998

Gordon   1978   £5           £16.60
Relman   1979   $40          £91.10*
Lock     1984   £48          £88.16

*£1 = $1.60

Table 1: Costs of peer review,3 adjusted using the UK Retail Price Index 11
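As a worked example of the adjustment behind Table 1, the sketch below restates Relman's 1979 figure in 1998 pounds; the RPI multiplier is backed out of the table itself rather than taken from an official index series.

    # Convert a dollar cost to pounds, then inflate to 1998 prices.
    USD_PER_GBP = 1.60        # exchange rate from the table footnote

    def adjust(nominal_gbp: float, rpi_multiplier: float) -> float:
        """Restate a historical cost in 1998 pounds."""
        return nominal_gbp * rpi_multiplier

    # $40/article in 1979 is £25 at $1.60/£; the implied 1979->1998
    # multiplier of about 3.64 then gives roughly the table's £91.10.
    print(f"£{adjust(40 / USD_PER_GBP, 3.64):.2f}")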

Critics of peer review also argue that it slows the flow of scientific information, is likely to reject papers that are truly original, and exposes authors to the risk that reviewers will steal their ideas.

 

The foregoing arguments are well rehearsed in the literature, but any change has been incremental and slight. Like the researchers who send them material, the journals have concentrated on empirical work that examines parts of the system, rather than stepping back and considering the objectives of publication with a fresh eye.

The empirical work on peer review has weakened its position as a touchstone of scientific rectitude. Early adopters of the internet (LaPorte) saw a clear opportunity for revolution, in which authors would speak to readers untrammelled by the hidebound conventions of a conservative press, but enthusiasm for this has lessened as users experience the difficult reality of finding high-quality information on the internet. Serious information users need editors, whatever medium they work in. Still, the internet's undoubted threat to traditional publications has strengthened the hand of those who would reform the system. Ironically, one important innovation, open peer review, has been adopted for ethical reasons rather than as a consequence of any empirical work.12

 

Implications for the NHS R&D database

 

The design of the prototype database means that any web user can access the summaries of results. The web is also the means by which results are entered. This approach incorporates the presumption that principal investigators can be relied on to report a summary of their results without editorial intervention. Passwords for write-access to the database are issued by email with minimal need for human intervention (see Current Model). Although there is no external editing role, authors are accountable for their own content, and the design of the projects has already undergone peer review during the grant application process. The administrator checks the summaries for face validity and prompts for missing information, but does not pass editorial judgement on the results.
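A minimal sketch of that password flow follows; the helper name, addresses, and mail relay are our assumptions, not the working system.

    # Generate a write-access password and email it to the principal
    # investigator, with no human intervention required.
    import secrets
    import smtplib
    from email.message import EmailMessage

    def issue_write_access(investigator_email: str) -> str:
        password = secrets.token_urlsafe(8)   # random and hard to guess
        msg = EmailMessage()
        msg["From"] = "admin@example.nhs.uk"  # placeholder address
        msg["To"] = investigator_email
        msg["Subject"] = "Write access to the R&D outputs database"
        msg.set_content(f"Your password for entering results: {password}")
        with smtplib.SMTP("localhost") as smtp:   # assumes a local relay
            smtp.send_message(msg)
        return password   # to be stored (hashed) against the record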

The authors will be supported by instructions incorporated in the website, and by links to useful information (e.g. a MeSH thesaurus) as they enter their summary.

This system allows ready access to the results without delay, but relies on users being able to critically appraise the material they find there. Links will be provided to sites with information on critical appraisal. Where possible, the system will link from the summary to a full report held by the authors.

The system has the great virtues of simplicity and transparency, and its function can be explained in a single sentence. ("It is a database linked to the web that lets principal investigators enter their results, and anyone read them.")

 

Problems of the simple model... and some solutions

This early model can be criticised because it does not include any process designed to improve the quality of the reported results. Additionally, it has the potential to cause political embarrassment to ministers if research results are entered that contradict the policy of the administration of the day.

Various options are presented that address these difficulties.

Proposal 1: Building in delay to avoid ministerial embarrassment

This retains the existing database structure, but builds in a degree of delay to allow the Department of Health to prepare responses to research results likely to prove politically difficult. The authors have write-access to a proxy database. After their results are entered, the administrator screens them for likely political sensitivity and refers those which seem potentially problematic to the appropriate parties. After screening, the results are passed to the live database, which is fully searchable in the public domain. This is, of course, a delay to, rather than censorship of, the results.
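A minimal sketch of this two-stage flow follows; the embargo period and field names are illustrative assumptions.

    # Summaries wait in a proxy store until screened; anything flagged
    # as sensitive is referred on, everything else moves to the live,
    # publicly searchable database once the delay has elapsed.
    from datetime import timedelta

    SCREENING_DELAY = timedelta(days=28)   # assumed embargo period

    def release_screened(proxy_db, live_db, today, is_sensitive):
        for summary in list(proxy_db):
            if is_sensitive(summary):
                summary["referred"] = True   # held while a response is prepared
            elif today - summary["entered_on"] >= SCREENING_DELAY:
                proxy_db.remove(summary)
                live_db.append(summary)      # delayed, never censored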

Proposal 2: Adding readers' comments

It is the authors who have made the primary observations on which the work relies, so the proper place of peer review is in ensuring that the conclusions drawn from those observations are justified, and that the report as a whole communicates effectively. This process is partly amenable to a checklist approach: for example, the CONSORT guidelines have improved the reporting of important methodological points in accounts of randomised controlled trials.13 If such points were absent from the original submission, ensuring that they were fully addressed before publication would be an expert task. The structure of proposal 2 avoids this difficulty by maintaining the original submissions of authors and attaching the comments of readers to them. It envisages a system in which two expert peer reviewers are invited to be amongst the first to read and comment on a summary, doing so with access to a fuller account of the results. All of the comments received are searchable and viewable by all users. This approach has the disadvantage that a summary receiving a large number of responses might rapidly become unwieldy to read; users may yearn for an editor to boil it down to something manageable once more.
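A minimal sketch of this data model follows, with illustrative class and field names: the original submission is never rewritten, and comments simply accumulate alongside it.

    # Reviewer and reader comments are attached to, never merged into,
    # the author's submission; all comments are searchable by any user.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Comment:
        commenter: str          # open review: every commenter is named
        text: str
        invited: bool = False   # True for the two invited expert reviewers

    @dataclass
    class Summary:
        author: str
        body: str                                  # kept intact as submitted
        comments: List[Comment] = field(default_factory=list)

        def add_comment(self, comment: Comment) -> None:
            self.comments.append(comment)

        def search_comments(self, term: str) -> List[Comment]:
            return [c for c in self.comments if term.lower() in c.text.lower()]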

This and other options assume that both authors and peer reviewers are happy to work within an open system: that is, the identities of author and reviewer are known to each other. Arguments against open peer review centre on the potential for reviewers to make enemies with negative reports, which is of particular concern when the reviewer is junior to the authors (and may later seek a tenured post or grant from them), or in small fields of enquiry.14 It is also argued that open reports are harder to write and interpret because of the necessity of being constructive and taking emotional as well as intellectual factors into account. In the context of the NHS summaries database this is not necessarily a disadvantage: publication is assumed, so the task of reviewers and principal investigators shifts to improving the work as much as possible before publication takes place.

Journals with large established networks of reviewers may have some difficulty in introducing open review, but if it is declared at the outset it should not prove an obstacle.

 

Proposal 3: Adding electronic peer review

This is an electronic representation of the peer review process: it mirrors the process in a medical journal, though constructive criticism and revision, rather than rejection, will be the general rule. The most demanding stage is when the authors must incorporate the suggestions and revisions of peer reviewers into their manuscript before resubmitting it. Addressing the multiple and sometimes conflicting demands of reviewers and editors is a challenging task. A revision history may be a useful adjunct to this process, so that the authors, editors, and users of the system can monitor how things have changed.
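A minimal sketch of such a revision history follows, with illustrative names; each resubmission is stored as a new version rather than overwriting the last, so the trail of changes stays visible.

    # Every resubmission appends a dated revision; nothing is overwritten,
    # so authors, editor, and users can monitor how the summary changed.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class Revision:
        body: str
        note: str       # e.g. "response to reviewer A's comments"
        when: datetime = field(default_factory=datetime.utcnow)

    @dataclass
    class ManuscriptHistory:
        revisions: List[Revision] = field(default_factory=list)

        def resubmit(self, body: str, note: str) -> None:
            self.revisions.append(Revision(body, note))

        def current(self) -> str:
            return self.revisions[-1].body if self.revisions else ""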

It is likely that this system would be challenging to run, and an expert (editor) would be necessary to arbitrate between the competing demands of the parties involved and the needs of the users.

Choosing between the extended proposals should be based on ethical and scientific considerations. All are straightforward to implement using the current software tools, though increasing complexity will demand more time and may have implications for succession costs.

 

1. Burns F. Information for Health. Leeds, NHS Executive, 1998.

2. National Research Register.

3. Lock S. A difficult balance: editorial peer review in medicine. 3rd impression. London: BMJ, 1991.

4. Fletcher RH, Fletcher SW. Evidence for the effectiveness of peer review. Sci Eng Ethics 1997;3:35-50.

5. Gray JM. National Electronic Library Colloquium: NHS Executive, 1998.

6. Bingham CM, Higgins G, Coleman R, Van Der Weyden MB. The Medical Journal of Australia internet peer review study. Lancet 1998;352:441-445.

7. Lock S. Fraud and the editor. In: Lock S, Wells F, editors. Fraud and misconduct in scientific research. London: BMJ Publishing Group, 1993.

8. Delamothe T. Electronic preprints: what should the BMJ do? BMJ 1998;316:794-5.

9. Horton R. Having electronic preprints is logical. BMJ 1998;316:1907.

10. Altman L. The Ingelfinger Rule, embargoes and peer review. Lancet 1996;347:1382-6, 1459-63.

11. Anonymous. UK Retail Price Index, 1998.

12. Smith R. Peer review: reform or revolution? BMJ 1997;315:759-60.

13. Has the randomised controlled trial literature improved after CONSORT?; 1998.

14. Fabiato A. Anonymity of reviewers. Cardiovascular Research 1994;28:1134-9.

 

Thanks to WAME for pulling so much useful information together, and to the BMA library for obtaining the paper.