Making sense of reporting guidelines

Talks and slides discussing how editors can improve the quality of reporting in their journals, and offering practical steps for making it happen, are now available on the EQUATOR website.

The talks were presented in the special session “Reporting guidelines: a tool to increase the quality of health research published in your journal” at the EASE Annual Meeting in Split in June 2014.

 

EASE General Assembly and Conference, Split 2014 – Reporting guidelines session

Outline:

Substantial evidence continues to demonstrate widespread, serious deficiencies in research publications. Journal editors have the power to considerably improve the reporting quality of the research papers they publish.

This session will summarise major deficiencies in health research publications, give an overview of available reporting guidelines that aid the completeness and transparency of research papers, and discuss practical aspects of implementing these guidelines in journals.

 

“Reporting guidelines: a tool to increase the quality of health research published in your journal” 

Speakers:

Prof Doug Altman, Director, Centre for Statistics in Medicine, University of Oxford, UK and Chair of the EQUATOR Network Steering Group
Dr Jason Roberts, Managing Editor, Headache: The Journal of Head and Face Pain, US
Dr Iveta Simera, Head of Programme Development, EQUATOR Network, Centre for Statistics in Medicine, University of Oxford, UK

 

Programme and slides:

Welcome, introducing the session (Iveta Simera)
Deficiencies limiting reliability and usability of published research papers (PDF) (Doug Altman)
Making sense of reporting guidelines (PDF) (Iveta Simera)
Towards the successful implementation of reporting guidelines at biomedical journals (PDF) (Jason Roberts)

 

Talk abstracts (PDF)
EASE 2014 Conference website

 

Listen to the plenary talk by Prof Doug Altman.

 

Declaration of transparency

A BMJ editorial published by D. Altman and D. Moher, two key leaders of the EQUATOR initiative, proposes that authors of research papers be asked to sign a declaration that their paper is not misleading.

The scientific community and the public at large deserve an accurate and complete record of research. However, there is considerable evidence that the research record is often manipulated for short-term gain but at the risk of harm to patients. The medical research community needs to implement changes to ensure that readers obtain the truth about all research, especially reports of randomised trials, which hold a special place in answering what works best for patients.
Journal editors can help by asking authors to sign a declaration of transparency:

Transparency declaration
The lead author* affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.
*The manuscript’s guarantor.

The BMJ and BMJ Open are leading the way by implementing this policy immediately. Widespread endorsement and implementation of a publication transparency declaration is one way to help get the maximum value from medical research. We encourage other journals to support the transparency declaration. Editors, please email Iveta Simera (iveta.simera@csm.ox.ac.uk) when you introduce this policy and we will add you to the list of supporting journals on the EQUATOR website (www.equator-network.org).

Reference: Altman DG, Moher D. Declaration of transparency for each research article. BMJ 2013;347:f4796 [view the free full text]

Supporting journals:

BMJ

BMJ Open

BJOG: An International Journal of Obstetrics and Gynaecology

Canadian Journal of Anesthesia

International Journal of Medical Students

NIHR Journals Library

Revista Española de Salud Pública

 

ANNUAL LECTURE 2014: Presented by Dr Drummond Rennie

The 6th EQUATOR Annual Lecture was presented by Dr Drummond Rennie, Adjunct Professor of Medicine, University of California San Francisco, and until recently Deputy Editor (West), JAMA.

 

“When Something Gets up Your Nose, Sneeze. How to improve the medical literature while getting educated by your friends.”

Date: Friday 16th May 2014
Venue: Hôpital Européen Georges Pompidou, Paris, France

 

The lecture was given during the meeting “Improving reporting to decrease the waste of research”.

 

Lecture outline:
We are moving through another period of radical change in the way scientists communicate with each other, but before we decide that all data and all analyses be instantaneously available, so that there are no journals, merely a continuous, undigested stream, somewhat like PubMed, and before we throw our promotions system under a bus, or make trial reports look like tax returns, we might consider how we got here, and in particular what’s happened over the past four decades.
Scientific papers are arguments, sometimes backed by adequate data appropriately analyzed, which the author uses to convince readers to accept his or her conclusions.  In the 1660s, investigators had to build up a system not merely to spread such information, but to agree on what the rules were for what constituted valid scientific facts, and to educate and persuade the wider public on how these facts differed from those derived from opinion or belief. Gradually, these rules for scientific arguments spread to the structure of articles, to help readers to know what to expect, and to make papers both shorter but more useful and complete.
I have been an editor of either the New England Journal of Medicine or JAMA for 37 years.  In this talk, I shall give a one-sided, solipsistic, autobiographical view of the last four decades, seen from the position of someone at the center of the world of clinical research.  The old problems with communication have not completely disappeared, and many new ones have emerged. Defining these and trying to manage them has produced a revolution, which I wish to describe. Many others played parts in this play, every one of them more important than my own, but my lengthy involvement in all of them, added to the vantage point of my job, enables me to link them usefully.
Within a year of arrival at the NEJM in 1977, I became closely involved in the first cases of gross scientific fraud I, or anyone else, had seen.  I learned that these cases wrecked careers, split institutions, and proved appallingly difficult to handle, given that no definition existed, nor were there any processes to follow, nor systems of adjudication. The injustice, delay and chaos that ensued angered me, and over the next two decades I and others worked hard to bring a workable, standard, routine system into existence and into US law.
However, while these spectacular cases brought humiliated research institutions to their administrative knees, I realized from the start that there was a far bigger ethical and practical issue with the quality of the literature, and that this was unlikely to be due to this newly discovered phenomenon, deliberate malicious research misconduct. What seemed almost laughable to me was the abysmal quality of a proportion of published research. In 1979, I wrote about it as an ethical problem, but it seemed impossible to separate the ethical (badly designed research must be unethical) from statistical and methodological bungling.
Since as an editor I spent so much time with peer review, it was clear that peer review was no bar to eventual publication. However, though it was probable that peer review was not the cause of the problem, perhaps it contained the solution. Impatient with the smug and evidence-free pronouncements of editors, I wanted a large amount of empirical research on the subject fast, so in 1986 I proposed and received JAMA’s backing for a conference to present research into peer review and related journal functions, of which almost none existed. The seven subsequent Peer Review Congresses, from 1989 to 2013, have stimulated a great deal of research but, above all, have served as a focus to bring like-minded people together.
They have had several striking consequences. First, investigation of peer review’s mechanics was easy and we soon had data about time, cost, perceived effectiveness and so on, but very few data on the cognitive processes of peer reviewers. Instead, large numbers of papers came in on the quality of the product, namely published articles. From the start, the Congresses were attended by thoughtful scientists like Iain Chalmers and Kay Dickersin who not only were worried about quality, but who had the tools and attracted the individuals to investigate. I discovered from their work that my own ill-focused worries had a basis in reality. Research focused on the quality of published articles, rare at the start, blossomed. Whatever specific items were examined, in whatever journals, the message invariably was the same: reporting was frequently inadequate – a depressing message for an editor whose journal was frequently at fault.
The development of the principles of structured reviews by Cindy Mulrow, and the widespread adoption of meta-analysis, were key to studying quality, and to the simultaneous development and expansion of the Cochrane Collaboration. With evidence from repeated demonstration of widespread biases and poor quality it became possible to make recommendations on reporting which, because they were evidence-based, would be credible.  I shall give a personal review of the formation of SORT (and why it failed), Asilomar and then CONSORT (and why it succeeded), as well as the vital role of David Moher and Doug Altman. The ‘sons of CONSORT’, STARD and so on, are proof of its appeal, and EQUATOR is the most striking evidence for this massive revolution.
Finally, I shall give a personal account of registering trials (2005). Once again, the evidence was around, plainly visible, but nothing happened until I, and many others, helped align the political stars, and legal threats made the pharmaceutical companies fall into line.  John Ioannidis and Steve Goodman are among many solving the challenges of vast databases, and of replication.
I was lucky enough to play a role in all these events, inspired by some very clever individuals. I could not predict any of this when I started as an editor. My message is simple: if you find something that gets up your nose: sneeze, and go on sneezing until you’ve cleared the obstruction.


Short biography:
Drummond Rennie, MD, MACP, FRCP. I was educated at Cambridge University and Guy’s Hospital Medical School, London, where I carried out research into cyanotic congenital heart disease and received my Cambridge MD for this. Having been Deputy Editor of the New England Journal of Medicine when at Harvard, I was until recently Deputy Editor (West), JAMA, and an adjunct Professor of Medicine, University of California San Francisco. I am a nephrologist and have conducted numerous investigations in the Alps, the Andes, the Himalayas, the Yukon and Alaska on the pathophysiology of hypoxia.

I originated and directed all seven International Congresses on Peer Review in Biomedical Publication, dedicated to the presentation of research into the process of peer review of manuscripts and grants, and the quality of the published literature. I chaired a multi-journal group researching interventions in peer review. I was co-director of the San Francisco Cochrane Center; I have served on the Proposal Review Advisory Team of the National Science Foundation; and I have been a peer reviewer for numerous journals. I have been president of the Council of Science Editors and president of the World Association of Medical Editors. I am a founder member of the CONSORT, QUOROM, MOOSE, STARD, and STROBE initiatives, among others. Until we delivered our report to the US Congress in 1995, I was a member of the Commission on Research Integrity to the Public Health Service.

I have formed a group to research the influence of money on the conduct and reporting of clinical research, and have been deeply involved with issues concerning distortion of the scientific record due to money, and due to intimidation of researchers. One of my chief interests is the transparent reporting of clinical research and improving both the clinical researcher’s understanding of influences upon this, and the general reader’s ability to dissect and draw valid conclusions from it. In this connection, I have co-edited the various articles and books in the series the “Users’ Guides to the Medical Literature” and the “Rational Clinical Examination”. I have also examined the issue of authorship of scientific articles, and the key role authorship plays in the life of scientists, and introduced changes to tighten responsibility on the part of authors. I received the 2009 AAAS Award for Scientific Freedom & Responsibility.

I am a member of the Alpine Club, the American Alpine Club and the Tobogganing Club of St Moritz. My life is equally divided between United Airlines and my home in a forest, a mile from Buncom, a ghost town in the mountains of Southern Oregon. For further details, see my obituary: Rennie D. The living, structured auto-obituary. Lancet. 1996;348:875-6.

D. Bishop: Data sharing: not easy but ultimately essential

The importance of making research data available has become very clear. However, the practical aspects of data sharing bring a number of challenges for scientists, including the need to ensure accurate and meaningful description of the data, the time needed to prepare data for sharing, and accessible storage.

Professor Dorothy Bishop shares her own experience of sharing her research data. She finds that, despite great efforts to be accurate, error is inevitable and unavoidable in science, however careful you try to be. The best way to flush out these errors is to make the data publicly available.

 

Data sharing: Exciting but scary

This blog was originally published on Dorothy Bishop’s personal blog and is reposted with permission. Monday, 26 May 2014.

Yesterday I did something I’ve never done before in many years of publishing. When I submitted a revised manuscript of a research report to a journal, I also posted the dataset on the web, together with the script I’d used to extract the summary results. It was exciting. It felt as if I was part of a scientific revolution that has been gathering pace over the past two or three years, which culminated in the adoption of a data policy by PLOS journals last February. This specified that authors were required to make the data underlying their scientific findings available publicly immediately upon publication of the article. As it happens, my paper is not submitted to a PLOS journal, and so I’m not obliged to do this, but I wanted to, having considered the pros and cons. My decision was also influenced by the Wellcome Trust, who fund my work and encourage data sharing.

The benefits are potentially huge. People usually think about the value to other researchers, who may be able to extract useful information from your data, and there’s no doubt this is a factor.  Particularly with large datasets, it’s often the case that researchers only use a subset of the data, and so valuable information is squandered and may be lost forever.  More than once I’ve had someone ask me for an old dataset, only to find it is inaccessible, because it was stored on a floppy disk or an ancient, non-networked computer and so is no longer readable.  Even if you think that you’ve extracted all you can from a dataset, it may still be worth preserving for potential inclusion in future meta-analyses.

Another value of open data is less often emphasised: when you share data you are forced to ensure it is accurate and properly documented. I enjoy data analysis, but I’m not naturally well-disciplined about keeping everything tidy and well-organised. I’ve been alarmed on occasion to return to a dataset and find I have no idea what some of the variables are, because I failed to document them properly.  If I know the world at large will see my dataset then I won’t want to be embarrassed by it, and so I will take more care to keep it neat and tidy with everything clearly labelled. This can only be good.
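As a purely illustrative sketch of what “properly documented” can look like in practice (hypothetical variable names; not Bishop’s actual workflow), one lightweight habit is to ship a machine-readable codebook alongside the data file, so every variable has a stated meaning, valid range, and missing-value code:

```python
import csv

# Hypothetical codebook: one row per variable in the shared dataset,
# recording its meaning, valid range/levels, and missing-value code.
codebook = [
    {"variable": "age_months",
     "description": "Child's age at testing",
     "range_or_levels": "24-120 (months)",
     "missing_code": "NA"},
    {"variable": "nwr_score",
     "description": "Nonword repetition raw score",
     "range_or_levels": "0-40",
     "missing_code": "9"},
]

# Write the codebook next to the data file so it travels with the data.
with open("codebook.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=codebook[0].keys())
    writer.writeheader()
    writer.writerows(codebook)
```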

But here’s the scary thing: data sharing exposes researchers to the risk of being found out to be sloppy or inaccurate. To my horror, shortly before I posted my dataset on the internet yesterday I found I’d made a mistake in the calculation of one of my variables. It was a silly error, caused by basing a computation on the wrong column of data. Fortunately, it did not have a serious effect on my paper, though I did have to go through redoing all the tables and making some changes to the text.  But it seemed like pure chance that I picked up on this error – I could very easily have posted the dataset on the internet with the error still there. And it was an error that would have been detected by anyone eagle-eyed enough to look at the numbers carefully.  Needless to say, I’m nervous that there may well be other errors in there that I did not pick up. But at least it’s not as bad as an apocryphal case of a distinguished research group whose dramatic (and published) results arose because someone forgot to designate 9 as a missing value code. When I heard about that I shuddered, as I could see how easily it could happen.
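To see how easily that last kind of error can slip through, here is a toy sketch (invented numbers, assuming the pandas library; not the apocryphal group’s actual data) contrasting a naive read of a file with one that declares 9 as the missing-value code:

```python
import io
import pandas as pd

# Toy data: scores on a 1-5 scale, where "9" is a (hypothetical)
# code meaning the score is missing.
raw = "participant,score\n1,3\n2,4\n3,9\n4,2\n5,9\n"

# Naive read: the 9s are treated as real scores and inflate the mean.
naive = pd.read_csv(io.StringIO(raw))
print(naive["score"].mean())   # 5.4 -- impossible on a 1-5 scale

# Careful read: declare 9 as a missing-value code so it is stored as NaN.
clean = pd.read_csv(io.StringIO(raw), na_values=[9])
print(clean["score"].mean())   # 3.0, computed over the three valid scores
```

The naive mean lands outside the valid range, which is exactly the kind of anomaly an eagle-eyed reader of a public dataset would spot.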

This is why Open Data is both important for science and difficult for scientists. In the past, I’ve found mistakes in my datasets, but this has been a private experience.  To date, as far as I am aware, no serious errors have got into my published papers – though I did have another close shave last year when I found a wrongly-reported set of means at the proofs stage, and there have been a couple of instances where minor errata have had to be published. But the one thing I’ve learned as I wiped the egg off my face is that error is inevitable and unavoidable, however careful you try to be. The best way to flush out these errors is to make the data public. This will inevitably lead to some embarrassment when mistakes are found, but at the end of the day, our goal must be to find out what is the case, rather than to save face.

I’m aware that not everyone agrees with me on this. There are concerns that open data sharing could lead to scientists getting scooped, will take up too much time, and could be used to impose ever more draconian regulation on beleaguered scientists: as DrugMonkey memorably put it:  “Data depository obsession gets us a little closer to home because the psychotics are the Open Access Eleventy waccaloons who, presumably, started out as nice, normal, reasonable scientists.” But I think this misses the point. DrugMonkey seems to think this is all about imposing regulations to prevent fraud and other dubious practices.  I don’t think this is so. The counter-arguments were well articulated in a blogpost by Tal Yarkoni. In brief, it’s about moving to a point where it is accepted practice to make data publicly available, to improve scientific transparency, accuracy and collaboration.

 

Dorothy Bishop, Professor of Developmental Neuropsychology and a Wellcome Principal Research Fellow, Department of Experimental Psychology, University of Oxford.
Dorothy Bishop’s blog page: http://deevybee.blogspot.co.uk/

 

Journals must adopt high methodological standards

A new editorial by Paul Glasziou, “The Role of Open Access in Reducing Waste in Medical Research”, published in PLoS Medicine discusses the important issue of minimising avoidable waste in health research in the context of open access journals.

P. Glasziou writes: “Open access will not in itself fix the problems of poor research question selection, poor study design, selective non-publication, or poor or biased reporting, but these can be ameliorated considerably through appropriate editorial policies and peer review processes. Open-access medical journals must maintain particularly high standards for these processes in order to avoid merely increasing access to a biased selection of (often flawed) research.”

There is a lesson here for all of us: all parties involved in medical research (including researchers, reviewers, journals, funders, and research and other professional organisations) must contribute to improving all stages of the research process, from planning, design, and conduct through to usable publication (for specific recommendations see http://researchwaste.net/).

Videos from Lancet/NIHR “Waste in Research” symposium available online

Videos from the Lancet/NIHR “Waste in Research” symposium are now available online.

Of particular relevance to EQUATOR is the presentation and paper by Paul Glasziou and colleagues on “Reducing waste from incomplete or unusable reports of biomedical research” (watch talk 6).

See also our related blog: Can librarians contribute to increasing value and reducing waste in medical research?

 

Wiley’s New Publication Ethics Guidelines

Wiley has published a newly revised and updated edition of its “Best Practice Guidelines on Publishing Ethics: A Publisher’s Perspective”.

The guidelines provide practical advice to editors on the major ethical principles of academic publishing, including a section on research reporting and reporting guidelines (section 4.6).

Access the full text from the Wiley website