Improving trial reporting: a Q and A with Isabelle Boutron and introduction to the COBWEB tool

Deputy Editor Claire Barnard talks to Isabelle Boutron about her research article published recently in BMC Medicine. Isabelle and her colleagues demonstrate that the COBWEB writing aid tool can improve the completeness of clinical trial reporting. Here, Prof Boutron answers our questions about the study and discusses the implications of the findings for improving the reporting of research.

Isabelle Boutron is Professor of Epidemiology at Paris Descartes University, a researcher at the INSERM – Sorbonne Paris Cité Epidemiology and Statistics Research Centre (UMR 1153) in the METHODS team, co-convenor of the Cochrane Bias Methods group and deputy director of the French EQUATOR centre.

Isabelle has published more than 100 peer-reviewed articles. Her research focuses mainly on randomized and non-randomized studies evaluating interventions, the methodological issues raised by evaluating non-pharmacologic treatments, research transparency, and the dissemination and interpretation of research results.

What are the problems associated with incomplete reporting of clinical research?

In 2009, Iain Chalmers and Paul Glasziou showed that at least 50% of research reports were not usable because of incomplete reporting. This waste in research has severe consequences for researchers, clinicians, decision makers and patients.

Incomplete reporting precludes advances in scientific knowledge and improvements in patient care and public health. In particular, it makes performing systematic reviews challenging or sometimes impossible.

A recent study showed that of all trials included in Cochrane systematic reviews, 41% did not report sufficient information to allow assessment of their risk of bias. Additionally, Chan et al. found that, per trial, a median of 50% of efficacy outcomes and 65% of harm outcomes were incompletely or selectively reported.

This selective reporting biases treatment effect estimates. Physicians also cannot implement the results of positive trials in clinical practice, because interventions are sufficiently described in less than half of published reports.

How do reporting guidelines help to address these problems?

Reporting guidelines such as the CONSORT statement have been developed to improve the completeness of reporting and reduce the waste arising from incomplete reporting. These guidelines aim to help authors, reviewers and editors make sure that a minimum set of information is reported.

These guidelines have been widely disseminated and are mainly implemented by editors when manuscripts are submitted or reviewed. Nevertheless, the adherence of authors to these guidelines remains low and the quality of reporting insufficient.

These implementation strategies intervene very late in the life span of a manuscript, after the main author has spent hours refining it and obtaining agreement for publication from all co-authors. Consequently, authors may fail to make important changes to their manuscript, including verifying that all key information is properly reported. Further, the checklist items may not be sufficiently explicit.

For example, the CONSORT item relevant to reporting a rehabilitation programme asks authors to describe: “The interventions for each group with sufficient details to allow replication, including how and when they were actually administered”.

Authors may not understand that they should report all of the following information:

  • the type of the intervention (name of the program)
  • the content of each session
  • whether the intervention was delivered to an individual or a group
  • whether the treatment was supervised
  • any instruments used to provide information (computers, tablets, smartphones, other)
  • the number and timing of sessions
  • the duration of each session
  • each main component of each session
  • the overall duration of the intervention
  • any procedures for tailoring the interventions to individual participants
  • any co-interventions permitted or restricted

The CONSORT “Explanation and Elaboration” document provides this information, but authors may be overwhelmed by this 28-page document, which mixes guidance on why each item is important and should be reported, statistics on inadequate reporting, and guidance on how to report each item with examples of adequate reporting.

We believe that we need to rethink the implementation of these guidelines and develop new strategies. In particular, we propose to: 1) intervene earlier, at the stage of the first draft of the manuscript; 2) provide more explicit, succinct guidance along with an example of adequate reporting; and 3) tailor the example to the context of the trial.

What is the COBWEB tool and how was it developed?

The COBWEB tool is an online writing aid for authors drafting the first version of their article. The principle is to provide a template shell in which each CONSORT item is broken down into the key elements that need to be reported, presented as several bullet points together with an example of adequate reporting.

Each template is followed by a large text box where authors write their text. Once all the text boxes addressing the CONSORT items have been completed, the tool generates a formatted Word document.
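
To make the principle concrete, here is a minimal sketch of the template-shell idea in Python. This is purely illustrative, not the actual COBWEB implementation: the item wording, bullet points, example text and function names are abridged assumptions, and the real tool runs online rather than at a console.

    # Minimal sketch of the COBWEB template-shell principle (illustrative
    # only, not the real COBWEB code). Requires: pip install python-docx
    from docx import Document

    # Each CONSORT item is broken down into the key elements to report
    # (bullet points) and paired with an example of adequate reporting.
    # The content below is abridged and hypothetical.
    TEMPLATE = [
        {
            "item": "Interventions",
            "bullets": [
                "Name/type of the intervention",
                "Content, number, timing and duration of sessions",
                "Individual or group delivery; supervision",
                "Tailoring procedures; co-interventions permitted or restricted",
            ],
            "example": "Participants attended twelve 60-minute supervised "
                       "group sessions over six weeks...",
        },
        # ...one entry per CONSORT item covered by the tool
    ]

    def collect_text(template):
        """Show the guidance for each item, then collect the author's text."""
        sections = []
        for entry in template:
            print(f"\n{entry['item']} -- report the following elements:")
            for bullet in entry["bullets"]:
                print(f"  - {bullet}")
            print(f"Example of adequate reporting: {entry['example']}")
            sections.append((entry["item"], input("Your text: ")))
        return sections

    def export_docx(sections, path="methods_draft.docx"):
        """Assemble the completed text boxes into a formatted Word document."""
        doc = Document()
        for heading, text in sections:
            doc.add_heading(heading, level=2)
            doc.add_paragraph(text)
        doc.save(path)

    if __name__ == "__main__":
        export_docx(collect_text(TEMPLATE))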

What did your study do and find?

We evaluated the impact of the tool on the completeness of reporting of two-arm, parallel-group randomized controlled trials evaluating pharmacologic and non-pharmacologic interventions. We performed a split-manuscript randomized controlled trial (RCT) involving 41 students.

Each participant was randomly allocated a different real RCT protocol. Over a four-hour period, they had to write six domains of the methods section (‘trial design’, ‘randomization’, ‘blinding’, ‘participants’, ‘interventions’ and ‘outcomes’) of a manuscript for that protocol.

They had access to the tool for three domains randomly selected out of the six. Our results revealed a large effect of the writing aid on the completeness of reporting.
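
The split-manuscript allocation can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions (participant labels and a fixed seed are hypothetical), not the trial’s actual randomization code: every participant writes all six domains, with the writing aid available for a random three, so each participant contributes to both arms.

    # Illustrative reconstruction of the split-manuscript allocation
    # (not the trial's actual randomization code).
    import random

    DOMAINS = ["trial design", "randomization", "blinding",
               "participants", "interventions", "outcomes"]

    def allocate(participants, seed=2015):
        """Randomly pick 3 of the 6 domains per participant to be written
        with the COBWEB tool; the remaining 3 are written without it."""
        rng = random.Random(seed)  # fixed (hypothetical) seed for reproducibility
        schedule = {}
        for p in participants:
            with_tool = set(rng.sample(DOMAINS, 3))
            schedule[p] = {d: ("tool" if d in with_tool else "control")
                           for d in DOMAINS}
        return schedule

    # Example: 41 students, as in the study (labels are hypothetical)
    schedule = allocate([f"participant_{i:02d}" for i in range(1, 42)])
    print(schedule["participant_01"])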

Could you describe the implications of your study for authors, reviewers and editors?

This study has two main implications. First, it shows that we need to rethink the implementation of reporting guidelines and develop new models. Second, it demonstrates that it is both feasible and necessary to evaluate such strategies with experimental designs that provide a high level of evidence; in fact, most interventions intended to improve the quality of reporting have been assessed only in before-and-after studies.

How will you promote use of the tool among researchers writing up clinical trials?

The tool will be freely available, with a link from the EQUATOR website. We will involve different stakeholders such as editors, funders and researchers to promote its use. In particular, we will contact academic funders and invite them to send the tool to all investigators who have recently completed their trials. The tool will also be used during the training sessions organized by EQUATOR.

Do you think there will be any barriers to its uptake?

We are not aware of any important barriers to its uptake. However, we will conduct qualitative studies to identify possible barriers and refine the tool accordingly.

What further work should be done to build upon the COBWEB tool?

This is a proof-of-concept study providing very promising results for authors, reviewers and editors. COBWEB has so far been developed for six domains: ‘trial design’, ‘randomization’, ‘blinding’, ‘participants’, ‘interventions’ and ‘outcomes’, covering the main CONSORT statement and its extension for non-pharmacologic treatments.

We are extending the tool to cover all items of the main CONSORT statement, along with those of the other CONSORT extensions. The principle of the COBWEB tool could also be applied to reporting guidelines for other types of studies, such as epidemiological studies, diagnostic studies and systematic reviews.

This blog was written by Claire Barnard, Deputy Editor, BMC Medicine at BioMed Central. It has been republished with permission. The original blog is available at the BioMed Central website.

Access the COBWEB Tool
The COBWEB tool can be accessed at: http://cochrane.fr/cobweb/
(Please note that when prompted to ‘choose an ID’ you need to create your own ID to log in.)

EQUATOR Publication School 2017: 19-23 June, Oxford, UK

UK EQUATOR Centre’s flagship PUBLICATION SCHOOL
St Catherine’s College, Oxford, 19-23 June 2017

Do you want to get published, and be praised for it?

Do you want your organisation to be recognised for its excellent publication record?

Do you want to make a real difference with your research?

Then this is the course for you!  

Learn the secrets of success in planning, writing, publishing, and disseminating your research from the best in the business.

Register here | Earlybird rates until 31 March

“Thank you for the inspiration and energy that you have given me to go back home to my students and colleagues and tell them again and again about EQUATOR”

Many published research articles are reported badly. They often provide insufficient, misleading, or ambiguous information, and so their usefulness is severely compromised.

This lively and practical course is designed to build your skills and confidence to achieve success in planning, writing, publishing, and disseminating your research through traditional journals and other publication and media channels. It will cover:

  • How to write the key sections of your research article, including the methods, analysis and results, introduction, discussion, title, and abstract
  • How to make appropriate and optimal use of reporting guidelines like CONSORT, STROBE, and PRISMA
  • The importance of writing and publishing protocols using reporting guidelines such as SPIRIT and PRISMA-P
  • How to target the right journal for your research and navigate different editorial systems
  • How to deal with peer review comments and constructively peer review the work of others
  • How to write for and communicate with healthcare professionals and the public, and make the most of media opportunities

COURSE TUTORS INCLUDE
Dr Elizabeth Wager, Publications Consultant, Sideview
Prof Doug Altman, Director, UK EQUATOR Centre
Dr Iveta Simera, Deputy Director, UK EQUATOR Centre
Prof Gary Collins, Deputy Director, Centre for Statistics in Medicine
Domhnall MacAuley, Consultant-Associate Editor with the Canadian Medical Association Journal and PLOS Medicine

The programme also includes contributions from invited guest speakers, including statisticians, journal editors, publishers, and science communicators.

Find out about Publication School 2016 by reading a summary of the week, following #EQPubSchool on Twitter, and watching Domhnall MacAuley talk on YouTube about the life of a journal editor.

Please go to our Eventbrite page to register and secure your place on the course.

Earlybird registration: Bookings before 1 April 2017

  • Without accommodation – includes refreshments, lunches and conference dinner: £1,200
  • Including five nights (Sun-Thur) ensuite accommodation with breakfast and evening meal at St Catherine’s College and refreshments, lunches and conference dinner: £1,500

Standard registration: Bookings from 1 April 2017

  • Without accommodation – includes refreshments, lunches and conference dinner: £1,350
  • Including five nights (Sun-Thur) ensuite accommodation with breakfast and evening meal at St Catherine’s College and refreshments, lunches and conference dinner: £1,650

We cannot issue invoices for the course. However, we can issue a receipt for you to claim back from your employer or other funding organisation.

We may be able to offer a discount for participants from low- and middle-income countries (LMICs). Please contact Caroline Struthers (caroline.struthers@csm.ox.ac.uk) if you’d like to discuss this, or for any more information about the course.

The Manila Declaration: a significant step in the campaign to make health information available to all

The Manila Declaration on the Availability and Use of Health Research Information in and for Low- and Middle-Income Countries in the Asia Pacific Region was launched on 26 August 2015. It was agreed at the joint conference of APAME (Asia Pacific Association of Medical Editors) and the COHRED (Council on Health Research for Development) Global Forum on Research and Innovation in Health (Forum 2015).

The declaration follows discussions on access to health research held on the HIFA forums over the past five weeks, sponsored by The Lancet, COHRED and APAME. HIFA members’ ideas and observations have been very important in informing its drafting.

Read more about HIFA’s role in promoting the availability of health research.

The declaration has been published concurrently by journals linked to APAME, and is indexed in the Index Medicus of the South East Asia Region (IMSEAR) and the Western Pacific Region Index Medicus (WPRIM).

Read the Manila declaration in full.

The Eighth International Congress on Peer Review and Biomedical Publication, Chicago, 10-12 September 2017

The Eighth International Congress on Peer Review and Biomedical Publication will feature three days of original research.  If you haven’t already done so, start your research now!

The Congress will be held September 10-12, 2017, in Chicago, Illinois, USA. As with the previous Congresses, our aim is to improve the quality and credibility of biomedical peer review and publication and to help advance efficiency, effectiveness, and equity in the dissemination of biomedical information throughout the world.

First EQUATOR Publication School a resounding success

The first cohort of the EQUATOR Publication School (6-10 July 2015) had goals many researchers can identify with: develop confidence in writing, fill in the gaps in a self-taught writer’s training, polish skills so as best to pass them on to students, and write faster and better. The Publication School week was spent on every aspect of writing and publication, from planning the paper to writing a press release. By the end, participants were saying:

This has been a great week, I’ve really enjoyed it. I’ve been writing papers before now, but I’m even more skilled now. I’m more aware now, more secure about writing the correct things and reporting guidelines.

I’ll absolutely write better articles. I’ll share this with all my colleagues. We do the best we can, but we can do better.

My action goals from the course are to take CONSORT into account when writing, go on a statistics course, get on Twitter, and talk to other medical writers at my firm about using reporting guidelines!

It’s been a great week. I’ve learnt so much!

Intrigued by what you missed? Keep reading for some of the highlights and tips and tricks for writing articles!

Write a paper in a week
Our participants wrote research papers in groups, using a published protocol as inspiration. Each day focused on an aspect of paper writing and common errors to avoid. Liz Wager of Sideview took us through general planning, methods, results, introduction, discussion, and abstract writing. Using this writing order helps to refine a paper’s focus.

Good style is clear thoughts, short words, and short sentences.
Liz Wager

The papers followed Liz’s ‘hourglass’ structure, with a broad focus in the introduction and conclusion, but a narrow focus in the middle with the results. Did you know that you can write an interesting, informative introduction in 150 words? Just decide on your target journal and audience, and tailor the information to their requirements and knowledge, avoiding the unnecessary full literature review.
One participant summarized Liz’s advice perfectly in their action goal:

I’m going to change the way I prepare the introduction, especially the first paragraph. Start with how it impacts the audience! I’m also going to change how I structure the discussion. I usually put the limitations right at the end, but I like this idea of moving them to the second paragraph and ending the paper on a positive note.

Producing papers fit for a statistics reviewer
What set Publication School apart from other writing courses was its emphasis on correct reporting for medical research. EQUATOR’s Doug Altman, Iveta Simera, and Gary Collins introduced reporting guidelines, which take all the guesswork out of what details to include in a paper.

Reporting guidelines are like shopping lists, they stop us from forgetting important details!
Iveta Simera

Your statistics reviewer, reader, and systematic reviewer will all appreciate your efforts if you combine guidelines with the exhortation to ‘report everything needed for replication’. Many of our participants had experience as systematic reviewers and readers, and it was gratifying when presenters and participants quickly agreed on what makes an excellent statistics write up.

Engaging with editors
Our papers were completed in just three days, but the article journey wasn’t over yet: it was time for publication and dissemination! If you’re aiming for a high-impact journal, remember to register your study and publish a protocol before research begins, and add a results summary to your registration before submitting your article. Our participants were relieved to discover that journals do not consider results summaries on trial registries to be prior publications, and in fact encourage them.
Paper review and acceptance can seem a confusing process from the outside. Domhnall MacAuley from PLoS Medicine and CMAJ gave us an honest account of the life of an editor and just how papers and cover letters are received. He agreed with Liz that articles should be clear, concise, and simply written – editors have dozens of articles to read!

Editors are looking for three things from an article. Is it new? Is it true? Will it make a difference?
Domhnall MacAuley

Jackie Marchington warned us about a different segment of the publishing world that deliberately hides its processes. Predatory journals will take your article in exchange for cash and offer very little peer review or editing. Publishing in these journals is a waste of good research.

Post-publication: Posts, tweets, and other noise
Publication alone is no longer sufficient for disseminating your work. Dozens of social media platforms and an ever-shortening news cycle make a wall of noise for your work to break through. Jo Silva, the NDORMS Communications Officer, introduced the wonders (and dangers) of social media. Thankfully, we can simply choose one or two media platforms, and should consult our communications officer for help. Jo demonstrated that by following a few basic rules, writing with passion and enthusiasm, and including a personal story, you can bring your research to life for the public in a press release and get your message to more people than you could ever imagine.
Why should we bother with post-publication research dissemination, or even with publication itself? We all agreed that there are moral obligations around research. Public funding brings with it both an obligation to publish and an obligation to make your work useful and accessible for the public. Plain language summaries and effective research dissemination help to meet these obligations.

A week to remember
Publication School offered insights into every step of the publication process, from planning and writing an article to disseminating it to the public. It focused on clear, concise reporting and the use of available guidelines.

I learnt a lot and enjoyed this week so much.

Thank you for the inspiration and energy that you have given me to go back home to my students and colleagues and tell them again and again about EQUATOR and the job you have done and that we can use it.

I had gotten a bit depressed – I’ve been doing this for 10 years since you started – but you’ve given me new energy to start again!

If this sounds like the perfect course for you or your students, keep an eye on EQUATOR’s news for a repeat of the course next year and check out more selected tips and tricks from the week on #EQPubSchool.

EQUATOR makes a splash in Rio

Iveta Simera and Caroline Struthers were delighted to represent EQUATOR at the 4th World Conference on Research Integrity in Rio de Janeiro from 31 May to 3 June 2015.

It was a fantastic event attended by over 600 delegates from 55 countries. The main theme of the conference was ‘How to improve the research reward systems to promote responsible research’.

Along with BioMed Central’s Daniel Shanahan (Associate Publisher) and Stephanie Harriman (Medical Editor), and Trish Groves (Head of Research, BMJ / Editor-in-Chief, BMJ Open), Iveta and Caroline presented a “Partner Symposium” on Monday 1 June entitled

EQUATOR Symposium: Making the research publication process more efficient and responsible: practical ways to improve the reliability and usability of published (health) research

The Symposium was well attended and was also well covered on Twitter.

Click on the titles of the presentations to view the slides:

Can we trust the medical research literature? Poor reporting and its consequences (PDF) (presenter: Iveta Simera)

Promoting good reporting practice for reliable and useable research papers: EQUATOR Network, reporting guidelines and other initiatives (PDF) (presenter: Caroline Struthers)

What can BioMed Central do to improve published research? (PDF) (Presenters: Daniel Shanahan and Stephanie Harriman)

What can a “traditional” journal do to improve published research? (PDF) (Presenter: Trish Groves)

It was also the perfect session at which to officially launch BioMed Central’s brand new journal, Research Integrity and Peer Review, with its Editors-in-Chief, Liz Wager, Iveta Simera, Stephanie Harriman and Maria Kowalczuk, and Publisher, Daniel Shanahan, all in the room!

The work of EQUATOR was mentioned in many of the conference sessions, including the first keynote presentation, given by Lex Bouter from VU University Amsterdam, entitled ‘What is holding us back in the prevention of questionable research practices?’

Liz Wager gave a great plenary talk on Tuesday morning on ‘Why waste in research is an ethical issue’ (PDF), which also caused quite a stir on Twitter. Ivan Oransky from Retraction Watch tweeted that the revelations about research waste in her talk were “devastating”.

A video of Liz’s talk should be available from the conference website soon.

Those interested in joining the important movement to REduce Waste And Reward Diligence (REWARD) in research are encouraged to submit an abstract and/or attend the forthcoming REWARD/EQUATOR conference in Edinburgh, UK, 28-30 September.

And finally, on the last day of the conference on 3 June, Iveta, Daniel and Trish gave talks in a session on Reporting and publication bias, and how to overcome it.

Iveta introduced the work of the EQUATOR Network in promoting responsible reporting of health research studies, and paid particular attention to our fruitful collaboration with Luis Gabriel Cuervo and the Pan American Health Organisation. View the slides from her talk (PDF).

Daniel Shanahan talked about BioMed Central initiatives to promote complete public records of research studies in order to overcome publication bias and selective reporting.

Trish talked about ‘Data-sharing and the experience at two open access general medical journals’.

BioMed Central to publish new journal: Research Integrity and Peer Review

One of the hot topics in science and academic publishing at the moment is peer review, and much work is going into research integrity and promoting good practice among all involved with research. In response, BioMed Central is pleased to announce the launch of a new open access journal, Research Integrity and Peer Review, which will act as an academic forum where these discussions can take place.

All aspects of integrity in research publication will be covered by Research Integrity and Peer Review, including peer review, study reporting, and research and publication ethics. Particular consideration will be given to submissions that aim to address current controversies and limitations in the field and offer potential solutions.

The journal will be led by a team of academic experts and members of BioMed Central’s own in-house research integrity team. Editors-in-Chief will include Dr Elizabeth Wager, a publication consultant and visiting professor at the University of Split School of Medicine, Croatia, and Dr Iveta Simera, deputy director of the UK EQUATOR Centre, Centre for Statistics in Medicine, NDORMS at the University of Oxford.

Dr Stephanie Harriman and Dr Maria Kowalczuk will also act as co-Editors-in-Chief. As BioMed Central’s Medical and Biology Editors, they cover all aspects of policy and ethical issues across BioMed Central’s journals, as well as carrying out research on peer review and developing guidelines for the Committee On Publication Ethics.

Each co-Editor-in-Chief will head up their own section, covering the following topics: Peer Review; Reporting; and Research and Publication Ethics. The journal will operate an open peer review model whereby all peer review reports are published alongside the final article, with each reviewer named.

Elizabeth Wager welcomed the new journal: “Improving peer review is important for all researchers, not just journals and publishers; however, until now it has been hard to find a suitable place to publish studies that advance our understanding of this. Similarly, research on publication ethics doesn’t always fit in either ethics or specialist journals. It is therefore exciting to be involved with launching Research Integrity and Peer Review to fill these gaps and provide a resource to support these vital areas of work.”

Iveta Simera highlighted the need for this type of journal: “For many years research reporting has been hiding in the shadow of ‘big’ ethical problems such as scientific fraud or plagiarism. However, the growing need to synthesize and replicate studies has highlighted the crucial importance of accurate, complete and timely reporting of research studies. Despite this knowledge, many deficiencies exist. We have a moral imperative to act decisively and find the best possible ways to prevent these deficiencies on a global scale. The new Research Integrity and Peer Review journal provides an important platform for sharing findings about the effectiveness of interventions and other initiatives aimed at improving the reliability and usability of published literature. It is an extremely exciting and timely new venture.”

Deborah Kahn, Executive Vice President for BioMed Central, explained why they were launching such a journal: “Peer review and reporting are fundamental to the practice of good science, so it is time that research in these areas were treated with the same level of rigour we expect in all other sciences. This journal will push these issues to the fore, to help shape future developments in research integrity, and improve standards across the field. Open Access is particularly important for such a journal, to help disseminate the research as widely as possible, so that others can build on the evidence base being created.”

The journal will be launched at the 4th World Conference on Research Integrity, which will be held in Rio de Janeiro from May 31 to June 3, 2015.

For more information, please contact Shane Canning, Media Manager, BioMed Central

T: +44 (0)20 3192 2243

M: +44 (0)78 2598 4543

E:  shane.canning@biomedcentral.com

Reporting diagnostic accuracy studies: Evaluating 10 years of STARD

Daniël A. Korevaar & Jérémie F. Cohen, Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Center, University of Amsterdam, The Netherlands

Over 250 reporting guidelines are currently available in the EQUATOR library. One of the most established of these is the STARD (Standards for the Reporting of Diagnostic Accuracy Studies) statement (1, 2).

STARD, first launched in 2003, was developed by an international group of methodologists, statisticians, reviewers and editors. It aims to improve the reporting of studies that evaluate the diagnostic accuracy of medical tests. Incomplete reporting is problematic because it impedes the identification and reproducibility of a study, as well as proper appraisal of its internal and external validity. The statement contains a checklist of 25 items that should be reported to make a study report fully informative.

The statement was initially simultaneously published in eight major medical journals. Together, these publications have now been cited more than 2,000 times (Web of Knowledge, March 2015). Over the years, multiple other journals published the STARD statement or the accompanying “Explanation & Elaboration” document, translations are available in seven languages, and at least 30 journals published one or more editorials about STARD, usually to highlight the importance of its use. More than 200 journals explicitly endorse STARD, which means that they require or recommend the use of the checklist in their instructions to authors.

Based on these data, the impact of STARD seems impressive. Ultimately, however, the success of a reporting guideline can only be evaluated by whether or not it has achieved its main goal: improving the quality of reporting. Based on two of our recent evaluations, we can conclude that this goal has been achieved, but only to a moderate extent.

We have performed a systematic review in which we included all evaluations that had assessed adherence of published diagnostic accuracy studies to the STARD checklist (3). We found 16 of these evaluations, together analyzing the reporting of 1,496 diagnostic accuracy study reports in various fields of research. Across these evaluations, the mean number of items reported varied from 9.1 to 14.3 out of 25 STARD items. Not surprisingly, this led all the included studies to conclude that reporting was generally poor, medium, suboptimal, or needed improvement. Six of these evaluations quantitatively compared the reporting of diagnostic accuracy studies that were published post-STARD with those that were published pre-STARD. When we combined them in a meta-analysis, we found a modest but significant increase of 1.4 reported items after STARD’s launch in 2003. However, because most of these evaluations assessed diagnostic accuracy studies published in the first few years after STARD’s launch, it may have been too early to expect large improvements.
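
As a toy illustration of the arithmetic behind such adherence scores (the data below are hypothetical, not the review’s dataset), each study can be scored on which of the 25 STARD items it reports, and the mean count compared pre- versus post-STARD:

    # Toy illustration of STARD adherence scoring (hypothetical data).
    # Each row flags which of the 25 checklist items a study reported
    # (1 = reported, 0 = not reported).
    N_ITEMS = 25

    pre_stard = [[1] * 10 + [0] * 15,   # a study reporting 10 items
                 [1] * 12 + [0] * 13]   # a study reporting 12 items
    post_stard = [[1] * 13 + [0] * 12,
                  [1] * 15 + [0] * 10]

    def mean_items_reported(studies):
        """Mean number of STARD items reported per study."""
        return sum(sum(study) for study in studies) / len(studies)

    pre = mean_items_reported(pre_stard)
    post = mean_items_reported(post_stard)
    print(f"pre-STARD: {pre:.1f}/{N_ITEMS}, post-STARD: {post:.1f}/{N_ITEMS}, "
          f"difference: {post - pre:+.1f} items")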

We also performed an analysis of more recent studies (4). We assessed the reporting of 112 diagnostic accuracy studies published in twelve high-impact journals in 2012. Expectations were high, as all but one of these journals were STARD endorsers. Unfortunately, on average, the studies reported only 15.3 out of 25 STARD items. Yet this is a significant improvement compared to studies published in the same journals in 2000 and 2004, when the mean number of items reported was 11.9 and 13.6, respectively (5).

We conclude from these two studies that the completeness of reporting of diagnostic accuracy studies has improved in the 10 years after the launch of STARD, but that it remains suboptimal for many articles.

Over the past year, the STARD group, currently consisting of over 85 people, has been working on an update of the checklist. The three main goals of this update are:

(1) to facilitate the use of the checklist by rearranging and rephrasing items,

(2) to include new information, based on improved understanding of sources of bias and variability and other issues in diagnostic accuracy studies, and

(3) to improve consistency with other reporting guidelines such as CONSORT.

After two web-based surveys and a live two-day meeting in Amsterdam, a pre-final version of STARD 2015 has now been put together and is undergoing piloting. The final checklist is planned for launch in late 2015.

What do we learn from the first ten years of STARD, and how can we make sure that STARD 2015 will further improve reporting quality?

Because of the widespread attention that STARD has received from medical journals, and because of the large number of STARD adopters among these journals, we expected that major improvements in the reporting of diagnostic accuracy studies would automatically follow. Our evaluations have shown that this is not the case. Other well-known reporting guidelines, such as CONSORT and PRISMA, have faced similar problems (6). Apparently, dissemination of a reporting guideline cannot focus solely on journals. Developers of reporting guidelines should continue to seek innovative ways to reach authors, reviewers and editors, and to convince them of the necessity of complete reporting. EQUATOR plays an important role in this.

For STARD 2015, we aim to publish online training material and workshops, build templates to facilitate the writing and peer reviewing of study reports, and encourage the development of extensions specifically designed for different fields of research. In addition, close collaboration with the EQUATOR Network should ensure that a wide audience is reached. This collaboration will initially comprise the gradual provision of much more STARD-related information on the EQUATOR website.

With STARD 2015, we further hope to convince the scientific community of the necessity and simplicity of complete reporting of diagnostic accuracy studies.

Reference List

(1)    Bossuyt PM, Reitsma JB, Bruns DE et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem 2003;49:7-18.

(2)    Bossuyt PM, Reitsma JB, Bruns DE et al. Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD Initiative. Radiology 2003;226:24-28.

(3)    Korevaar DA, van Enst WA, Spijker R, Bossuyt PM, Hooft L. Reporting quality of diagnostic accuracy studies: a systematic review and meta-analysis of investigations on adherence to STARD. Evid Based Med 2014;19:47-54.

(4)    Korevaar DA, Wang J, van Enst WA et al. Reporting Diagnostic Accuracy Studies: Some Improvements after 10 Years of STARD. Radiology 2015;274:781-789.

(5)    Smidt N, Rutjes AW, van der Windt DA et al. The quality of diagnostic accuracy studies since the STARD statement: has it improved? Neurology 2006;67:792-797.

(6)    Turner L, Shamseer L, Altman DG et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev 2012;11:MR000030.

Fit for purpose? The case for structured reporting of methods and results in research articles

We need to consider a new format for publishing research results

In an editorial for the journal Trials, EQUATOR’s own Doug Altman highlights the continuing problem that many trial reports do not contain enough information about the methods to allow others to replicate the study, and do not present results in enough detail to be included in a systematic review and meta-analysis. Such articles are not fit for purpose and waste the time and resources spent in conducting the research.

Read the article in Trials

Reducing publication bias in animal research

Elizabeth Moylan, Senior Editor at BioMed Central, writes about publication bias in animal research, the importance of publishing negative results, and the role of preclinical study registration. But is there a one-size-fits-all solution?

Publication bias occurs when the research that reaches the publication domain is not representative of the research that is done as a whole; typically null or ‘negative’ results are suppressed. Evidence-based medicine pioneer and founder of the Cochrane Collaboration Sir Iain Chalmers has been vocal on the topic since the 1980s and praised the NC3Rs for bringing the issue to the forefront once more.

What drives publication bias?

Emily Sena (University of Edinburgh) opened the workshop and discussed her research with Malcolm MacLeod on initiatives to identify potential sources of bias in animal work. They set up CAMARADES, a Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies.

Intriguingly, it is ‘researcher bias’ (researchers not submitting their results for publication) that accounts for most lost data, rather than ‘editor bias’, i.e. editors deeming results unworthy of publication! Indeed, journals dedicated to the publication and dissemination of ‘negative’ and non-confirmatory results have existed for over 10 years.

A sad truth was Emily’s acknowledgement that science is set up with perverse incentives that reward scientists for ‘impact’ and ‘productivity’ rather than for the quality of their research or the ability to replicate studies.

Jonathan Kimmelman (McGill University) explained that we are all in the midst of a ‘replication crisis’ in biomedicine, with many studies defying replication. John Ioannidis, a champion of the reproducibility cause, has previously estimated that up to 85% of research resources are wasted as a result.

Jonathan advocated preclinical study registration as a key step towards reducing publication bias, and stressed our moral obligation not to ‘waste’ animals’ lives by failing to share results.

However, the challenge is that a one-size solution may not fit all. Academics may well have different ‘aims’ with regard to the preclinical data that they sit on, compared to a large company. And a large company may have different ‘aims’ again compared to a small one.

Trish Groves from the British Medical Journal gave her perspective on how registration of clinical studies came about in 2005. More recently, the BMJ co-founded the AllTrials campaign, with the simple message ‘all trials registered, all results reported’.

Trish emphasized the ethical rationale for registering a trial. She also pointed out that if the ‘big’ journals in medicine had not got behind this idea and required mandatory registration, there would not have been a spike in uptake by researchers. There may, however, be reasons why retrospective trial registration is now sometimes justifiable too (read BioMed Central’s trial registration policy).

Registration is one way to combat publication bias, but is it enough?

We also heard about ways in which journals can drive innovation and help reduce publication bias. Chris Chambers (Cardiff University) talked about a recent initiative from Cortex called Registered Reports, in which the methods and proposed analyses of a study protocol are pre-registered and reviewed before the research is conducted.

However, this approach appears to have been inspired by BioMed Central, which has been pioneering it since 2001. The beauty of such initiatives is that protocols can ultimately become the first element in a sequence of ‘threaded’ electronic publications, connecting all digitally published content relating to the evidence about a particular trial. BioMed Central has been at the forefront of publishers putting this ‘linked data’ approach into practice.

Susanna-Assunta Sansone (Oxford e-Research Centre) talked about the need to motivate researchers to publish data. Her personal opinion was that a ‘carrot and stick’ approach is needed. She described the approach taken by Scientific Data in publishing articles called ‘Data Descriptors’, which, as the name suggests, are only about data.

Data Descriptors give authors credit for sharing data, in the form of a citation, and comprise two parts: a narrative component (the typical sections of an article) and a structural, machine-readable component. Susanna is also on the Editorial Board of our GigaScience journal, which has pioneered this approach since 2012 with Data Note articles that similarly use data review, curation, and rich interoperable metadata.

Christophe Bernard, Editor-in-Chief of the Society for Neuroscience’s new journal eNeuro, mentioned the journal’s double-blind peer review process, in which reviewers don’t know who the authors are and vice versa. Nature recently announced its intention to offer this peer review option too.

Although double-blind peer review is claimed to reduce bias (by forcing reviewers to judge the merits of the manuscript rather than being swayed by the gender, standing or affiliation of the researcher), there are cons too.

Surely openness on both sides, where the authors and reviewers are known to each other, would be preferable? The medical journals in the BMC series have been operating open peer review for the past 14 years, and we’ve found that reviewer report quality is higher too.

There are lots of ways to publish data

We heard about ways to find and view data, e.g. PLOS’s collection of negative, null or inconclusive results. This is also an area in which BioMed Central has been working hard to counter publication bias: the Journal of Negative Results in BioMedicine launched in 2002, acknowledging Karl Popper’s realization that science advances through a process of ‘conjectures and refutations’. Similarly, BMC Research Notes launched in 2008 with the aim of freeing ‘dark data’.

More recently BMC Psychology launched in 2013 with the explicit pledge to publish repeat studies and negative results in a field that has been historically plagued by the under-reporting of both replications and null findings.

We heard how F1000Research ran an APC-free period to encourage the submission of negative results; its ‘living figures’ initiative also encourages other researchers to submit data to a published article and evaluate it.

Mark Hahnel from FigShare also talked about new ways to make article content discoverable. The days of the static publication finally seem to be changing: GigaScience has been using discoverable and citable DOIs to make interactive content discoverable, publishing interactive visualizations and workflows, and downloadable virtual machines.

We also heard the perspective of a funder (the National Institute for Health Research), which provides open access to protocols while trials are underway. The voice of industry was included too, with worrying stories about how the selective presentation of data by investigators can lead to poor-quality science.

Prospective registration of preclinical studies is needed

Attendees at the workshop could make their views heard on the main question of the meeting: whether prospective registration of preclinical studies is necessary to reduce publication bias. We heard from proponents of this view, who initially had the audience on their side (a pre-debate vote showed the most support for prospective registration).

However, the opponents of this view produced sufficiently convincing arguments to swing the vote after the debate. The debate whetted our appetite for the breakout sessions, which looked more closely at the benefits and limitations of prospective study registration, publishing models and repositories for the various stakeholders (researchers, journals, funders, industry and institutions), and generated lots of feedback.

It seemed that funders have a pivotal role to play in making prospective study registration a condition of grant funding, and that the benefits of this to other stakeholders would encourage cooperation too.

BioMed Central, as the publisher of the ISRCTN clinical trials registry, is well aware of the challenges. While advocating registration is one thing, defining a minimum dataset represents another challenge; however, the NC3Rs would be well placed to take the lead, given their role in formulating the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines. Thank you NC3Rs for a stimulating workshop!

Elizabeth’s article originally appeared on the BioMed Central blog network on 27 February 2015.