Planning a systematic review? Think protocols

Larissa Shamseer, PhD candidate in the Knowledge Synthesis Group at the University of Ottawa, and David Moher, co-Editor-in-Chief of Systematic Reviews


While much of the focus on reducing waste in research has been on primary studies, in particular clinical trials, systematic reviews and meta-analyses can compound the problem. Here, Larissa Shamseer, a PhD candidate in the Knowledge Synthesis Group at the University of Ottawa, and David Moher, co-Editor-in-Chief of Systematic Reviews, discuss the need to apply the same standards to systematic reviews as are seen in clinical trials, including registration and publication of protocols.

So, what is the problem?

For those of us paying attention, there has been a slow, yet steady buzz for many years about the waste incurred in biomedical research. Eighty-five percent of research investment is estimated to be lost because the wrong questions are asked, inappropriate research designs/methods are used, poor regulations are in place, or researchers incompletely/inadequately report their research, or fail to publish it at all.

To date, much of the focus on waste has been on primary studies, namely clinical trials; however, most people likely don’t realize that systematic reviews, which are essential in summarizing and synthesizing primary research, not only magnify waste in primary studies but also contribute to it in much the same way, albeit to a lesser extent.

Why should you care?

We are taught to think that systematic reviews provide the best evidence for answering a health research question, particularly questions about therapy effectiveness. They are lauded for their rigorous, methodical approach, an essential component of which is that they are conducted according to pre-specified methods and analyses. In reality, however, this doesn’t seem to be happening. Indeed, a landmark 2007 study determined that the majority of systematic reviews did not mention working from a protocol at all. Yet we still have (blind?) faith that they are all that we want them to be.

Systematic review protocol diagram

Without review protocols, how can we be assured that decisions made during the research process aren’t arbitrary, or that the decision to include/exclude studies/data in a review isn’t made in light of knowledge about individual study findings?

Emerging evidence suggests that when protocols are available, for example in Cochrane reviews, at least 22% have discrepant outcomes from their corresponding completed reviews, some related to the significance, size, and direction of outcome effects (much of the same story as in the primary literature). Furthermore, while some duplication is good (i.e. for validation), how can we ensure that efforts are not being simply wasted because disconnected groups of systematic reviewers are unaware of what others are doing? With all of these questions, what are we to make of reviews that don’t refer to any kind of protocol, and may not have one?

What is the solution?

If you’re having a sense of déjà vu right now, you were probably around 30 years ago when the same concerns were being raised about clinical trials: not all protocols were reported or available, not all trials contained complete information about methods and findings, and some trials were not published at all, leaving questions about the integrity of the research that followed. The solutions that followed, including trial registration and reporting guidelines, will sound familiar.

Solutions to improve incomplete and biased reporting of systematic reviews have followed a similar trajectory. In 1999, the quality of reporting of meta-analyses (QUOROM) guideline was published; this was succeeded by the Preferred Reporting Items for Systematic reviews and Meta-analyses (PRISMA) guideline in 2009. The first international registry for systematic reviews (PROSPERO) was launched in 2011. In 2012, the first journal dedicated to exclusively publishing systematic review products (including protocols), BioMed Central’s Systematic Reviews, was started (other journals have begun publishing review protocols as well). Now, in the first week of 2015, a PRISMA guidance for protocols (PRISMA-P) has been published in Systematic Reviews and The BMJ.

 

PROSPERO website home page as at August 2018


What can you do?

Given the time and resources they require, we simply cannot afford wasted efforts when it comes to systematic reviews. Accompanying the PRISMA-P guideline is a specific set of proposed actions (and benefits) for stakeholders involved in the systematic review process (yes, that means you!). Now that we have helped you take stock of the problem, we challenge you to do your part to help stop some of the waste.

Check out the latest solution to improve the reporting of systematic reviews, PRISMA-P.

View PRISMA-P record in the EQUATOR Library.

Originally published as a BioMed Central blog.

Introducing the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis Initiative: The TRIPOD Statement

TRIPOD Statement logo

Gary Collins, Associate Professor and Deputy Director of the Centre for Statistics in Medicine, University of Oxford, introduces TRIPOD.

Decisions based on clinical predictions are routinely made throughout medicine and at all stages in pathways of health care. For example, in the diagnostic setting, predictions are made as to whether a particular disease is present, informing the referral for further testing, the initiation of treatment, or the reassurance of patients that a serious cause for their symptoms is unlikely. In the prognostic setting, predictions can be used to plan lifestyle or therapeutic decisions based on the risk of developing a particular outcome over a given period. Yet making a diagnostic or prognostic prediction is challenging and rarely based on a single risk factor, test result or symptom.

The multifactorial nature of clinical prediction makes it difficult for doctors to simultaneously and subjectively weigh multiple risk factors to produce a reliable and accurate estimate of risk. Given that doctors see relatively few cases and are subject to cognitive biases, it is unsurprising that numerous studies have shown them to be generally poor prognosticators.

However, doctors, often prompted by recommendations in national clinical guidelines, are increasingly using multivariable prediction models to support and guide the clinical decision-making process. A clinical prediction model is a mathematical equation that relates multiple predictors for an individual to the probability (or risk) that a particular disease or condition is present or will occur in the future. Well-known prediction models include the Framingham Risk Score, the Apgar Score, the Ottawa Ankle Rules, EuroSCORE, the Nottingham Prognostic Index and the Simplified Acute Physiology Score (SAPS).
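In practice, the “mathematical equation” in such a model is often a regression equation; for a binary outcome, a logistic model that converts a weighted sum of predictor values into a probability. The minimal sketch below illustrates the general form only; the predictors and coefficients are invented for illustration and do not come from any published model.

```python
import math

def predicted_risk(predictors, intercept, coefficients):
    """Logistic prediction model: map predictor values to a probability.

    linear predictor = intercept + sum(coefficient * predictor value)
    risk = 1 / (1 + exp(-linear predictor))
    """
    lp = intercept + sum(c * x for c, x in zip(coefficients, predictors))
    return 1.0 / (1.0 + math.exp(-lp))

# Hypothetical coefficients, for illustration only (not a published model).
# Predictors: age (years), systolic blood pressure (mmHg), smoker (0/1).
intercept = -7.0
coefficients = [0.05, 0.02, 0.6]

# Risk for a 60-year-old smoker with systolic BP of 140 mmHg:
risk = predicted_risk([60, 140, 1], intercept, coefficients)
print(round(risk, 3))  # → 0.354
```

TRIPOD’s point about reporting is visible even in this toy: without the intercept and every coefficient, a reader cannot reproduce the risk estimate for any patient.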

Introducing The TRIPOD statement

The last 10-15 years have seen an explosion in the number of published articles describing the development (and occasionally the validation) of clinical prediction models. Particular clinical areas have seen considerable numbers of models developed for the same outcome (e.g. diabetes, TBI, prostate cancer). Developing a prediction model can be very easy: all it takes is an existing dataset (often collected for a different purpose) and a statistical package. Clearly this is an oversimplification, but, cynically, many prediction models are developed with no compelling clinical need, and thus with little intention of ever being used, merely as an easy publication to add to one’s curriculum vitae. In areas where many models have been developed, deciding which one to use is difficult, all the more so because many are presented as new discoveries, with existing models often ignored and rarely compared against.

To evaluate methodological conduct and reporting, we carried out a number of systematic reviews of studies describing the development or validation of multivariable prediction models across different medical areas, and found reporting to be particularly poor. What was surprising when we analysed the results from these and other systematic reviews was the clear lack of crucial information being presented. Without full and transparent reporting, evaluating, synthesizing (see the CHARMS checklist) and even implementing the results of such studies is problematic.

At the most critical level, authors were developing models yet failing to report the actual model, so that other researchers were unable to test it or apply it to their own patients. It is clearly ludicrous to go to the effort of developing a model and then fail to tell readers what it is. Published articles with incomplete reporting are unusable, as noted in the recent Lancet series on reducing waste in research.

It was clear to us that authors, reviewers, editors and readers needed clear guidance on what issues should be included when describing the development and validation of a prediction model, which led us to develop the TRIPOD Statement.

The TRIPOD Statement is an annotated checklist (PDF) of items that arose from a systematic review of the literature, further reduced and refined through discussions at a three-day consensus meeting in 2011 with international experts in prediction modelling (statisticians, epidemiologists, clinicians and journal editors). The resulting checklist comprises 22 items regarded as essential for the good reporting of studies developing or validating multivariable prediction models. Authors of published reports describing the development, validation or updating of a prediction model should ensure that all items in the checklist are addressed somewhere in the article.

The recommendations within TRIPOD are guidelines for reporting research only and do not prescribe how to develop or validate a prediction model. Nor is the checklist a quality assessment tool for gauging the quality of a multivariable prediction model; for that purpose, the upcoming PROBAST risk of bias tool will be available.

Many prediction models are (unfortunately) developed without involving either a statistician or epidemiologist, and therefore providing guidance that can be readily understood by study investigators of varying levels of technical (e.g. methodological) experience was of paramount importance.  Furthermore, to motivate authors (also peer reviewers and editors) to use the checklist, we also aimed to keep the checklist as brief as possible, whilst ensuring all the key details are clearly reported.

Whilst the TRIPOD Statement is foremost guidance for reporting, we produced an accompanying and extensive 22,000-word Explanation & Elaboration article discussing not only the rationale for, and examples of, good reporting but also methodological aspects for investigators to consider when developing, validating or updating a prediction model. One of the challenges we faced in providing such information was to give a balanced account of competing approaches, particularly as there is no clear consensus on many aspects of developing or validating a prediction model, and the methodology in this field is continually evolving. Conscious efforts were made to be neutral, describing both the advantages and disadvantages of various methods and cautioning against methodologically weak approaches without dictating how investigators should conduct their study.

To increase the visibility of the TRIPOD Statement we are co-publishing the article simultaneously in 11 leading general medical and speciality journals including the Annals of Internal Medicine, BJOG, BMC Medicine, British Journal of Cancer, British Journal of Surgery, British Medical Journal, Circulation, Diabetic Medicine, European Journal of Clinical Investigation, European Urology and the Journal of Clinical Epidemiology. We welcome other journals endorsing TRIPOD by recognizing prediction model studies as a distinct study type, including TRIPOD in their instructions for authors, and requiring authors to complete and submit a TRIPOD checklist with their submission.

We have also developed a website (www.tripod-statement.org) where additional information, references, and PDF and Word versions of the checklist are available for download.  Journals and organisations endorsing TRIPOD will be listed on the TRIPOD website.  For announcements on TRIPOD related information, follow us on Twitter @TRIPODStatement.

The Centre for Statistics in Medicine (University of Oxford) is also home to the EQUATOR Network, which will list TRIPOD among its key reporting guidelines; the publication will be announced via the EQUATOR Newsletter, Twitter (@EQUATORNetwork) and LinkedIn.

We believe that if authors adhere to the TRIPOD Statement, readers and potential users of the model will have a transparent and full account of all aspects of the prediction model study, enabling them to critique and fully judge its potential merits.

Gary Collins on behalf of the TRIPOD steering group (Gary Collins and Doug Altman, University of Oxford, UK; Karel Moons and Hans Reitsma, UMC Utrecht, The Netherlands)

 

The first EQUATOR Reporting Guideline Development Meeting, 18-20 November 2014, Oxford

Attendees at the first EQUATOR Reporting Guideline Development Meeting

More than 20 contributors with a wealth of expertise, experience, and passionate commitment to the cause gathered in Oxford for the first EQUATOR Reporting Guideline Development Working Meeting. We discussed how the EQUATOR Network can move forward and take practical steps to engage with researchers, journal editors, peer reviewers, ethics committees, and consumers to help people develop high-quality Reporting Guidelines and apply them effectively to increase the quality, transparency, and usefulness of the health research literature.

We will share ideas and suggestions from the meeting for comments shortly. Please contribute to the ongoing discussions as they develop – we would really welcome your views.

Journals and industry collaborate on new authorship framework to improve transparency of industry-sponsored research

Medical Publishing Insights and Practices Initiative (MPIP) logo

The Medical Publishing Insights and Practices Initiative (MPIP) has recently published the Five-step Authorship Framework to Improve Transparency in Disclosing Contributors to Industry-sponsored Trial Publications in BMC Medicine. MPIP, in collaboration with academic researchers, conducted a novel qualitative attitude study to identify ambiguities encountered in industry-sponsored trials that are not well addressed by current guidelines.

This research, which included responses from approximately 500 clinical investigators, journal editors, publication professionals and medical writers, led to development of a standardized approach that can be used prospectively to facilitate more transparent and consistent authorship decision-making. The resulting framework reflects close collaboration between journal editors and industry representatives obtained during MPIP roundtables in the USA and UK as well as through broader consultation with those involved in medical publications.

MPIP Five-step Authorship Framework
1. Identify a representative group to establish authorship criteria early in the trial
2. Reach consensus among all trial contributors concerning these criteria
3. Document trial contributions
4. Objectively determine who should be invited to participate in the manuscript development process
5. Ensure authors meet all authorship criteria

David Moher, Associate Professor at the University of Ottawa and member of the CONSORT Group Executive committee, wrote a commentary that accompanied the Five-step Authorship Framework publication. He summed up the value of the framework as, “What is positive about this research is that the proposed attribution process for authorship is brief, and not complex; it’s only five-steps. It is meant to augment the guidance provided by the International Committee of Medical Journal Editors.” He also added, “Disclosing authorship transparently is important for any manuscript being submitted to a biomedical journal for publication consideration. The responsibilities associated with authorship must be taken seriously. This might help increase value and reduce avoidable waste of biomedical research.”

Despite the availability of various authorship guidelines, it can be challenging to bridge the gap between guidelines and practical application when determining authorship. MPIP research found that low awareness, variable interpretation, and inconsistent application of authorship guidelines can lead to confusion and lack of transparency when recognizing those who merit authorship.

The Five-step Authorship Framework helps close this gap while aligning with the latest ICMJE guidance. It can be applied more broadly to all clinical trial publications, not just industry-sponsored research. In doing so, the clinical, pharma and publishing community has a unique opportunity to increase the value of biomedical research.

Links to Five-step Authorship Framework publications can be found here:

Article by Ana Marušić et al available on the BMC Medicine journal website:
http://www.biomedcentral.com/1741-7015/12/197

Commentary by David Moher available on the BMC Medicine journal website:
http://www.biomedcentral.com/1741-7015/12/214/abstract

MPIP is a cross-disciplinary collaboration between the pharmaceutical industry and the International Society for Medical Publication Professionals (ISMPP) that seeks to continue to elevate trust, transparency and integrity in publishing industry-sponsored studies. For more information about MPIP, visit www.mpip-initiative.org.

Ginny Barbour: I’ve Got a (lot of) Little (check)lists

Cartoon image of a checklist

Image credit: Oliver Tacke, Flickr.

PLOS Medicine Editorial Director, Virginia Barbour, reflects on the publication of the CONSORT and PRISMA guidelines and reminds us of the importance of checklists to medical publishing. This blog was originally published as one of eight in PLOS Medicine’s 10th Anniversary blog series on the most interesting and influential articles of the last ten years (read the original blog).

Gilbert and Sullivan’s Lord High Executioner has, sadly, given lists a bad name. Rather than tools of revenge, lists in healthcare, however, have the power to do much good. Atul Gawande’s book on lists has explained why they should be core to medical practice. I’d argue that in medical publishing too they are critical. When Robert Boyle, one of the founders of the UK’s Royal Society, wrote the Spring of the Air he was probably the first to write in such a way that allowed other men (it was only men then of course) to repeat and test his findings.  In this way, he was, in turn, one of the first to legitimize research by making it reproducible.

More than 300 years later we have a fully reproducible literature with everything fully reported, right? Wrong. There is a current crisis of confidence in research, with increasing and appropriate concern that many results, especially the most dramatic, often cannot be trusted. Contributing fundamentally (but not exclusively, obviously) to this problem is the fact that whole swathes of the medical and scientific literature are not described in sufficient detail for anyone else even to test them. In medicine this crisis is literally life-threatening; patients given treatments as a result of inadequately described studies may at best be treated sub-optimally, at worst harmed or killed.

However, in one important corner of the research endeavour, a group of individuals has for many years been making a determined effort to change this poor reporting, and PLOS Medicine is proud to have played its part. The CONSORT reporting guidelines for clinical trials and the PRISMA guidelines for systematic reviews and meta-analyses are, I’d argue, two of the most important papers the journal has ever published. The premise behind both of these documents, the checklists they contain, and the many other guidelines that we and other PLOS journals have published is very simple: tell us what you did so others can test it.

Is there a downside to this reporting? Yes, perhaps: to a weary researcher, having another guideline to abide by and a checklist to fill in can seem like one hurdle too many. And what if guidelines were used to judge the importance of findings, with journals using them to discriminate against certain studies? Are such worries legitimate? I’d say not. Furthermore, in several ways beyond the simple message of clarity of reporting, these guidelines illustrate values that are core to PLOS.

Reporting guidelines above all allow us to judge what a paper reports, and what it does not (hint: the clue is in the name). Was this trial randomized or not? How was the search strategy conducted? These are the most basic of questions. The secondary question of whether the results will change your medical practice is not for guidelines to answer. My favourite analogy for a guideline is that using one is akin to turning on a light in a room. It tells you what the room looks like; it doesn’t (unless you live in some Hogwartian dream world) clean the room for you. More prosaically, reporting guidelines are a tool, not a panacea; much like Open Access.

The two guidelines here are themselves revisions of earlier guidelines, which illustrates another important point: such guidelines must evolve, and every iteration, with more accumulated input, improves them. Arguably, publishing these papers in an Open Access journal such as PLOS Medicine will accelerate this process by enabling visibility, dissemination, reuse and thus, ultimately, improvement, again something very close to PLOS’ heart.

And the final way in which these guidelines indirectly reflect PLOS’ values is that they are the product of many people’s time, intellect and energy; none of them would have come about unless, as one of the leaders of the EQUATOR initiative once said, everyone drawing them up had been able to “leave their egos at the door”.

So, dear researcher, when you feel wearied by a checklist, remember the philosophy, energy and lack of egos that went into making them.  If Robert Boyle and colleagues could understand the importance of good reporting 300 years ago, surely it’s time now to make it a cornerstone of the scientific and medical literature.

You can read the full guidelines here:

Schulz KF, Altman DG, Moher D,  for the CONSORT Group (2010) CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomised Trials. PLOS Medicine 2010

Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009) Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLOS Medicine 2009


NEW BOOK from EQUATOR: Guidelines for reporting health research: a user’s manual

Front cover of the Guidelines for Reporting Health Research manual

The EQUATOR Network is very proud to announce the publication of a new book, Guidelines for Reporting Health Research: a user’s manual.

Written by the authors of health research reporting guidelines, in association with the EQUATOR Network, this is a one-stop shop to help you choose and correctly apply the appropriate guidelines when reporting health research, ensuring clear, transparent, and useful reports.

It is an invaluable resource for researchers in their role as authors, and also an important reference for editors and peer reviewers.

It includes an introduction to reporting guidelines, an overview of the importance of transparent reporting, the characteristics of good guidelines, and how to use reporting guidelines effectively in reporting health research.

This hands-on manual also describes over a dozen internationally recognized published guidelines such as CONSORT, STROBE, PRISMA and STARD in a clear and easy-to-understand format.

Introduction to medical research: essential skills 2: Research design and protocol

4 October 2014:  Module 2: Research design and protocol

Speakers:
Dr Iveta Simera, Head of Programme Development, EQUATOR Network
Professor Doug Altman, Director, Centre for Statistics in Medicine and the EQUATOR Network
Dr Gary Collins, Associate Professor and Deputy Director, Centre for Statistics in Medicine

Programme (speaker, description, resources to download):
- Welcome and recap of last session (Iveta): the research question and PICO. Download: Developing a research question (pdf)
- Introduction to study design (Doug): presentation and discussion. Download: Introduction to study design (pdf)
- Critical appraisal of study design (Doug): practical on DVT paper in the Lancet
- Developing a protocol (Gary): presentation and discussion. Download: Developing a protocol (pdf)
- Measurement issues and avoiding the pitfalls (Doug): the O in PICO. Download: The importance of measurement (pdf)


* Special note: To get involved in the development of Cochrane systematic reviews, please see more information on the Cochrane website (Getting involved).

For those looking for an easy way to get involved with Cochrane, join the growing community of Embase screening volunteers and get instant hands-on experience of a real Cochrane task which really needs doing. There is no minimum time commitment. Read more by clicking the link below, and sign up today!

Become an EMBASE screener – Cochrane’s innovative EMBASE project is open for all budding volunteers!

Introduction to medical research: essential skills 1: research planning – before you start your research

13 September 2014: Research planning: before you start your research

Speakers:
Professor Chris Pugh, Director, Oxford University Clinical Graduate School
Dr. Sally Hopewell, Senior Research Fellow, Centre for Statistics in Medicine
Dr. Iveta Simera, Head of Programme Development, EQUATOR Network
Mrs Shona Kirtley, Senior Research Information Specialist, EQUATOR Network
Donald M. Mackay, Head of Health Care Libraries, Bodleian Health Care Libraries

Programme (speaker, description, resources to download):
- Welcome and introduction (Chris)
- Overview of research process (Iveta): outline of the whole course; research flowchart; elements of a good research project. Downloads: Powerpoint presentation (pdf); research flowchart
- Overview of ethical and governance issues in clinical research (Iveta): ethical issues and approval; patient involvement; good clinical practice; authorship/agreements; resources. Downloads: Research ethics application flowchart; Declaration of Helsinki
- Introduction to systematic reviews (Sally): introduction to systematic review methodology and meta-analysis; Cochrane*. Downloads: Systematic reviews Part 1; Systematic reviews Part 2 (pdfs)
- Formulating the research question (Sally): group work and discussion. Practical: flying socks. Download: PICOS Exercise Flying Socks
- Literature searching (Shona): identifying sources to search; developing search strategies; controlled vocabularies; free-text; syntax; search filters; searching multiple databases. Download: Powerpoint presentation (pdf)
- Online resource access (Donald): NHS (Athens) and University resources; support from Outreach librarian teams locally; HDAS intro/demo. Download: Powerpoint presentation (pdf)

 

* Special note: To get involved in the development of Cochrane systematic reviews, please see more information on the Cochrane website (Getting involved).

For those looking for an easy way to get involved with Cochrane, join the growing community of Embase screening volunteers and get instant hands-on experience of a real Cochrane task which really needs doing.  There is no minimum time commitment.  Read more by clicking the link below, and sign up today!

Become an EMBASE screener – Cochrane’s innovative EMBASE project is open for all budding volunteers!

 

The CROWN Initiative: Journal editors lead the way in the effort to standardise outcome measures in women’s health research

CROWN Initiative logo

Fifty-six of the top journals in the field of obstetrics and gynaecology are leading the CROWN Initiative – an international effort to encourage researchers to collect and report core outcome sets in studies of key conditions in women’s health. A recent systematic review (Meher 2014) found that 103 randomised trials of interventions to prevent pre-term birth reported no fewer than 72 different outcomes. This variability makes it almost impossible to synthesise the evidence from trials on the same topic, and highlights the urgent need for the development of core outcome measures not only in this field, but in others. The COMET Initiative has been promoting the development of core outcome measures since its launch in January 2010 and is a key supporter of the CROWN Initiative.

1. Meher S, Alfirevic Z. Choice of primary outcomes in randomised trials and systematic reviews evaluating interventions for preterm birth prevention: a systematic review. BJOG 2014; DOI: 10.1111/1471-0528.12593

Exciting new collaboration between EQUATOR and the Global Health Network

The Global Health Network logo

EQUATOR is delighted to announce a new collaboration with the Global Health Network, in recognition of our shared goals of capacity building and of improving the conduct and reporting of health-related research worldwide.

A key area of collaboration will be in education and training in low and middle income countries.  We will collaborate on further development of the GHN regional faculties – educational centres of excellence – providing on-the-ground support and training activities tailored to regional needs.

We will also work together to build on the GHN’s excellent online eLearning presence and IT infrastructure to create new accessible online certificated training courses.  These will complement the existing EQUATOR toolkits, our comprehensive database of reporting guidelines, and other resources for authors, editors, healthcare librarians, and teachers currently available on the EQUATOR website.

This new collaboration represents an exciting opportunity for the EQUATOR Network to extend its activities into new regions.  It particularly complements our existing partnership with the Pan American Health Organization, focused on raising research reporting standards in Latin America and the Caribbean.

The Global Health Network is based at the Centre for Tropical Medicine, part of the Nuffield Department of Medicine at the University of Oxford.

Download the Memorandum of collaboration between The Global Health Network and EQUATOR (PDF).