Introducing the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis Initiative: The TRIPOD Statement
06/01/2015
Gary Collins, Associate Professor and Deputy Director of the Centre for Statistics in Medicine, University of Oxford, introduces TRIPOD.
Decisions based on clinical predictions are routinely made throughout medicine and at all stages in pathways of health care. For example, in the diagnostic setting, predictions are made as to whether a particular disease is present, informing the referral for further testing, initiating treatment, or reassuring patients that a serious cause for their symptoms is unlikely. In the prognostic setting, predictions can be used for planning lifestyle or therapeutic decisions based on the risk of developing a particular outcome over a given time period. Yet making a diagnostic or prognostic prediction is challenging and is rarely based on a single risk factor, test result or symptom.
The multifactorial nature of clinical prediction makes it difficult for doctors to simultaneously and subjectively weigh multiple risk factors to produce a reliable and accurate estimate of risk. Furthermore, given that individual doctors see relatively few cases and are subject to cognitive biases, it is unsurprising that numerous studies have shown that doctors are generally poor prognosticators.
However, doctors are increasingly using multivariable prediction models, often on the basis of recommendations in national clinical guidelines, to support and guide the clinical decision-making process. A clinical prediction model is a mathematical equation that relates multiple predictors for an individual to the probability (or risk) that a particular disease or condition is present or will occur in the future. Well-known prediction models include the Framingham Risk Score, Apgar Score, Ottawa Ankle Rules, EuroSCORE, Nottingham Prognostic Index and the Simplified Acute Physiology Score (SAPS).
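To make the idea concrete, many such models take the form of a logistic regression equation: predictor values are combined into a weighted score, which is then converted into a probability. The sketch below is purely illustrative; the predictor names and coefficients are entirely hypothetical and are not taken from any of the models named above.

```python
import math

def predicted_risk(predictors, intercept, coefficients):
    """Combine predictor values into a linear score, then convert it
    into a probability via the logistic (inverse-logit) function."""
    linear_score = intercept + sum(
        coefficients[name] * value for name, value in predictors.items()
    )
    return 1.0 / (1.0 + math.exp(-linear_score))

# Hypothetical coefficients, for illustration only (not a real model):
coefs = {"age": 0.05, "sbp": 0.02, "smoker": 0.7}
patient = {"age": 60, "sbp": 140, "smoker": 1}
risk = predicted_risk(patient, intercept=-7.5, coefficients=coefs)
print(round(risk, 3))  # → 0.269
```

Reporting a model transparently, in the spirit of TRIPOD, amounts to publishing everything needed to reproduce this calculation: the predictors, their coefficients, and the intercept.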
Introducing The TRIPOD statement
The last 10-15 years have seen an explosion in the number of published articles describing the development (and occasionally the validation) of clinical prediction models. Particular clinical areas have seen considerable numbers of models developed for the same outcome (e.g. diabetes, TBI, prostate cancer). Developing a prediction model can be very easy: all it takes is existing data (collected for a different purpose) and a statistical package. Clearly, this is an oversimplification, but, cynically, many prediction models are developed with no compelling clinical need, and thus with little intention of ever being used, merely as an easy publication to add to one's curriculum vitae. In areas where many models have been developed, deciding which one to use is difficult. The choice is made harder still because many models are presented under the guise of being a new discovery, with existing models often ignored and rarely compared against.
To evaluate methodological conduct and reporting, we conducted a number of systematic reviews of studies describing the development or validation of multivariable prediction models across different medical areas, and found reporting to be particularly poor. What was surprising when we analysed the results from these and other systematic reviews was the clear lack of crucial information being presented. Without full and transparent reporting, evaluating, implementing and synthesizing the results (see CHARMS Checklist) from such studies is problematic.
At the most basic level, authors were developing models yet failing to report the actual model, so that other researchers could neither test it nor apply it to their patients. It is clearly absurd to go to the effort of developing a model and then fail to tell readers what the model is. Published articles with incomplete reporting are unusable, as noted in the recent Lancet series on reducing waste in research.
It was clear to us that authors, reviewers, editors and readers needed clear guidance on what issues should be included when describing the development and validation of a prediction model, which led us to develop the TRIPOD Statement.
The TRIPOD Statement is an annotated checklist (PDF) of items that arose from a systematic review of the literature and was further reduced and refined through discussions during a 3-day consensus meeting in 2011 with international experts in prediction modelling (statisticians, epidemiologists, clinicians and journal editors). The resulting checklist comprises 22 items regarded as essential for good reporting of studies developing or validating multivariable prediction models. Authors of published reports of studies describing the development, validation or updating of a prediction model should ensure that all items in the checklist are addressed somewhere in the article.
The recommendations within TRIPOD are guidelines only for reporting research and do not prescribe how to develop or validate a prediction model. Furthermore, the checklist is not a quality assessment tool to gauge the quality of a multivariable prediction model, for which the upcoming PROBAST risk of bias tool will be available.
Many prediction models are (unfortunately) developed without involving either a statistician or epidemiologist, and therefore providing guidance that can be readily understood by study investigators of varying levels of technical (e.g. methodological) experience was of paramount importance. Furthermore, to motivate authors (also peer reviewers and editors) to use the checklist, we also aimed to keep the checklist as brief as possible, whilst ensuring all the key details are clearly reported.
Whilst the TRIPOD Statement is foremost guidance for reporting, we produced an accompanying and extensive 22,000-word Explanation & Elaboration article discussing not only the rationale and examples of good reporting but also methodological aspects for investigators to consider when developing, validating or updating a prediction model. One of the challenges we faced in providing such information on suitable methodology was to give a balanced account of competing approaches, particularly as there is no clear consensus on many aspects of developing or validating a prediction model, and the methodology in this field is continually evolving. Conscious efforts were made to be neutral by describing both the advantages and disadvantages of various methods, cautioning against methodologically weak approaches without dictating how study investigators should conduct their study.
To increase the visibility of the TRIPOD Statement, we are co-publishing the article simultaneously in 11 leading general medical and specialty journals, including the Annals of Internal Medicine, BJOG, BMC Medicine, British Journal of Cancer, British Journal of Surgery, British Medical Journal, Circulation, Diabetic Medicine, European Journal of Clinical Investigation, European Urology and the Journal of Clinical Epidemiology. We welcome other journals endorsing TRIPOD by including prediction model studies as a distinct study type, including TRIPOD in their instructions for authors, and requiring authors to complete and submit a TRIPOD checklist with their submission.
We have also developed a website (www.tripod-statement.org) where additional information, references, and PDF and Word versions of the checklist are available for download. Journals and organisations endorsing TRIPOD will be listed on the TRIPOD website. For announcements on TRIPOD related information, follow us on Twitter @TRIPODStatement.
The Centre for Statistics in Medicine (University of Oxford) is also home to the EQUATOR Network, which will list TRIPOD among its key reporting guidelines and announce it via the EQUATOR Newsletter, Twitter (@EQUATORNetwork) and LinkedIn.
We believe that if authors adhere to the TRIPOD Statement, readers and potential users of the model will have a transparent and full account of all aspects of the prediction model study, enabling them to critique and fully judge the potential merits of the model.
Gary Collins on behalf of the TRIPOD steering group (Gary Collins and Doug Altman, University of Oxford, UK; Karel Moons and Hans Reitsma, UMC Utrecht, The Netherlands)