Reporting guidelines under development for other study designs
Other study designs or clinical areas (year of registration in brackets):
- Epidemiology, primary care, public health and clinical outcomes
- Artificial intelligence and machine learning, modelling, coding, informatics
- Reporting guidelines for clinical practice guidelines
- Laboratory, studies in vitro and using devices
- Education and training
- Health economics
- Surgery
- Qualitative or mixed-methods studies
- Alternative, complementary and traditional Chinese medicine
- Other study designs and clinical areas
- Other types of documents linked to journal articles
__________
Epidemiology, primary care, public health and clinical outcomes
ConPHES – Consensus-based Process evaluation reporting guidelines for public Health
Co-designing for child health (2021)
Guidelines for transparent reporting of healthcare-associated infection outbreaks (2021)
STARD-NBS – Standardized Reporting of Newborn Screening Outcomes (2023)
PROM-GRIP – Patient-Reported Outcome Measures – Guideline for Reporting In clinical Practice (2024)
CAST-D – Reporting Guideline for Case Study in Public Health and Medicine related to Disasters (2024)
STLER: Standards for Legal Epidemiology Reporting (2025)
__________
Artificial intelligence and machine learning, modelling, coding, informatics
POPCORN – NCD – Population Health Modelling Consensus Reporting Network for noncommunicable diseases (2019)
REMARK update – Reporting recommendations for diagnostic and prognostic factor studies (2021)
TILT – Three-Dimensional Model Reconstruction (2021)
Reporting Guidelines for Artificial Intelligence Research in Mental Health (2024)
CINEX – A reporting guideline for clinical information extraction studies (2024)
TRoCA – Transparent Reporting of Cluster Analyses (2024)
__________
Reporting guidelines for clinical practice guidelines
RIGHT-P Statement – An extension of RIGHT for clinical practice guideline protocols (2021)
RIGHT-MuSE: Reporting guidance for interest-holder engagement in practice guidelines (2025)
RIGHT2.0: Reporting Items for practice Guidelines in HealThcare (2026)
__________
Laboratory, studies in vitro and using devices
CRIS Statement – Checklist for Reporting In-vitro Studies (2017)
REPORT – Reporting Recommendations for Research on Human Tissues (2019)
Accelerometry reporting guidance (2022)
RATE-VR – Reporting of eArly-phase Trials Evaluating Virtual Reality applications in healthcare (2022)
INVIRTUE – Reporting Guidelines for Virtual Reality Intervention Studies (2024)
REFORM – REporting guideline For Organoids pRe-clinical experiMents (2025)
__________
Education and training
ReCoMuTe – A Checklist for Reporting Complex Multi-professional Healthcare Teamwork Training (2019)
TiDIRP – Training in Dissemination and Implementation Research and Practice (2021)
TRAST – Theory Reporting in heAlth- STudies (2025)
__________
Health economics
Checklist for the conduct and reporting of micro-costing studies in health care (2014)
CHEERS ClimatE – Consolidated Health economic evaluation reporting standards Climate Extended (2023)
__________
Surgery
SUVIRE – SUrgical VIdeo REporting Guidelines (2021)
__________
Qualitative or mixed-methods studies
CHAT – CHecklist for conversation Analysis reporTing (2018)
STREAM – Standards for Rapid Evaluation and Appraisal Methods (2022)
CORMMIX – Consolidated Checklist for Reporting Mixed-Methods Research (2023)
DeNote – A data note reporting guideline for qualitative health and social care research datasets (2024)
QRDC-RG – Qualitative Remote Data Collection Reporting Guideline (2025)
GRAMMS 2.0: Updating the good reporting of mixed methods study reporting guidelines (2025)
MENTOR: Mental Health Qualitative Reporting Guideline (2025)
__________
Alternative, complementary, and traditional Chinese medicine
Reporting Checklists for Diagnostic Criteria for TCM Syndromes (2021)
STRIMAM – STandards for Reporting Interventions in Moxibustion using Animal Models (2023)
RGAR-TCM – Reporting Guideline for Animal Research in the Field of Traditional Chinese Medicine (2023)
TREATS-RG – Transparent Reporting for Essential oil & Aroma Therapeutic Studies—Reporting Guideline (2024)
TAE-ACU – The Terminology criteria of Adverse Events for ACUpuncture (2024)
Blinding in Acupuncture Clinical Trials: guidelines for assessment and reporting (2024)
Standards of Reporting the Network Pharmacology of Chinese Medicine (2026)
Guidelines for ethical review of acupuncture and moxibustion clinical research (2026)
__________
Other study designs and clinical areas
PR-Rx – Press Release Reporting Exemplar (2021)
CASSR/CARSR – Guidelines for Case Series/Case Reports with a Systematic Review (2021)
RESOME – Reporting Guidelines for Social Media Interventions (2021)
STEP – STandard reporting guideline of Evidence briefs for Policy (2021)
COMPARE – StatistiCal analyses and repOrting in cardiac output method coMPARison studiEs (2021)
PRECOG – PREdiction of COunterfactuals Guideline (2022)
TRIPOD-P – TRIPOD Statement for Protocols (2022)
REALISE – Improving the REporting of totAL dIet StudiEs (2022)
CaReD – Case Report Dentistry (2022)
GLOBAL – Guidance List for the repOrting of Bibliometric AnaLyses (2022)
Updating the Reporting Guidelines for Music-based Interventions (2023)
CORES – CO-creation REporting Standards for research (2023)
ECoHealth – Reporting Guideline for Assessments of the Environmental Consequences of Healthcare (2023)
SEB – Standards for EEG Biomarkers (2023)
PRECISE – The Preferred Components for Co-design in Research (2023)
SESAME – Standard Elements in Studies of Adverse events and Medical Error (2023)
Standardised Reporting of the Carbon Footprint of Clinical Pathways (2024)
GUIDE-Rehab – GUideline for Intervention DEscription in Rehabilitation (2024)
START-EDI – STAndards for ReporTing Equality, Diversity and Inclusion (2024)
SPARK – Scale Up Research Reporting Checklist (2024)
Reporting checklist for EuroQol instruments (2025)
IMPACT Framework – Reporting the impact of patient and caregiver involvement in health research (2025)
CheRIHoSS – Checklist of Reporting Items for Horizon Scanning Studies (2025)
CONSENS – Recommendations for improving reporting of CONtextual analySis in implEmeNtation Science (2025)
SHIME – Standards for Healthcare Impact Evaluation (2025)
CROP – Checklist for RepOrting Process evaluations in healthcare (2025)
WATER – Guideline for Wastewater Analysis and Tracking in Epidemiological Reporting (2025)
GRRAS-COSMIN: Guidelines for Reporting Reliability and Agreement Studies-Consensus-based Standards (2025)
CORE: Community Engaged Research Reporting Guidelines (2025)
Reporting items for evidence summary (2025)
CORTRE: COmplete Reporting for Transparent Reproducibility Efforts (2025)
COMET-Bib: Comprehensive Methodological Transparency for Bibliometric Studies (2026)
Evidence-Based Practice Policy Analysis and Program Evaluation Reporting Guidelines (2026)
REFORH: REporting guideline For ORganoids in Human clinical studies (2026)
Other types of scientific documents linked to journal articles
SPRINT: Standards for Presenting and Reporting clinical InterveNtions Televisually (2023)
Checklist for the conduct and reporting of micro-costing studies in health care (registered 9 June 2014, updated 2019 and 2021)
A protocol for this guideline was published in 2016 (PMID 27707687). In April 2019, the group informed EQUATOR that the reporting guideline was still under development. A systematic review was published in 2021 in the journal Health Economics Review.
- Contact: Dr. Jennifer Prah-Ruger, [email protected]
GLOBAL – Guidance List for the repOrting of Bibliometric AnaLyses (registered 18 November 2022)
The Guidance List for the repOrting of Bibliometric AnaLyses (GLOBAL) plans to develop minimum guidelines for the reporting of bibliometric and scientometric analyses, helping to promote transparency and completeness in the reporting of bibliometric and related analyses and providing a framework for authors to report methods and results. According to the developers, the number of bibliometric analyses published in the peer-reviewed literature has grown considerably, especially over the past decade, and their quality varies greatly. Despite this growth, few published articles provide guidance on how a bibliometric analysis ought to be reported, and to their knowledge these articles have been based on the opinions and experiences of individual researchers rather than on best evidence-informed practices.
The group proposes to perform a literature review and expert consultation to generate a series of candidate items, which will then be assessed by an international multi-stakeholder group in a multi-stage Delphi survey and refined through a checklist pilot. They started the work in the summer of 2022 and plan to publish the guidance, as an open-access document, in 2024. They also plan to publish a protocol.
- Contact: Jeremy Y. Ng, Centre for Journalology, Ottawa Hospital Research Institute. E-mail: [email protected]
A reporting tool for evidence-based guidelines on Chinese medicine for public health emergencies: an extension of the RIGHT statement (registered 9 January 2023)
A group of Chinese researchers secured funding to conduct this study, which aims to develop a reporting standard for scientific reports on the formulation process of evidence-based guidelines for Chinese medicine in response to public health emergencies. This will be an extension of the RIGHT Statement, and the authors report that they obtained approval for the development from the RIGHT working group.
The work started in April 2022 with a literature review of existing clinical practice guidelines on Chinese medicine for public health emergencies, including a content analysis, a reporting quality assessment, and a systematic review of methodological research, aiming to generate the initial checklist. In a second step, semi-structured interviews with potential users of the reporting tool will be conducted to revise the checklist, followed by a Delphi exercise.
Developers plan to publish this reporting guideline in 2024, as an open access document.
- Contact: Xiaojia Ni, Guangdong Provincial Hospital of Chinese Medicine, and Yaolong Chen, from Lanzhou University. E-mails: [email protected]; [email protected].
STRIMAM – STandards for Reporting Interventions in Moxibustion using Animal Models (registered 9 January 2023)
Moxibustion is an external treatment modality of traditional Chinese medicine, based on burning moxa (a woolly mass) next to acupoints to stimulate meridians. Animal studies of moxibustion are conducted, but their reporting can be suboptimal: the size of the moxa column, the duration of moxibustion, the selection of acupoints, and the processing of smoke are often not described in sufficient detail.
A group of researchers from Shanghai is developing this reporting guideline for animal studies on interventions with moxibustion. They plan to approach the ARRIVE group to make it an ARRIVE extension. They started with a literature review to define the draft checklist and the lexicon to be used. After internal discussions, the group will conduct 2–3 rounds of Delphi to finalise the guidance. The reporting guideline is planned to be published before 2024, as an open-access document.
- Contact: Huan-Gan Wu, Li-Ming Chen, Shanghai Research Institute of Acupuncture and Meridian, Yueyang Hospital of Integrated Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine. E-mails: [email protected]; [email protected]
Updating the Reporting Guidelines for Music-based Interventions (registered 9 January 2023)
Published in 2011, the Reporting Guidelines for Music-Based Interventions focus on the description of the music intervention and are intended to be used in conjunction with methods-specific reporting guidelines (e.g., CONSORT, TREND, STROBE, PRISMA). The guideline is now under review for updating. The project involves two survey rounds with a pool of researchers on music and music-based interventions to reach consensus on the updated checklist items. The first round started in November 2022. The work is funded, and the group plans to publish the update in 2023. Authors of the original guideline are involved.
- Contact: Sheri L Robb, Indiana University School of Nursing. E-mail: [email protected]
GRACE – Guidelines for the Reporting of A Clinical Electroencephalogram (EEG) in the medical literature (registered 27 February 2023)
The results of clinical electroencephalograms (EEGs) are frequently described in case reports and other types of studies in the scientific literature. These can add crucial diagnostic and prognostic information. However, the way in which they are described is highly variable.
The group leading the GRACE reporting guideline development aims to provide guidance on what information authors should provide about a clinical EEG when it is written up for the medical literature. The guidance will help authors describe EEG methods and results in trials, case reports and series, other observational studies, and diagnostic accuracy studies.
The group includes representation from clinical neurophysiology, neurology, neurosurgery, psychiatry, and paediatric neurology. They have already reviewed the literature and plan to conduct a two-round Delphi survey and a consensus meeting. They also plan to pilot the guideline before publication.
The GRACE reporting guideline development started in September 2022, so far without specific funding. The group plans to publish in 2023 as an open-access paper.
- Contact: Jonathan Rogers, Division of Psychiatry, University College London. E-mail: [email protected]
REMARK update – Reporting recommendations for diagnostic and prognostic factor studies (registered 2021; updated 5 February 2026)
The Reporting Recommendations for Tumor Marker Prognostic Studies (REMARK) guideline was published in 2005 to encourage complete and transparent reporting of studies that investigate the prognostic value of a tumor marker. The recommendations in REMARK can be applied more broadly to include diagnostic and predictive markers, as well as markers outside the field of oncology. To increase uptake and improve the reporting of any marker study, we will develop an updated, broader-in-scope version of the guideline.
The updated guidance will be developed following published guidance from the EQUATOR Network and will comprise five stages. In stage one, a list of potential items will be drafted by a steering group involving researchers with expertise in diagnostic and prognostic prediction. In stage two we will consult a diverse group of key stakeholders using a Delphi process to identify which potential items should be considered. Stage three will be a consensus meeting involving the steering group and other individuals with relevant expertise to consolidate and prioritise key items to be included in the updated guidance. Stage four will involve finalising the checklist and writing the accompanying explanation and elaboration. In the fifth and final stage, we will disseminate the guidance via journals, conferences, blogs, websites, and social media. The updated recommendations will enhance reporting standards for diagnostic and prognostic marker studies across clinical fields.
- Contact: Gary Collins, University of Birmingham. E-mail: [email protected]
- Contact: Richard Riley, University of Birmingham. E-mail: [email protected]
- Contact: Ben Van Calster, KU Leuven. E-mail: [email protected]
CANGARU – ChatGPT and Artificial Intelligence Natural Large Language Models for Accountable Reporting and Use Guidelines (registered 29 March 2023)
The CANGARU checklist aims to provide a comprehensive evaluation of the use of ChatGPT and large language models (LLMs) in healthcare and scientific research and its impact on scientific writing. ChatGPT has gained attention in the medical and scientific community since its launch in December 2022 as a potential tool for decreasing workload while maintaining the workflow. However, its use has raised concerns about its impact on the integrity of scientific literature. CANGARU will focus on standardising the reporting of methods and results for clinical and scientific studies using ChatGPT and other LLMs. The tool will list the most relevant technical details a data scientist needs to report for future reproducibility.
The group developing CANGARU, which includes representatives from EQUATOR, COPE, and editorial members of leading biomedical and scientific journals, will conduct a systematic review of the literature, already registered in the PROSPERO database (CRD42023406875). The topics identified will be ranked through a Delphi exercise, after which a team of experts will evaluate the guideline. The panel will include scientists, clinicians, researchers, statisticians, computer scientists, engineers, methodologists, and journal editors. The group plans to publish the reporting guideline in March 2024 as an open-access document, later translated into Spanish, French, and Chinese.
- Contact: Giovanni E. Cacciamani, USC Institute of Urology, University of Southern California. E-mail: [email protected]
APOSTEL-AS – Advised Protocol for Optical coherence tomography (OCT) Study Terminology and Elements – extension for studies involving Anterior Segment OCT (registered 29 March 2023)
This project is for the development of an extension to the APOSTEL reporting guideline, specifically focused on anterior segment structures in optical coherence tomography (OCT), such as corneal thickness, angle parameters, and quantification of inflammation. The ophthalmologists leading the project will form a panel of experts (authors of published studies reporting quantitative AS-OCT data) and conduct a Delphi study to evaluate the APOSTEL recommendations. In a second stage, an online consensus meeting will review the list of items based on the changes proposed, and a second Delphi round will give the panel the opportunity to accept or reject items.
The development will start in May 2023, when the group expects to have responses from the original authors of APOSTEL. They plan to publish the reporting guideline in late 2025 as an open-access document.
- Study protocol is available here.
- Nomenclature study results are available here.
- Contact: Ameenat Lola Solebo, National Institute for Health Research (NIHR); UCL Great Ormond Street Institute of Child Health. E-mail: [email protected]
CORES – CO-creation REporting Standards for research (registered 17 April 2023)
Co-creation is an overarching and encompassing approach, which is inclusive of the different stages of co-planning, co-management, co-design, co-production and co-evaluation. Co-creation research is based on Participatory Action Research where co-creators are considered co-researchers rather than participants. The group involved in the development of this reporting guideline explains that co-creation research uses methods “focusing more on creative tools to encourage innovation, change and disruptive thinking”. They aim to create reporting standards for co-creation research, as publications have highlighted the need for them. There may be some overlap with intervention development, service quality improvement, qualitative research and PPI (patient and public involvement).
This project, funded by the Erasmus programme, will start with a review of reviews about co-creation; a 3–4-round Delphi survey will then be conducted with an international expert panel, followed by consensus meetings and gathering of feedback from the panel. The final checklist will be translated into five languages using robust methodology (including back-translation). The final documents will be published by the end of 2024, as open-access papers. A website will be launched soon.
- Read the CORES Protocol
- Contact: Gemma Pearce, Coventry University. E-mail: [email protected]
ECoHealth – Reporting Guideline for Assessments of the Environmental Consequences of Healthcare (registered 24 May 2023)
The healthcare sector contributes to greenhouse gas and other environmental emissions, and there has been an increase in publications about the “sustainable healthcare space”, including healthcare products, services, and systems. However, there is no reporting standard for these studies. The ECoHealth project aims to increase the quality of publications to enable critical appraisal, interpretation and replication.
The development of this reporting guideline started in 2022, funded by Harvard Medical School, with a review of the literature; a three-round Delphi survey is planned now that ethics approval has been obtained. The Delphi panel will include healthcare practitioners, environmental and industrial engineers, healthcare administrators, healthcare product regulatory agency representatives, and journal editors. The group plans to publish the reporting guideline in 2024.
- Contact: Jonathan E. Slutzman, Massachusetts General Hospital, Harvard Medical School. E-mail: [email protected]
ORIOCE – Guidelines for transparent reporting of Outbreak Reports and Intervention Of Community Epidemics (registered 4 July 2023)
Outbreak reports offer a timeline-based narrative of methods, findings, and public health responses in active or recently concluded outbreaks or public health inquiries. These reports typically encompass the reason for the investigation, symptoms or pathogens involved, time and place of the outbreak, identification of affected individuals, as well as the investigation methods, results, and implications for public health. The ideal structure for these reports is a narrative format that sequentially presents the events. A group of physicians in the Department of Family Medicine, Kangdong Sacred Heart Hospital, South Korea, started developing this reporting guideline for reports of outbreaks of transmissible diseases and the community-level interventions to mitigate them. They have funding from the Korea Disease Control and Prevention Agency and plan to publish the guideline, as an open-access document, in 2023.
- Contact: Soo Young Kim and Wonyoung Jung, Department of Family Medicine, Kangdong Sacred Heart Hospital, Hallym University, South Korea. [email protected]; [email protected]
CORMMIX – Consolidated Checklist for Reporting Mixed-Methods Research (registered 11 July 2023)
This project aims to develop a Consolidated Checklist for Reporting Mixed-Methods Research (CORMMIX) to enhance the reporting quality and transparency of studies using both qualitative and quantitative methods (mixed methods). CORMMIX will provide a structured framework covering all aspects of the research process, from planning and data collection to analysis and reporting. The first steps of the development started in March 2023 with a literature review and the generation of a pool of items to report; a Delphi survey will follow. The team has funding from Qatar University through a student grant, and they plan to publish the reporting guideline as an open-access document.
- Contact: Muhammad Abdul Hadi, Department of Clinical Pharmacy and Practice, College of Pharmacy, Qatar University. E-mail: [email protected]
SEB – Standards for EEG Biomarkers (registered 14 August 2023)
Electroencephalogram (EEG) is a test used to help diagnose and monitor conditions affecting the brain, especially epilepsy. With the endorsement of the International Federation of Clinical Neurophysiology (IFCN) in November 2022, a group of 11 collaborators is developing this reporting guideline; literature reviews, a Delphi study (which has already started), and a final consensus meeting are planned. The final product will guide authors, reviewers, and editors toward specifying the technical parameters important for reproducible diagnostic accuracy studies and prediction models.
The guideline will focus on studies specifically related to the development and validation of EEG-based biomarkers, including a number of methodological details specific to EEG (both during a cognitive task and during spontaneous/resting-state recording). The group notes, however, that EEG-based biomarkers are used in a number of contexts beyond diagnosis: for example, real-time monitoring of physiological changes, prediction of response to an intervention, and prognostication of natural history. The group plans to publish the guidance in 2023 in the IFCN's journal, Clinical Neurophysiology.
- Contact: Joshua Ewen, Kennedy Krieger Institute/Johns Hopkins University and Lurie Children’s Hospital/Northwestern University. E-mail: [email protected]
STARD-NBS – Standardized Reporting of Newborn Screening Outcomes (registered 10 October 2023)
This extension to STARD will clarify the reporting of newborn screening studies. The rapid pace of advancement in newborn screening tests and diagnostic strategies has led to significant variation in how screening studies are reported, making both interpretation of individual studies and synthesis across studies difficult. This extension will clarify the elements to be reported, which will ultimately improve the advancement of newborn screening.
The project is at an early stage. The development group plans to review the literature to identify opportunities to clarify reports of newborn screening and then use the review results to generate a list of items for discussion during a Delphi consensus process. The guideline will then be written and piloted. The group plans to publish the guideline as an open-access document, which will be a product of the Evidence Review Group for the U.S. Advisory Committee on Heritable Disorders in Newborns and Children.
- Contact: Alex R. Kemper, Nationwide Children’s Hospital, Columbus, Ohio, US, [email protected]
PRECISE – The Preferred Components for Co-design in Research (registered 7 November 2023)
The development of this reporting guideline started in September 2021, funded by CIHR (Canadian Institutes of Health Research) since March 2023, and is led by a team in Toronto, Canada. The group states that they are developing a reporting guideline for research using co-design, involving designers and "people not trained in design". The group defines "design" as "applying ideas, evidence and knowledge in novel ways" and explains that the new reporting guideline will not necessarily be "relevant to co-production or co-creation" (as in CORES, under development). The resulting guideline will help authors report on the values and principles that are essential to the co-design process in health research, including complex interventions.
In the development process, the group plans a Delphi process and consensus meetings for discussion and for determining the final items to be included and their wording. They plan to publish the reporting guideline in 2024.
- Read the protocol
- Contact: Sarah Munce, KITE-Toronto Rehabilitation Institute, University Health Network. E-mail: [email protected]
CHEERS ClimatE – Consolidated Health economic evaluation reporting standards Climate Extended (registered 22 November 2023)
This project aims to develop an extension of the CHEERS (Consolidated Health Economic Evaluation Reporting Standards) reporting guideline for studies where health economic evaluation provides information for calculating greenhouse gases, or the climate impact (climate costs) of health technologies, as carbon emissions affect health economic evaluations.
A group from the University of Bremen is finishing the development of a reporting guideline that aims to facilitate climate footprinting alongside health economic evaluation. They started the project in 2022 and plan to submit a paper for publication at the end of 2023.
- An early version of the project was presented at the German Society for Health Economics annual conference. An abstract in German is available via the link: https://www.dggoe.de/konferenzen/2023/programm/52/sitzung/176
- The group registered the project with OSF: https://doi.org/10.17605/OSF.IO/2AW8N
- Contact: Oliver Lange, University of Bremen, Institute of Public Health and Nursing Research, Department Health Care Management. E-mail: [email protected]
RGAR-TCM – Reporting Guideline for Animal Research in the Field of Traditional Chinese Medicine (registered 14 December 2023)
The available ARRIVE 2.0 reporting guideline helps researchers report in vivo experiments with animals. However, it is not focused on studies of traditional Chinese medicine (TCM). A group from the Center for Evidence-Based Medicine, Lanzhou University, decided to develop a reporting guideline for research with animals focused on the different sources, varieties, and processing methods of TCM, including detailed information on the composition and formula of medications.
The group will conduct a literature review, and will then run a Delphi survey and a face-to-face meeting in 2024. The project is funded as a Lanzhou Science and Technology Planning Project. The plan is to publish the reporting guideline as an open access document in 2025.
- Contact: Bin Ma, Institute of Evidence-Based Medicine, School of Basic Medicine, Lanzhou University. E-mail: [email protected]
SESAME – Standard Elements in Studies of Adverse events and Medical Error (registered 14 December 2023)
Literature reviews have already noted significant heterogeneity in the definitions and reporting of studies that focus primarily on adverse events and medical error. These studies have adverse events as their primary outcome, not as additional variables in larger trials or observational investigations. A group at Washington University in St. Louis School of Medicine, United States, is developing a reporting guideline. Their aim is to support complete reporting and the standardisation of definitions and key terms in these manuscripts, which will allow comparisons, longitudinal analyses, transparency, and actionability of findings. The group started the development in March 2023 and plans to publish the guidance as an open-access document in 2024.
- Contact: Richard T. Griffey, Washington University. E-mail: [email protected]
SPRINT: Standards for Presenting and Reporting clinical InterveNtions Televisually (registered 21 December 2023)
The SPRINT guidelines are under development to help authors of scientific manuscripts generate televisual information for scientific content, such as videos linked to journal articles. The videos addressed by this guidance are of clinical interventions (for example, surgery) and a subset of health research videos, including case reports. The authors expect the guidance will be integrated into journal guidelines for presenting televisual information. They aim to develop guidelines for the reporting of videos of clinical interventions by health researchers, to ensure high-quality and technically accurate videos that can improve communication, enable replication, safely inform, educate, enhance learning curves, and drive innovation in healthcare.
The developers plan to evaluate the current reporting landscape of available videos, define reporting and quality standards for these videos via interviews with stakeholders and a Delphi exercise, and evaluate the impact of implementing the standards. They plan to publish the guidance at the end of 2024.
- Contact: Hutan Ashrafian, Imperial College London and Leeds University Business School. E-mail: [email protected]
THESES-M: Transparent and Holistic Evaluation Standards for Educational Studies in Medicine: A reporting guideline for medical theses (registered 23 May 2025)
Medical theses are a core component of health education programs, particularly at undergraduate and postgraduate levels. However, there is currently no international reporting guideline that supports the transparent, structured, and reproducible development of these academic works. Medical theses often lack uniformity in methodological reporting, omit key ethical or analytic details, and face difficulties in dissemination or conversion into publishable manuscripts.

THESES-M (Transparent and Holistic Evaluation Standards for Educational Studies in Medicine) is a reporting guideline designed to guide students, supervisors, and reviewers in the development and assessment of medical theses. The guideline includes a checklist of approximately 30 items covering the essential sections of a thesis (e.g., title, introduction, methods, results, discussion, ethics). Each item is developed based on critical analysis of existing guidelines (e.g., STROBE, CARE, PRISMA) and adapted to the pedagogical and methodological context of medical education.

This guideline addresses a currently unmet need and aims to improve the quality, clarity, and impact of medical thesis reports across institutions and countries. By providing a structured framework, THESES-M contributes to research literacy, promotes good scientific practices, and facilitates potential publication of high-quality student research.
- Contact: Geovani López Ortiz, Faculty of Medicine, National Autonomous University of Mexico (UNAM). Email: [email protected]
TREATS-RG – Transparent Reporting for Essential oil & Aroma Therapeutic Studies – Reporting Guideline (registered 5 January 2024)
Development of this reporting guideline started in 2021. Systematic reviews have shown that the evidence for the beneficial effect of aromatherapy in healthcare is inconclusive, as many published studies lack sufficient description of the essential oils, carrier methods, and the intervention itself. This consensus-based reporting guideline will address this gap and is designed to be used alongside other reporting guidelines such as CONSORT, STROBE, and CARE. The group developing this reporting guideline (ARQAT, www.arqat.org) plans to review the literature, conduct a Delphi survey and a face-to-face consensus meeting, and publish the guidance open access by the end of 2025.
- Contact: Marian Reven, West Virginia University and Aromatic Research Quality Appraisal Taskforce (ARQAT). E-mail: [email protected]
Reporting Guideline for Clinical Practice Guidelines for Medical Nutrition Therapy: An Extension of the RIGHT Statement (registered 16 January 2024)
This project will develop a guideline for reporting clinical practice guidelines (CPGs) for interventions involving nutrition therapy. The tool is planned as an extension of the RIGHT Statement. The project leaders, based in the Evidence-Based Nutrition Research Team of West China Hospital, Sichuan University, plan first to run a literature review, searching for reporting guidelines in nutrition and for CPGs (such as RIGHT), as well as for articles assessing the quality of CPGs in nutrition. They will then run Delphi surveys and conduct a consensus meeting to finalise the checklist. Funding support was obtained from West China Hospital of Sichuan University. The group plans to publish the reporting guideline as an open access document in 2025.
- Protocol: https://osf.io/f3x2z/?view_only=680430078b1f42319ad62c69820e5ea3
- Contact: Mengyan Wang, Department of Clinical Nutrition, West China Hospital, Sichuan University. E-mail: [email protected]
Standardised Reporting of the Carbon Footprint of Clinical Pathways (registered 25 March 2024)
The objective of this guideline is to improve the quality and comprehensiveness of reporting on the carbon emissions associated with clinical care pathways, in particular, the reporting of pathways with a component of remote care, often facilitated by the use of digital technologies.
The development started in January 2024, led by a group from Leeds Teaching Hospitals Trust and Brighton and Sussex Medical School. They clarify that the difference between their project and another already under development, ECoHealth, is that ECoHealth aims to evaluate products in healthcare, whereas this new project will cover pathways. They argue there is a need to “standardise what components are included in measuring the carbon footprint of a clinical pathway. That will include the products, medications, clinical appointments etc in that pathway.”
A Steering Group has been formed, aiming to produce a preliminary checklist by the end of 2024, to be published by 2026.
- Contact: Sheryl Wilmott, Leeds Teaching Hospitals Trust, and Mahmood Bhutta, Brighton and Sussex Medical School. E-mails: [email protected] and [email protected]
INVIRTUE – Reporting Guidelines for Virtual Reality Intervention Studies (registered 28 June 2024)
Virtual reality (VR) is increasingly being used in healthcare as an (adjunct) intervention to treat patients. Therapeutic VR could be used for a variety of treatment goals, including patient education, distraction or exposure therapy. The developers plan to produce reporting guidelines for therapeutic VR interventions for patients in healthcare. The group has completed a systematic literature review to identify potential items for the reporting guidelines, based on more than 100 scientific publications of VR in healthcare studies. Preparation for a modified three-step e-Delphi study among experts is underway, and this will be followed by a consensus meeting to finalise the guideline.
The developers plan to publish this reporting guideline in 2025, as an open access document.
- Project details: https://osf.io/hfp4m/
- Contact: Syl Slatman, HAN University of Applied Sciences, Nijmegen, the Netherlands. Email: [email protected]
REFORM – REporting guideline For Organoids pRe-clinical experiMents (registered 27 June 2025)
Organoids, a field that has developed rapidly since the successful cultivation of small intestinal organoids in 2009, are three-dimensional (3D) miniature structures cultured in vitro from stem cells that recapitulate the cellular heterogeneity, structure, and functions of human organs. Because they resemble the original organs and carry human genetic information, organoids hold great promise for pre-clinical experiments compared with traditional animal models and two-dimensional (2D) cell lines, helping to resolve the tension between long culture periods and poor stability and to accelerate clinical translation. In recent years, organoids have been widely applied in pre-clinical biomedical research, especially in tumorigenesis, the establishment of disease models, preclinical drug testing and drug screening, as well as personalised regenerative medicine and gene repair.

However, no comprehensive, standardised guidelines for reporting pre-clinical organoid research have been proposed. This can lead to the omission of important information and key features, or to inconsistencies in reporting, which in turn inhibit reproducibility.

Hence, to improve the quality and transparency of pre-clinical organoid research, the developers aim to produce the REFORM checklist, which will provide guidance on reporting the core elements of pre-clinical organoid studies.
The developers plan to publish this reporting guideline in 2026, as an open access document.
- Contact: Yuting Duan (email: [email protected]) or Zhirui Xu (email: [email protected]), Evidence-based Medicine Center (Clinical Research Center), the Affiliated Traditional Chinese Medicine Hospital, Guangzhou Medical University
GUIDE-AI – Reporting Checklist on the use of Artificial Intelligence in the Health Guideline Enterprise (registered 2 July 2024)
In the specific context of the guideline enterprise, AI may support the planning, development and adaptation, reporting, implementation, impact evaluation, certification and appraisal stages. For example, AI tools have the potential to (i) support the analysis and synthesis of evidence necessary for the formulation of recommendations, (ii) support the assessment of the certainty of evidence, (iii) generate output complementary to that of guideline panel members on question generation, or (iv) improve the quality of dissemination materials and tailor them to different target audiences. However, there is a need for standardisation and transparency in the reporting of the use of these AI tools and processes. The developers therefore plan to produce a guideline for reporting AI use across the whole guideline enterprise (from the planning of guidelines to their implementation and evaluation).
The group plans to (i) conduct a scoping review on the use and reporting of AI tools in the guideline enterprise, (ii) draft reporting items based on the gathered information, (iii) compare the reporting items with the Guidelines International Network (GIN) extension checklist on the use of AI in the guideline enterprise and add relevant subjects and items, (iv) meet with relevant stakeholders, (v) evaluate the relative importance of the checklist items and vote on those relevant to the reporting of AI use, and (vi) write a manuscript and an explanation and elaboration document for publication in a scientific journal.
Work on the project began in June 2024 and the group plan to publish the new guideline as an open access document.
- Contact: Holger Schünemann, Humanitas University, Milan, Italy. Email: [email protected]
A framework for the identification and reporting of modifications to surgical procedures: The Surgical Modification and Reporting Tool (SMART) Checklist (registered 2 July 2024)
Innovation in surgery plays an important role in improving outcomes from these procedures and subsequent advances in care. Innovation must be evaluated and reported in an accurate and standardised way to enhance incremental learning between surgeons and centres, reduce risk and avoid stifling innovation. Guidance on evaluating and reporting surgical innovation (such as the IDEAL framework) recognises the importance of reporting modifications but there is a lack of consensus on how to identify, report and share them.
The developers plan to produce a reporting guideline to allow stakeholders to identify and report modifications to innovative surgical procedures, and are currently conducting a systematic review to identify primary literature reporting innovative procedures and devices. The findings of this systematic review and an in-depth qualitative analysis will be used to develop a provisional modification reporting checklist. The second phase of the research will be to undertake think-aloud interviews with key stakeholders to optimise the reporting tool.
Work on the project began in 2023 and the group plan to publish the new guideline as an open access document.
- Contact: Dr James Olivier, University of Bristol, Bristol, UK. Email: [email protected]
GUIDE-Rehab – GUideline for Intervention DEscription in Rehabilitation (registered 24 July 2024)
During the development of the RCTRACK guideline, the expert consensus identified the need for a specific guideline for describing interventions across all the different study designs. It was decided to develop the GUIDE-Rehab guideline as an accompanying guide to RCTRACK. The development of GUIDE-Rehab followed the same methodology as RCTRACK in the early phases, and then proceeded autonomously in the final Delphi processes. GUIDE-Rehab will be a smaller checklist focused on rehabilitation interventions in all study designs, including randomised clinical trials (RCTs), which are the main focus of RCTRACK. The Delphi rounds have been completed and the group is ready to publish the reporting guideline soon, as an open access document.
GUIDE-Rehab should be used together with the RCTRACK Guideline when RCTs are concerned, as an additional help for a careful description of the intervention (RCTRACK will refer to GUIDE-Rehab for intervention description). It should be used as an independent guideline for all other study designs concerning rehabilitation interventions.
The project did not receive specific funding. Cochrane Rehabilitation supported the development of the guideline in total autonomy.
Published materials about GUIDE-Rehab:
- Levack WM, Malmivaara A, Meyer T, Negrini S. Methodological problems in rehabilitation research. Report from a cochrane rehabilitation methodology meeting. Eur J Phys Rehabil Med. 2019 Jun;55(3):319-321. doi: 10.23736/S1973-9087.19.05811-8. Epub 2019 Apr 15.
- Negrini S, Meyer Psy T, Arienti C, Malmivaara A, Frontera WR; Cochrane Rehabilitation Methodology Meeting participants. In Search of Solutions for Evidence Generation in Rehabilitation: The Second Cochrane Rehabilitation Methodology Meeting. Am J Phys Med Rehabil. 2020 Mar;99(3):181-182. doi: 10.1097/PHM.0000000000001374.
- Arienti C, Armijo-Olivo S, Minozzi S, Tjosvold L, Lazzarini SG, Patrini M, Negrini S. Methodological Issues in Rehabilitation Research: A Scoping Review. Arch Phys Med Rehabil. 2021 Aug;102(8):1614-1622.e14. doi: 10.1016/j.apmr.2021.04.006. Epub 2021 May 11. PMID: 33989598.
- Contact: Stefano Negrini, Department of Biomedical, Surgical and Dental Sciences, University of Milan. E-mail: [email protected]
TAE-ACU – The Terminology criteria of Adverse Events for ACUpuncture (registered 29 July 2024)
Standardised adverse event terminology is crucial for consistent reporting and effective scrutiny of adverse event information. Unlike pharmacological therapy, acupuncture therapy presents different types and attributes of adverse events.
This project will develop comprehensive terminology criteria of adverse events for acupuncture, in which each term will include an English translation, a severity classification, and a precise explanation.
The group developing this guidance has planned the steps of their work:
- Set up a core executive group to provide constructive guidance at each stage of development of the terminology criteria and oversee the development. The executive group will include content experts in acupuncture, methodologists, a medical terminology expert, a medical English expert, and patient and public representatives.
- The executive group will deliberate and finalise the research scope of the terminology and the reference terminology blueprint.
- A literature search, Delphi questionnaire surveys, and other research methods will be used to methodically collect acupuncture adverse event terms.
- In accordance with predetermined rules, standardise and normalise the collected terms; classify and grade the encoded terms.
- Establish English translations for all terms.
- Determine the definitions of all terms.
- Maintain and update the terminology criteria continuously.
The group plans to publish the guidance in 2027. They have not given information about funding for the project or whether the guidance will be published open access.
- Contact: Yan Shiyan, School of Acupuncture-Moxibustion, Beijing University of Chinese Medicine. E-mail: [email protected]
START-EDI – STAndards for ReporTing Equality, Diversity and Inclusion (registered 31 July 2024)
Integrating equality, diversity and inclusion (EDI) into clinical research is essential to ensure findings are reproducible and relevant and to address the needs of different communities. There are currently no universally accepted guidelines for the reporting of EDI in research, which can lead to inconsistent and inadequate reporting. While some EDI considerations are included in the equity extensions of CONSORT and PRISMA, these extensions are specifically designed for equity-focused studies or reviews and do not apply to all studies.

START-EDI Steering Committee members from the NIHR Research Design Service (RDS) designed a toolkit (2022) to guide researchers on how to embed EDI in applied health and social care research. A scoping review was conducted in which the authors scored the reporting of EDI across papers describing studies on gastrointestinal cancers. They found that issues like structural inequality, public involvement and budgeting for inclusion, ethnicity, and socio-economic status had very low or null rates of reporting. Another systematic review will be conducted to define the essential components of EDI and strategies for optimising diversity and inclusion. The group will then conduct a Delphi survey and a final hybrid consensus meeting. The checklist created will be piloted to ensure usability, and the group plans to produce translations and a strategy for dissemination.

Development started in January 2024, and the group has applied for funding (from the MRC). They have partial infrastructure and PhD funding from the NIHR. A protocol was submitted for publication as a journal article. They plan to publish the final reporting guideline in 2026, as an open access document.
- Protocol registration in the Open Science Framework: https://osf.io/8udbq/.
- Contact: Michael Fadel, Department of Surgery and Cancer, Imperial College London. E-mail: [email protected]
SPARK – Scale Up Research Reporting Checklist (registered 31 July 2024)
Scale up is an important aspect of implementation research, aiming to scientifically explore the widespread, sustained adoption of proven interventions to create a generalisable knowledge base. Typically, intervention implementation has already been piloted at a smaller, local scale, while scale up research refers to the study of processes and factors that influence the widespread, sustainable adoption of evidence-based interventions, practices, or programs beyond their initial pilot settings.

An international group of implementation research and global health researchers is working to develop a reporting guideline for scale up studies. An initial scoping review and a brainstorming session were conducted in 2023. A Delphi survey is planned. The Global Alliance for Chronic Diseases (GACD) is supporting the work by providing coordination and logistics and funding open access publications. The group plans to publish the reporting guideline in 2025, as an open access document.
- Contact: Gina Agarwal and Jasdeep Brar, McMaster University, Canada. E-mails: [email protected] and [email protected]
Reporting Guidelines for Artificial Intelligence Research in Mental Health (registered 5 August 2024)
The aim of this project is to develop and propose a new comprehensive reporting guideline tailored to reflect the particularities of artificial intelligence (AI) research in mental health clinical practice (e.g., diagnosis, prognosis and treatment outcomes and allocation).
The group plans to conduct a scoping review to develop a preliminary list of items, submit the list to a Delphi survey, refine and consolidate the reporting guideline, and then pilot test and finalise the tool. The project started in January 2024, and the group plans to publish the reporting guideline in 2026, as an open access document. No funding has been reported for this project.
- Contact: Catherine Hébert, McGill University. E-mail: [email protected]
PROM-GRIP – Patient-Reported Outcome Measures – Guideline for Reporting In clinical Practice (registered 5 August 2024)
Patient-reported outcomes (PROs) are aspects of a patient’s health status that are reported directly by the patient, without interpretation by a healthcare provider or anyone else, which are measured using Patient Reported Outcome Measures (PROMs). According to the developers of this reporting guideline, there is no consensus on what constitutes complete and sufficient reporting in studies of PROM use in clinical practice. Factors such as healthcare provider training in PROM use, how PROM scores are fed back, whether instructions are given on how to act on PROM scores, and whether the patient receives feedback on PROM scores are potentially important points to report. However, there is currently a great deal of variability in what studies report about the use of PROMs in clinical practice. The goal of the group is to improve the quality of reporting in studies that illustrate or evaluate the use of PROMs in clinical practice.
The guideline will be applicable to any study design in which PROMs are used. Development started in 2023 with a systematic review, and a two-round Delphi survey and a consensus meeting are planned, as well as piloting. An implementation and integrated knowledge translation plan will follow. The project is being developed without funding so far, but the group plans to publish the reporting guideline as an open access document in 2025.
CAST-D – Reporting Guideline for Case Study in Public Health and Medicine related to Disasters (registered 4 September 2024)
Case studies have long been a valuable research method across various disciplines, including medicine, sociology, economics, and management. Beyond individual clinical cases, case studies play a crucial role in managing specific organisations, communities, and events, particularly in the context of disasters. Despite the frequent publication of case studies, there is a significant gap in standardised reporting guidelines specific to case studies in public health and medicine related to disasters. While existing guidelines such as CONFIDE focus on reporting disaster events, they do not fully encompass the broader range of public health concerns, long-term disaster impacts, and preparedness efforts. Moreover, disaster-related case studies cover a wide variety of cases, including organisations, communities, nations, policies, processes, and events, from the perspectives of observation, response, and management.

The group leading the development of this reporting guideline is working to produce a dedicated reporting guideline for case studies in health research related to disasters, aiming to provide a checklist that ensures thorough and accurate reporting by standardising the reporting process. The development started in 2022, with funding from the Japan Society for the Promotion of Science, Grant-in-Aid for Scientific Research. The group has reviewed the literature and completed both the first-round Delphi survey in December 2024 and the consensus meeting for that survey in February 2025. They are currently preparing the second-round Delphi survey. They plan to publish, as an open access document, in 2025.
- Contact: Yoshitaka Nishikawa, Department of Health Informatics, Kyoto University School of Public Health. E-mail: [email protected]
Blinding in Acupuncture Clinical Trials: guidelines for assessment and reporting (registered 16 October, updated 26 November 2024)
Placebo effects and patient expectations can significantly influence the outcomes of acupuncture treatment. The leaders of this project consider CONSORT, a guideline for clinical trials, insufficient to adequately guide the reporting of the implementation of blinding procedures in acupuncture trials. They propose to develop a guideline that will support authors in reporting these procedures, and possibly help with their evaluation as well. According to the project leaders, “This guideline will provide clear instructions on how to report the methods and results of blinding procedures, including the assessment of blinding success and the comprehensive and transparent reports of blinding. The scope of the guideline will encompass the entire process of trial design, from the initial designing stages through to the final reporting of results.”
They plan to develop the guideline by first establishing the project team, which will not include members of the public (patients). A Delphi survey is planned, as well as an expert consensus meeting and a pre-test of the checklist. The authors declare funding for the project from the High-level Talent Programme of Beijing University of Chinese Medicine. They plan to publish the reporting guideline as an open-access document in 2027.
- Contact: Liu Tinglan, School of Acupuncture-Moxibustion and Tuina, Beijing University of Chinese Medicine. E-mail: [email protected]
ENLIGHT-Field: Field research extension of the ENLIGHT Checklist and Guidelines for daylight, electric and mixed field research (registered 16 October 2024)
A group of researchers based in Germany, Switzerland, and the Netherlands is working to extend the ENLIGHT reporting guideline from a checklist for laboratory-based studies to guidance that also covers trials and observational studies. The leaders consider that field studies on the effects of light on human health should be completely and transparently reported. They received funding from the Daylight Academy and have published a protocol for this project.
The study will use a three-stage consensus process and an online survey, and will seek feedback from Daylight Academy members for approval of the checklist. They plan to publish the guideline as an open-access document in 2025.
- The original ENLIGHT guideline publication
- ENLIGHT project website
- Daylight Academy website
- Protocol registration: https://osf.io/7z2bw/
- Contact: Manuel Spitschan, Max Planck Research Group Translational Sensory & Circadian Neuroscience, Tübingen, Germany. E-mail: [email protected]
CINEX – a reporting guideline for clinical information extraction studies (registered 18 October 2024)
An executive group of researchers in Switzerland is developing a reporting guideline for studies involving the extraction of information from clinical report texts, with a focus on the use of natural language processing (or artificial intelligence) techniques. The lack of consistency in reporting can make it difficult to compare and replicate studies.
The group has already conducted a scoping review on the subject which identified the lack of a reporting guideline to support authors on what information to report. They have designed a checklist draft to undergo a Delphi survey with experts, and a consensus meeting. They aim to publish the reporting guideline as an open-access document in 2025.
- Scoping review
- Preliminary checklist draft
- Contact: Daniel Reichenpfader, Bern University of Applied Sciences, Institute for Patient-Centered Digital Health. E-mail: [email protected]
DeNote – A data note reporting guideline for qualitative health and social care research datasets (registered 23 October 2024)
Data notes, data papers, data articles, or data descriptors are a class of peer-reviewed article that succinctly describe how and why an archived research dataset was created and the nature and characteristics of research datasets stored in a public repository, with the purpose of enhancing research transparency and increasing the visibility of research datasets. However, according to the developers of this reporting guideline, there is no existing reporting guideline for data note articles describing qualitative health and social care research datasets. There is also little to no general writing guidance (in relevant scientific papers or on publisher or journal webpages) to authors of qualitative health and social care research who wish to produce such an article. The only existing guidance for producing data notes is exclusive to quantitative research datasets, which is not suitable for or applicable to qualitative research datasets. The DeNote reporting guideline will offer important guidance to authors producing data note articles describing qualitative health and social care research data.
The steering committee has representatives from the University of Manchester, the University of Galway, and University College London, and acknowledges funding from the University of Manchester’s ‘Research England Enhancing Research Culture’ fund, awarded via an Open Research Fellowship to Dr Hannah Long.
Their plans include a rapid scoping review of the literature, a survey with experts and public and patient representatives, a consensus workshop with private online voting, and another round of comments once the draft is ready. The study group has submitted an ethics application for the project. They plan to publish the reporting guideline, open access, in 2025.
- Contact: Hannah Long, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester. E-mail: [email protected]
TRoCA – Transparent Reporting of Cluster Analyses (registered 27 November 2024)
Data-driven cluster analyses are widely used to derive subgroups and find natural clusters of individuals based on their characteristics. However, the critical appraisal and comparison of cluster analysis studies can be substantially hampered by poor reporting. Although comprehensive and well-written checklists are available for prediction model studies (e.g., TRIPOD+AI), no checklist has been specifically developed for studies incorporating cluster analyses.
Researchers based in Sweden have decided to develop a new reporting guideline for cluster analyses, applicable to studies using machine learning/artificial intelligence methods. They plan to conduct a literature review and a Delphi survey, and to publish the guidance as an open access document in 2025.
- Study protocol is available here.
- Contact: Daniil Lisik, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg. E-mail: [email protected]
Alberta Quality Assessment Tool for Human Evaluation Studies of Large Language Model (LLM)-Based Question-Answering (QA) Systems: Reporting Checklist (registered 18 June 2025)
There has been a surge in research exploring the performance of large language models (LLMs) in answering questions within the medical domain. However, the quality of these studies varies significantly, making it challenging to draw reliable conclusions from the existing body of literature. While several quality assessment tools, such as reporting checklists and risk-of-bias frameworks, are available, they are not well-suited for evaluating studies focused on the human evaluation of LLM-based question-and-answer systems. These tools often fail to address the specific challenges and nuances involved in assessing LLM performance in medical contexts.
To address this gap, we aim to develop the Alberta Quality Assessment Tool (AQAT), a reporting checklist specifically for studies that evaluate LLM-based question-answer systems. The AQAT will ensure transparency and completeness in the documentation of LLM evaluations.
The scope of the AQAT will extend beyond general evaluation metrics, incorporating specific factors pertaining to question source, reference answers, evaluator selection, outcome domains and metrics, model explainability, and the generalizability of findings. By creating a dedicated tool for assessing studies on LLM-based systems, we aim to improve the reliability and reproducibility of research in this rapidly evolving field. Ultimately, the AQAT will enable researchers and healthcare professionals to critically assess the performance of LLMs, ensuring that only transparent and reproducible high-quality studies inform decisions related to their deployment.
- Contact: Carrie Ye, University of Alberta, Canada. E-mail: [email protected]
MuRMuR-AI – MultiReader Multicase Reporting (MuRMuR) Guidelines, including AI-Assisted Image Interpretation (registered 22 July 2025)
Multicase multireader studies (MCMRS) are essential for evaluating human diagnostic performance when interpreting medical imaging. However, to date there is no standardised reporting framework, leading to variability in study design, analysis, and interpretation. The increasing use of AI-assisted image interpretation technologies further complicates reporting, owing to the added methodological considerations and the level of detail required about the development of the technologies being evaluated. These evaluations often form a key part of regulatory approval, but there is little consensus or comparability between studies of different AI-led technologies, even for the same use case. This study, using a Delphi consensus methodology aligned with the ACCORD reporting guideline, aims to develop consensus-based reporting recommendations to enhance the transparency, reproducibility, and quality of MCMRS, particularly those evaluating the impact of AI-assisted image interpretation technologies on reader accuracy.
- Contact: Alex Novak, Oxford Clinical Artificial Intelligence Research (OxCAIR), Oxford University Hospitals NHS Foundation Trust. E-mail: [email protected]
TRIPOD-Code – a reporting guideline for code repositories associated with diagnostic and prognostic prediction model studies (registered 30 October 2025)
TRIPOD-Code is an extension to the existing TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) reporting guideline, focused on improving the reporting and transparency of code associated with prediction model studies. The scope of TRIPOD-Code includes all studies that develop, validate, or update multivariable prediction models for diagnostic or prognostic purposes, using any statistical or machine learning method. The primary aim is to provide structured guidance to ensure that the code underlying these studies is shared in a way that supports reproducibility, transparency, and reuse. While the TRIPOD and TRIPOD-AI guidelines encourage the sharing of analytical code, they do not provide detailed guidance on what constitutes sufficient or complete code sharing. When code is made available, it is often incomplete, undocumented, or difficult to reuse. This limits the ability of other researchers to verify results, assess methodological quality, or implement models in clinical or research settings. TRIPOD-Code addresses this gap by providing a checklist of minimum reporting items specifically focused on code availability, completeness, and usability. These items are expected to include documentation of software dependencies, licensing terms, file structure, testing procedures, and versioning or archival strategies. By developing TRIPOD-Code, we aim to help improve the quality and accessibility of code in prediction model studies.
- Contact: Tom Pollard, Massachusetts Institute of Technology, Cambridge, USA. E-mail: [email protected]
RIGHT-MuSE: Reporting guidance for interest-holder engagement in practice guidelines (registered 13 January 2025)
Guideline developers have increasingly engaged various interest-holders in all stages of guideline development. However, reporting on interest-holder engagement in practice guidelines often lacks transparency. Questions such as which interest-holders were included, how they were recruited, and how they were engaged in the process are often not described in detail.
To address these gaps, the RIGHT working group will collaborate with the MuSE Consortium to generate an initial list of items, hold consensus surveys and a consensus meeting with 20 representatives to develop a standardised reporting tool for interest-holder engagement in practice guidelines (RIGHT-MuSE extension).
- Website: https://www.right-statement.org/
- Contact: Xuan Yu, Chinese EQUATOR Centre, Hong Kong Baptist University, Hong Kong SAR, China. Email: [email protected]. Elie Akl, Department of Internal Medicine, American University of Beirut, Beirut, Lebanon. Email: [email protected].
PRe-EquiP-LMICs – Guidelines for conducting and reporting global, participatory research in Pulmonary Rehabilitation to promote Equitable Participation from Low- and Middle-Income Countries (registered 6 February 2025)
The PRe-EquiP-LMICs group comprises diverse professionals from LMICs and HICs across the globe who aim to develop guidelines for conducting and reporting global, participatory research in pulmonary rehabilitation (PR) to promote equitable participation from LMICs.
The group will develop the PRe-EquiP-LMICs guidelines following the EQUATOR methodological toolkit/guidance for developing reporting guidelines, including a systematic review (completed), a Delphi survey, and a consensus meeting. The group will develop a preliminary list of conduct and reporting items from their recently completed systematic review. A Delphi survey (involving stakeholders from the wider PR community, physiotherapists, patients with chronic respiratory disease and/or their caregivers/representatives, and policy makers) will then be conducted to ascertain the importance of the included items. The guideline document (including a final checklist) will be finalised in a consensus meeting, along with the production of an explanation and elaboration document.
- Contact: Fanuel Meckson Bickton, Kamuzu University of Health Sciences, Malawi-Liverpool-Wellcome Programme. Email: [email protected]
QRDC-RG – Qualitative Remote Data Collection Reporting Guideline (registered 11 February 2025)
Remote methods of qualitative data collection are those characterised by physical distance between the researcher/s and participant/s, use of technology to communicate and where there is an in-person equivalent method. Whilst use of remote qualitative data collection methods has proliferated in recent years, there are currently no guidelines for reporting these studies. Existing reporting guidelines for qualitative research (e.g. COREQ, SRQR) do not include items on technologically mediated study designs, only technologies used to record data (e.g. dictaphones). While developing guidance for qualitative remote data collection, the research team noted that reporting of remotely conducted qualitative research studies is inconsistent and often lacks key details of the research design that can significantly impact data quality. These include choice of technology, visibility of researcher, what ‘counts’ as data (e.g. comments in chat/emojis) and the synchronicity of the data collection.
The QRDC reporting guideline (QRDC-RG) will focus on the reporting of data collection methods – from sampling through to data processing for analysis. It will complement existing guidelines for reporting qualitative research by focusing on research design decisions that are unique to remotely conducted and hybrid (remote and in-person methods combined) research. It will be broadly applicable across a range of qualitative data collection methods where there is an in-person equivalent (e.g. interviews, focus groups, observation). The research team plan to conduct a review of existing guidelines and evidence on the quality of remote research reporting. A diverse international group of researchers and stakeholders will be appointed and a Delphi exercise and consensus meeting will be held to develop the checklist items. The team aim to work with qualitative health journals to secure endorsement and to support the adherence of authors and reviewers. Training and a launch webinar will be developed and the guideline will be translated into several languages to support international uptake.
- Contact: Professor Felicity Boardman, Warwick Medical School, University of Warwick, UK. Email: [email protected]
Reporting Guideline for the Participatory Development & Evaluation of Digital Health Interventions (registered 23 May 2025)
The literature shows wide variety and heterogeneity in evaluation and incorporation practices, as well as under-reporting of both participatory approaches and evaluation of digital health interventions (DHIs). Several existing guidelines cover different aspects of health research, but a guideline for reporting the participatory development and participatory evaluation of DHIs is lacking. Therefore, a new reporting guideline is needed to facilitate the systematic development and evaluation of DHIs and to promote best practices in participatory research.
- Protocol: https://osf.io/vc39r
- Contact: Vera Weirauch, Health Informatics, Faculty of Health, Witten/Herdecke University, Witten, Germany. Email: [email protected]
COREQ-LLM: Consolidated Criteria for Reporting Qualitative Research + Large Language Model Extension (registered 18 June 2025)
The proposed guideline aims to expand COREQ’s established framework for reporting qualitative research by addressing the emerging use of large language models (LLMs) across multiple stages of qualitative research. Although COREQ, endorsed by the EQUATOR network and cited more than 30,000 times, provides essential criteria for transparent, standardized qualitative research, it does not yet reflect recent advances in artificial intelligence (AI), especially LLMs. The integration of LLMs is rapidly transforming qualitative research. LLMs generate coherent, contextually relevant text through token prediction. This enables their use at multiple research stages: from formulating research questions and designing interview protocols to supporting data processing (e.g., transcription and translation), facilitating data analysis (e.g., coding and theme identification), drafting manuscript sections, and enabling interactive exploration of qualitative datasets. The increasing adoption of LLMs stems from their potential to enhance analytical efficiency by rapidly processing large datasets, allowing broader and faster research investigations. Additionally, LLMs offer novel analytical perspectives, revealing latent semantic patterns, proposing thematic frameworks, and distilling complex narratives. These capabilities may complement human interpretive efforts.
Popular qualitative analysis software platforms, such as MAXQDA, now integrate LLM functionalities to leverage these advantages. However, the use of LLMs introduces notable methodological risks that require rigorous reporting standards. Unlike deterministic software, LLMs operate probabilistically, making their semantic comprehension uncertain, particularly concerning nuanced contexts. LLMs may also replicate biases present in their
training data, produce opaque internal reasoning, omit information, and generate plausible yet incorrect content (“hallucinations”). Specific model characteristics, including model type, version, training data, parameter settings, context window, and prompting strategies, substantially impact research outcomes. Together, these factors pose challenges to the reproducibility, validity, and trustworthiness of qualitative research, revealing a critical gap in the existing COREQ guidelines. Currently, COREQ’s single relevant item (Item 27: “What software, if applicable, was used to manage the data?”) inadequately captures the essential details needed to evaluate LLM integration comprehensively. To address this gap, we are developing COREQ+LLM, an extension to the original COREQ checklist designed explicitly for qualitative research employing LLMs. COREQ+LLM will introduce detailed reporting requirements for model selection, training parameters, prompting strategies, task definitions, evaluation metrics, and human oversight mechanisms. By providing clear and practical guidance, this extension aims to uphold rigorous, transparent, and ethically sound qualitative health research in an era increasingly shaped by AI.
- Protocol available here
- Contact: Priv. Doz. Dr. Leonard Fehring, School of Medicine, Witten/Herdecke University, Germany. Email: [email protected]
GRAMMS 2.0: Updating the Good Reporting of a Mixed Methods Study (GRAMMS) reporting guideline (registered 10 October 2025)
In 2008, the Good Reporting of a Mixed Methods Study (GRAMMS) guideline was introduced. However, GRAMMS is more than 15 years old and needs to be updated to better reflect current practices and innovations in the field of mixed methods research (MMR). Indeed, the field has advanced substantially in specific areas of integration, including integration strategies, assessment of the fit of integration, meta-inferences, and joint displays, and in other critical aspects, such as methodological quality and reporting, generalization, and cross-cultural MMR. Recent reviews in specific fields, including primary care, nursing, health services research, palliative care, and chiropractic services, for example, have examined empirical MMR studies and highlighted areas for improvement in the quality of reporting. Developing an updated GRAMMS guideline and checklist is one way to close this gap. It is essential to promote quality reporting of mixed methods studies in health services research and interdisciplinary fields so that researchers, practitioners, decision makers, and other relevant knowledge users have access to high-quality MMR study findings to inform practice and policy making.
- Contact: Sarah Munce, Holland Bloorview Kids Rehabilitation Hospital, Bloorview Research Institute, Toronto, Ontario, Canada. Email: [email protected]
LEVERAGE – GuideLines for rEporting the deVelopment, dEliveRy and evAluation of health interventions/policies Guided by commonly used systEms approaches (registered 28 February 2025)
Internationally, governments and inter-governmental organisations (particularly the WHO) are calling for systems approaches to tackle leading public health challenges such as obesity, food insecurity and the health impact of climate change. Although an increasing number of interventions (led by governmental bodies or researchers) are reported to have applied a systems approach, systematic reviews have identified poor reporting (authors failed to report sufficiently and transparently how systems thinking or methods were used), and there is uncertainty around the exact meaning of “a systems approach”. To address this problem and advance the application and development of systems approaches in public health worldwide, Bai Li and colleagues published practical guiding questions to assist researchers, journal editors, research funders and practitioners in reporting and reviewing public health interventions underpinned by systems approaches. Although the guiding questions were created based on the latest academic knowledge and the experiences of the authors, who led relevant work in a number of countries, they were not developed through a Delphi study. Building on the published guiding questions, the LEVERAGE Executive Group aim to develop (through a rigorous process, including a Delphi study and an in-person international workshop) a guideline for reporting the development, delivery and evaluation of health interventions/policies underpinned by systems approaches. The developers anticipate that the guideline will be applicable to any study design used for the development, delivery/implementation or evaluation/monitoring of health interventions/policies underpinned by one of the three most commonly used systems approaches (i.e. System Dynamics/SD, Agent-Based Modeling/ABM, Social Network Analysis/SNA). Items specifically for AI and machine learning studies, which are increasingly used in systems science, will be included. The group plan to involve leading international organisations (e.g. System Dynamics Society, International Network for Social Network Analysis/INSNA, Open Modeling Foundation), public health journals and international experts in relevant disciplines and fields.
- Read an opinion article about the guiding questions
- Contact: Bai Li, Centre for Exercise, Nutrition and Health Sciences, School for Policy Studies, University of Bristol. Email: [email protected]
IMPACT Framework – Reporting the impact of patient and caregiver involvement in health research (registered 3 March 2025)
There are increasing efforts to involve patients and caregivers in health research. Various frameworks are available that address the reporting of patient/caregiver involvement across the different stages of research. However, there is no framework for describing the impact of patient/caregiver involvement in detail. The research team, therefore, plan to develop a framework for reporting the impact of patient and caregiver involvement in health research (IMPACT), which will include the impacts on the research conduct/process, and on patients/caregivers and researchers. Development will include a literature review, a Delphi survey, an international stakeholder workshop and pilot testing of the framework, and will involve patients/caregivers, researchers, funders and other relevant stakeholders.
- Contact: Allison Jaure or Javier Recabarre, The University of Sydney, School of Public Health, Australia. Email: [email protected] or [email protected].
STLER: Standards for Legal Epidemiology Reporting (registered 26 March 2025)
Structured legal data are a facet of many different kinds of scientific research, including mapping studies (measuring the distribution and attributes of law over space and time), legal implementation studies, and evaluations of the effects of law on health or related outcomes. Current research frequently fails to provide the methodological transparency regarding legal research and measurement procedures that studies need to meet basic scientific standards. Many published studies in which law is an exposure or other key variable lack sufficient detail on how laws and policies are defined and measured, leaving audiences unable to evaluate the validity of findings. The development group aim to produce reporting guidelines for legal epidemiology studies. The Delphi method will be used to explore consensus on potential items to be included in the reporting standards. An online panel of experts will be recruited to rate the importance of individual reporting standards in several questionnaire rounds, with each questionnaire refined based on respondents’ feedback on the previous round. This process will inform the list of reporting standards to be considered for inclusion in the guideline at a consensus meeting.
- Website: https://phlr.temple.edu/our-work/projects/legal-measurement-methods-consortium
- Contact: Scott Burris, Professor and Director, Center for Public Health Law Research, Temple University Beasley School of Law, US. Email: [email protected]
CheRIHoSS – Checklist of Reporting Items for Horizon Scanning Studies (registered 23 May 2025)
Horizon scanning is a method used to systematically detect signs of future developments. In health and care it is used to inform decisions about new and emerging innovative health and care technologies before these technologies become available and evidence is generated. The use of horizon scanning in health and care decision-making is gaining traction. Current applications of horizon scanning in the health and care context include: predicting the impact of new technologies on health and care services, regulatory frameworks or policies; identifying future developments, gaps, trends or business opportunities; and helping with the adoption and diffusion of new technologies. Furthermore, horizon scanning is already established as a crucial step in the health technology assessment (HTA) process as a method to detect innovative technologies and issue early alerts to HTA bodies ahead of these technologies being placed on the market. There are multiple other uses and applications of horizon scanning methods in the healthcare context. This variability and broad application warrant the need for transparent and robust methods reporting. Currently there is an absence of guidelines for reporting the methods used in these studies. A lack of clear, standardised terminology also means that study methods are often unclear. This guideline aims to fill the identified methodological gap by providing a checklist and a glossary of terms that horizon scanning practitioners, researchers, peer reviewers and journal editors can work with to improve the quality and consistency of horizon scanning studies. The guideline and checklist are the result of years of practice in the field of horizon scanning in health and care.
- Website: https://osf.io/m39vw/
- Contact: Sonia Garcia Gonzalez-Moral, NIHR Innovation Observatory at Population Health Sciences Institute, Newcastle University, UK. Email: [email protected]
CONSENS – Recommendations for improving reporting of CONtextual analySis in implEmeNtation Science (registered 23 May 2025)
Contextual analysis is an important part of implementation science methodology and is pivotal in informing subsequent phases of implementation science projects, such as intervention development/adaptation, the choice of contextually adapted implementation strategies, and the interpretation of outcomes. However, existing reporting guidelines lack specificity regarding how contextual analysis should be reported. This deficiency leads to variations and gaps in reporting that turn contextual analysis into a “black box” and diminish its contribution over the course of implementation science projects. To strengthen the reporting of contextual analysis, this project aims to build a guideline for reporting contextual analysis in implementation science that can complement existing implementation science reporting guidelines (e.g. StaRI).
- Website: https://nursing.unibas.ch/de/forschung/forschungsprojekte/laufende-projekte/consens/
- Contact: Juliane Mielke, Institute of Nursing Science, Department Public Health, University of Basel. Email: [email protected]
SHIME – Standards for Healthcare Impact Evaluation (registered 23 May 2025)
Our scoping review found that over three hundred articles have proposed using an impact evaluation in the healthcare context, and this number has been increasing yearly. A few guidelines exist for the general context, such as the World Bank Group guideline by Gertler, alongside local, health-specific ones, such as the Brazilian Ministry of Health guideline by Aragão and Mendes. According to our scoping review, over 90% of studies use quantitative methods and attempt to construct a good comparison with the treatment group; these more rigorous approaches seem to be associated with more transparent and less biased publications. Creating these guidelines will help authors publish impact evaluations for the healthcare context in a more standardized manner. This will not only aid researchers in their efforts but also improve the cycle of policy evaluation and its effectiveness, promoting efficiency and equity in health interventions.
- Contact: Lucas Reis Correia, Universidade de São Paulo. Email: [email protected]
CROP – Checklist for RepOrting Process evaluations in healthcare (registered 23 May 2025)
Complex interventions are commonly used in health services research. The focus is now not only the primary outcome of the trial; the study process, including the design and execution of the intervention, is equally important. A key role of a process evaluation is to investigate how an effective intervention can be implemented into practice and policy. Conversely, a process evaluation can help identify why an intervention is unexpectedly ineffective or has unanticipated consequences, and how it can be optimized. Process evaluations conducted alongside clinical trials are rare, particularly in the field of multidisciplinary therapy research for neurological rehabilitation. They are similarly scarce in other areas of rehabilitation research. Although there has been a recent rise in published research on theories and frameworks guiding process evaluations for complex interventions, inconsistency in reporting makes it challenging for researchers to design or replicate these evaluations effectively. To address this gap, this study aims to create a checklist based on the existing literature and further refined through a Delphi study. By standardizing reporting, this generic checklist will serve as a valuable tool for future research, helping more researchers conduct robust and replicable process evaluations.
- Contact: Lisa Cruycke, Vrije Universiteit Brussel. Email: [email protected]
WATER – Guideline for Wastewater Analysis and Tracking in Epidemiological Reporting (registered 23 May 2025)
Wastewater and Environmental Surveillance (WES) serves as an important tool for monitoring environmental and community health by analyzing wastewater. Historically, WES has been used to detect chemicals, illegal drugs, pathogenic microorganisms, and drug-resistant bacteria. Particularly during the COVID-19 pandemic, the demand for WES research became evident. Currently, due to the multidisciplinary nature of WES, the absence of specific guidance has resulted in varied approaches and data reporting across studies, making comparisons and broader analysis challenging. Unlike existing frameworks for observational research, WES involves unique aspects such as diverse data sources, complex detection methods, and specific data usage requirements, which need special attention. This guideline aims to provide a set of straightforward reporting standards for WES studies, covering essential areas like study design, sample collection, data analysis, ethical considerations, and interpretation of results. By following these standards, researchers can improve the quality and transparency of their work, facilitating better understanding and collaboration in public health efforts.
- Contact: Yoshitaka Nishikawa, Department of Health Informatics, Kyoto University School of Public Health. Email: [email protected]
GRRAS-COSMIN: Guidelines for Reporting Reliability and Agreement Studies-Consensus-based Standards for the selection of health Measurement Instruments (registered 18 June 2025)
The current version of the Guidelines for Reporting Reliability and Agreement Studies (GRRAS) was published in 2011. Although the GRRAS are still widely applied, they are outdated compared with methodological developments in health research reporting guidelines in recent years. Substantial advances have been made regarding sample size determination and other aspects of reliability and agreement studies. In addition, the development of the current GRRAS did not follow the state-of-the-art guidance for developers of health research reporting guidelines, because that guidance was not yet available at the time. Therefore, an update of the GRRAS according to the current standards of reporting guideline development is needed. Additionally, this update aims to harmonize and align wording and concepts with relevant related initiatives such as the COnsensus-based Standards for the selection of health Measurement Instruments (COSMIN), to improve the usability and uptake of the updated guidelines. Recently, COSMIN developed a tool to assess the risk of bias of reliability and agreement/measurement error studies and of systematic reviews of Patient-Reported Outcome Measures. Similarly, Lucas et al. published a quality appraisal tool for studies of diagnostic reliability (QAREL), which also applies to reliability and agreement studies. Although study reporting and risk of bias assessment are fundamentally different, both approaches should be harmonized: only what is reported can be assessed. Therefore, reporting guidelines must contain the items for which the risk of bias is assessed.
- Project page is here
- Contact: Merle-Marie Pittelkow, Charité – Universitätsmedizin Berlin, CharitéCentrum Prevention, Health and Human Sciences (CC1), Institute of Clinical Nursing Science. Email: [email protected]
CORE: Community Engaged Research Reporting Guidelines (registered 25 July 2025)
Community-engaged research is a transformative approach to scientific inquiry that fundamentally reimagines the traditional research paradigm. Unlike conventional research methodologies, this approach deliberately integrates diverse interest holders as true co-equal partners throughout the entire research process. This model goes far beyond superficial consultation with community members, instead positioning community members as active co-creators of knowledge, research design, implementation, and interpretation of results. By centering the authentic experiences, perspectives, and expertise of community members, this approach generates research that is inherently more meaningful, contextually relevant, and potentially transformative.
Community-engaged research allows for the development of research questions that emerge directly from community needs and priorities, rather than being solely driven by academic or institutional interests. Consequently, the research methodologies become more nuanced, culturally responsive, and potentially more effective in addressing complex health and social challenges. Reflecting the growing recognition of its value, there are numerous efforts to expand the use of community-engaged research. However, researchers often lack clear, consistent guidance for implementing these complex collaborative approaches. The absence of standardized resources can lead to inconsistent practices that may range from well-intentioned but ineffective to potentially harmful. A lack of sensitivity to the ethical complexity of community-engaged research can ultimately undermine the very goals of meaningful community engagement. Stronger and more explicit guidance is needed to ensure that community-engaged research is rigorous, ethical, and effective.
Standardized reporting guidelines can fill this need. Reporting guidelines have significantly improved research rigor and transparency, increased the efficiency of peer review, and enhanced the usability of research. Developing robust, flexible reporting guidelines could serve as a critical tool for ensuring that community engagement is substantive, ethical, and truly reflective of community needs and priorities. By establishing clear standards for documenting engagement processes, power-sharing arrangements, and community benefits, such guidelines would not only enhance research quality and transparency but also facilitate the sharing of best practices across diverse fields and contexts. This approach promises to transform community-engaged research from a well-intentioned but inconsistently applied concept into a rigorous and genuinely collaborative approach to scientific research. Because community-engaged research is an approach rather than a specific study design, we envision that these new community-engaged research reporting guidelines will be used as a supplement to existing design-specific reporting guidelines (e.g. CONSORT, STROBE).
- Contact: Jessica Kersey, Washington University in St. Louis School of Medicine. Email: [email protected]
Reporting Items for Evidence Summary (registered 10 October 2025)
This reporting guideline is specifically designed for evidence summary articles in the healthcare field. Such articles serve as a vital bridge between research evidence and clinical practice, particularly supporting evidence translation projects and the development of practice guidelines. As a concise form of evidence synthesis, they distill complex research into actionable recommendations to meet the growing clinical demand for efficient knowledge translation. For example, in China, the annual number of published evidence summary articles rose to 311 by 2021, mostly in high-impact journals covering oncology, neurology, and other major clinical fields.Reasons for Developing the New Reporting Guideline1. Methodological inconsistencies in existing practices
Evidence summary articles often lack key methodological details, such as structured research questions, transparent search strategies, and clear evidence grading criteria—weakening their reproducibility and methodological rigor. Meanwhile, the existing “Critical Appraisal for Summaries of Evidence (CASE) tool”, though conceptually relevant, has not been adopted by major journals, leaving a gap in quality evaluation standards.2. Insufficient reporting standardization
Existing reporting recommendations for evidence summaries include the following:
a. JBI Evidence Summary: With rich experience in developing rapid review and evidence summary methodologies, JBI aims to complete evidence summaries within a week, using a fixed 6-part structure (title, question, clinical bottom line, characteristics of the evidence, best practice recommendations, references).
b. Critically Appraised Topic (CAT): Organized around clinical questions, CAT provides both research appraisal and clinical relevance explanations. It follows 5 steps of evidence-based practice (EBP): “Ask, Search, Appraise, Apply, Evaluate”.
c. Framework by Weijie Xing (Fudan University): Derived from analyzing domestic and international evidence summaries, it includes 5 sections (background, methods, results, discussion, summary).
However, these frameworks only outline overall structures without specifying the details of each reporting item. Moreover, their development did not follow the methodology for developing reporting standards recommended by the EQUATOR Network. Additionally, key information is often missing, such as an unclear basis for recommendation strength, undisclosed conflict-of-interest statements, and inadequate documentation of evidence integration methods, hindering readers' assessment of research credibility and applicability.
3. Escalating needs for practice and translation
As foundational inputs for clinical practice innovations and intervention development, poorly reported evidence summaries may lead to risks such as misleading practice changes or suboptimal protocol design.
- Contact: Dong Pang, Peking University School of Nursing, Peking University Health Science Center for Evidence-based Nursing: A JBI Centre of Excellence. Email: [email protected]
CORTRE: COmplete Reporting for Transparency Reproducibility Efforts (registered 14 November 2025)
Large-scale efforts investigating reproducibility, replicability, robustness, generalisability, or related concepts, whether across entire fields, research methods, study types, or journal articles, have an important influence on science policy, trajectory, and public trust in research. Examples of such “meta-studies” include large replication projects in psychology, cancer biology, and experimental economics, as well as investigations into the robustness and/or computational reproducibility of results published in specific journals. These studies draw conclusions beyond the immediate set of included studies (for example, at the level of research fields, research methods, or study types). Some of these meta-studies have reported alarming results, and their papers are frequently cited in calls for science policy reforms and interventions, such as mandates for data and code sharing or preregistration.
Research that informs policy must be of the highest possible quality, reliable, and trustworthy. The “meta-study” format we refer to here is relatively new and rapidly evolving. Data from meta-studies are frequently re-analysed, e.g., using novel metrics and methods to aggregate and contextualise results. Meta-researchers might also want to synthesise results from multiple efforts to draw conclusions that extend beyond individual meta-studies (in a meta-meta-study, if you will). All of these endeavours depend on transparent, high-quality reporting. To our knowledge, no reporting guideline exists at present for meta-studies.
- Project page: https://cortre.github.io
- Contact: Rachel Heyard, Center for Reproducible Science and Research Synthesis, University of Zurich. Email: [email protected]
COMET-Bib: Comprehensive Methodological Transparency for Bibliometric Studies (registered 22 January 2026)
Bibliometric and scientometric analyses have become indispensable for mapping scientific landscapes, assessing productivity, identifying research gaps, and informing policy and funding strategies. Their relevance extends far beyond information and library sciences, contributing meaningfully to fields such as environmental science, medicine, and climate research, where bibliometric evidence increasingly informs decision-making. By providing this form of “knowledge scaffolding,” bibliometrics not only diagnoses strengths and weaknesses within research systems but also offers a strategic lens on collaboration, impact, and equity.
The surge in bibliometric publications is driven by expanding access to databases, the proliferation of user-friendly software platforms, and the integration of artificial intelligence, including large language models, into research workflows. While this has democratised practice, it has also produced wide variation in reporting quality, with many studies privileging data-led outputs over methodological justification.
The existing guidelines focus on a comprehensive description of methodology in text but do not provide guidance on presenting the workflow of a bibliometric study, as the PRISMA flow chart does for systematic reviews. The flow charts currently proposed are broad outlines that rely heavily on authors’ creativity, leading to inconsistent reporting. Additionally, PRISMA cannot be used without adaptation, since the workflow of a bibliometric study differs from that of a systematic review.
Against this backdrop, a structured and comprehensive reporting guideline is urgently needed. A PRISMA-like framework tailored to bibliometric workflows would enforce methodological rigour, ensure transparent reporting of databases, tools, and analytical steps, and foster interdisciplinary consistency. The proposed guideline “COMET-Bib (Comprehensive Methodological Transparency for Bibliometric Studies)” will address this need.
Its scope spans the entire research process: defining research questions, selecting and documenting data sources (e.g., Scopus, Web of Science), applying diverse methodologies (e.g., citation analysis, co-authorship mapping), integrating AI-driven tools for data processing, and presenting results with clarity and reproducibility. COMET-Bib will be designed to accommodate theoretical innovations, applied analyses, and science policy evaluations alike, ensuring relevance across both ILS and non-ILS domains. By promoting transparency, reproducibility, and comparability, COMET-Bib will strengthen bibliometrics as a diagnostic and strategic tool, prevent superficial accumulation of descriptive results, and secure its role as a foundation for advancing knowledge and informing policy.
- Contact: Namrata Dagli and Jestina Rachel Kurian, Center for Global Health Research, Saveetha Medical College, Saveetha Institute of Medical and Technical Sciences (SIMATS), Chennai, India. Email: [email protected] or [email protected]
Evidence-Based Practice Policy Analysis and Program Evaluation Reporting Guidelines (registered 22 January 2026)
These reporting guidelines define what to include when disseminating Evidence-Based Practice Program Evaluation (EBPPE) and Evidence-Based Practice Policy Analysis (EBPPA) initiatives. They cover the full arc of an initiative—from articulating the problem and local context, through external evidence review and internal data, to aims, methods, measures/analysis, results, discussion (including value determination), limitations, and conclusions—mapped to an IMRaD structure. The checklists specify 10 components and 19 criteria, and they align with the Mountain Model (Waldrop & Dunlap, 2024) umbrella framework so authors can integrate EBP with rigorous evaluation or policy analysis methods (Waldrop et al., 2026).
The EBPPE guideline targets manuscripts reporting evaluation of programs, projects, pathways, or use of guidelines; the EBPPA guideline supports varied outputs, including journal articles. Together they enable transparent, complete, and reproducible reporting.
New guidelines are needed because existing resources largely focus on research reporting and do not address the distinctive goals and methods of program evaluation (value determination) or policy analysis. Prior work offers only partial criteria for program evaluations and none for policy analyses, leaving authors without comprehensive, practice-grounded guidance. Inaccurate or incomplete reporting hampers learning, scale-up, and quality improvement across settings. By standardizing how EBPPE and EBPPA are reported—and by pairing with the related EBPQI guideline—these tools strengthen the integrity and usefulness of disseminated work, support consistent inclusion of essential components, and accelerate translation of evidence into improved care and policies.
- Contact: Julee Waldrop, Duke University School of Nursing. Email: [email protected].
The Readability Checklist: A reporting guideline for studies assessing the readability of written health information (registered 22 January 2026)
The Readability Checklist provides a set of recommendations that should be included in studies assessing the readability of written health information. Its primary purpose is to improve methodological rigor, transparency, and reproducibility in this growing area of research.
Readability assessment is the most commonly used approach to revising and evaluating written health information. With the proliferation of readability formulas and automated tools, studies assessing readability have increased substantially across diverse health domains. However, recent reviews highlight significant variability in reporting practices, including incomplete descriptions of methods, lack of justification for formula and calculator selection, and insufficient detail on text preparation and analysis. These gaps hinder meaningful comparison across studies and limit the ability to replicate findings.
The need for this guideline was reinforced by our recent eDelphi study (under review), which sought to identify and gain consensus on research priorities in order to determine the role of readability assessment in health literate document design. One of the key priorities was the need to establish best practices for conducting and reporting readability analyses. Currently, no standardized reporting framework exists for this type of research. As a result, studies often omit critical information which may compromise transparency and lead to misinterpretation of results.
The Readability Checklist addresses these issues by outlining essential reporting items for studies assessing readability of health information. It is intended for researchers, health information developers, journal editors, and peer reviewers. By promoting consistent and comprehensive reporting, the guideline aims to enhance the quality of evidence and improve methodological rigour and reporting transparency.
- Contact: Olivia Mac, Sydney School of Public Health, Faculty of Medicine and Health, The University of Sydney. Email: [email protected].
REFORH: REporting guideline For ORganoids in Human clinical studies (registered 22 January 2026)
Organoids are three-dimensional structures derived from pluripotent stem cells, adult stem cells, or primary somatic cells, and retain key histological, molecular, and genetic features of their tissue of origin. In recent years, organoid technologies have increasingly been incorporated into human clinical studies, including studies in which patient-derived organoids are generated from clinical samples and used to investigate clinical response patterns, drug sensitivity, or associations between organoid-based readouts and patient-level clinical characteristics or outcomes. Moreover, organoid models align with the “3Rs” principle—Replacement, Reduction, and Refinement—and are increasingly explored as human-relevant experimental systems that may complement traditional animal-based approaches. Recent regulatory initiatives, including those from the U.S. Food and Drug Administration, have acknowledged the potential role of organoid-based and other in vitro methods as part of New Approach Methodologies aimed at reducing reliance on animal testing in drug development and translational research contexts.
The increasing number of organoid-related clinical studies in recent years further underscores the need for a dedicated reporting guideline. As interest in organoid-based clinical research continues to grow, so does the demand for standardized reporting. However, no dedicated reporting framework currently exists for organoid-based clinical studies, increasing the risk of incomplete reporting and selective interpretation of findings. For example, clinical studies involving organoids typically require reporting on the organoid’s composition, source, derivation, and key experimental or analytical procedures. Yet existing reporting guidelines for clinical research, such as CONSORT for randomized trials and STROBE for observational studies, do not include items addressing these organoid-specific considerations.
Therefore, the project leads advocate for the development of a reporting guideline specific to organoid clinical studies. Such a guideline would not only improve the quality and transparency of organoid research reporting but also facilitate the translation of basic science into clinical applications, ultimately accelerating the advancement of personalized medicine and interdisciplinary innovation.
- Contact: Yuting Duan or Rong Zhang, Evidence-based Medicine Center (Clinical Research Center), the Affiliated Traditional Chinese Medicine Hospital, Guangzhou Medical University. Email: Yuting Duan or Rong Zhang
Informed Methodology for Publishing Advocacy Change and Transformation (registered 03 February 2026)
Advocacy in medicine represents a vital yet under-recognized career path and area of physician scholarship, including in pediatrics. Physician advocacy has been defined as “action by a physician to promote those social, economic, educational, and political changes that ameliorate the suffering and threats to human health and well-being that he or she identifies through professional work and expertise.” However, the outcomes reported from advocacy efforts often deviate from the traditional frameworks of original research or quality improvement (QI). This discrepancy in outcome reporting makes it challenging to convey advocacy efforts effectively in most peer-reviewed journals, despite these activities being widely considered timely, impactful, and essential. As a result, advocacy efforts remain underrepresented in the peer-reviewed literature and are therefore often not considered in promotion criteria.
- Contact: Allison Black, University of Louisville/Norton Children’s Hospital. Email: [email protected]
RIGHT 2.0: Reporting Items for practice Guidelines in HealThcare (registered 01 April 2026)
More than nine years have passed since the publication of the first version of the RIGHT checklist (https://www.equator-network.org/reporting-guidelines/right-statement/). Recently, factors such as user feedback from guideline interest-holders (including guideline developers, journal peer reviewers and editors, guideline users, guideline researchers, and policymakers), continuous innovations and developments in the field of guideline methodology (e.g., living guidelines and rapid recommendations), and the global surge of artificial intelligence (AI) technologies have further accelerated the need to update the RIGHT checklist. Taking AI technology as an example, large language models can play a key auxiliary role in guideline development, e.g. by assisting in the formulation of clinical questions and accelerating different steps of evidence synthesis such as the development of search strategies, literature screening, data extraction, and risk of bias assessment. However, there are currently no specific recommendations on how to standardize the reporting of these processes. The rapid evolution of methodologies has also motivated updates to other reporting and evaluation tools. Based on these considerations, the RIGHT Working Group has initiated the update process for the checklist, with the aim of providing a more applicable guideline reporting standard.
- Contact: Yaolong Chen, Research Unit of Evidence-Based Evaluation and Guidelines, Chinese Academy of Medical Sciences. Email: [email protected]
- Project website: https://www.right-statement.org
The CLAIRE statement: a comprehensive reporting and assessing guideline for artificial intelligence in diagnostic imaging (registered 01 April 2026)
Based on literature reviews and prior experience, numerous inconsistencies have been identified in the reporting of diagnostic imaging studies, especially within dental radiology. Recognising that these same concerns apply broadly across all medical imaging, a new framework is urgently required. Existing tools such as CONSORT-AI and SPIRIT-AI are limited by their narrow orientation towards trials or protocols, and often lack the architectural depth to encompass the entire diagnostic workflow. The “Completeness, Learnability, Applicability, Interpretability, Reproducibility, and Evaluation” (CLAIRE) framework aims to bridge this gap. It is presented in three components: (i) a table detailing the importance of each featured concept, (ii) a checklist, and (iii) a grading score to assess the overall reporting quality of a manuscript. CLAIRE integrates methodological rigour with real-world clinical relevance. It operationalises its six core principles into a unified checklist that uniquely mandates the disclosure of raw results for critical appraisal and heavily emphasises interpretability for clinicians who are not experts in the field. Furthermore, by incorporating an objective scoring system, CLAIRE goes beyond basic reporting to provide a reproducible assessment tool for journal editors and peer reviewers, effectively bridging the gap between technical artificial intelligence development and safe, transparent clinical application.
- Contact: Evando Silva-Filho, Departments of Dental Radiology and Imaging & Endodontics, Faculty of Dentistry, University of Fortaleza, Fortaleza, State of Ceará, Brazil. Email: [email protected]
BRIDGE-AI (Bias Reduction for Implementation, Deployment, Governance, and Equity): a reporting guideline for cross-design evaluation studies of artificial intelligence in digital health diagnosis, clinical decision support, and post-deployment monitoring (registered 01 April 2026)
BRIDGE-AI is a new, cross-design reporting guideline for evaluation studies of artificial intelligence (AI) in digital health, particularly diagnosis, prognosis, clinical decision support, workflow integration, and post-deployment monitoring. It is not conceived as an extension of a single existing guideline. Instead, it addresses reporting items that cut across study designs and evaluation phases and are currently dispersed across the AI literature.
The guideline is organised around eight workgroups: WG1 fairness and data representativeness; WG2 robustness and stress-testing; WG3 traceability, documentation and accountability; WG4 explainability and transparency; WG5 usability, human-AI interaction and automation bias; WG6 governance, ethical oversight and regulatory alignment; WG7 clinical and public health integration/workflow evaluation; and WG8 post-deployment monitoring, drift detection and lifecycle quality.
Candidate reporting items are also being grounded in ISO/IEC standards and technical specifications, including ISO/IEC TR 24027 and ISO/IEC TS 12791 for bias and unwanted bias treatment, ISO/IEC TR 24028 for trustworthiness and transparency, ISO/IEC TR 24029-1 and ISO/IEC 24029-2 for robustness assessment, ISO/IEC 23894 for AI risk management, and ISO/IEC TS 4213 for machine learning classification performance assessment.
A new guideline is needed because existing AI guidance on EQUATOR is largely design-specific or phase-specific—for example trials, protocols, diagnostic accuracy studies, prediction models, or early-stage clinical evaluation—whereas trustworthy digital health evaluation also requires consistent reporting of subgroup performance, oversight, lifecycle monitoring, workflow fit, and post-deployment drift. BRIDGE-AI will therefore complement existing guidance while remaining a standalone new reporting guideline entry.
- Contact: Hamid Reza Marateb, BIOsignal Analysis for Rehabilitation and Therapy Research Group (BIOART), Institute for Research and Innovation in Health (IRIS), Automatic Control (ESAII), Universitat Politècnica de Catalunya-BarcelonaTech (UPC), 08028 Barcelona, Spain. Email: [email protected]
- Project website: https://bridge-a-i.com/
Standards of Reporting the Network Pharmacology of Chinese Medicine (registered 01 April 2026)
Since 2010, the concepts and methods of network pharmacology have been applied in traditional Chinese medicine research, and they played a significant role in identifying potential herbal therapies during the COVID-19 pandemic. In 2021, Shao Li developed the “Guidelines for Network Pharmacology Evaluation Methods”. However, because that guideline focuses on methodological assessment and lacks detailed reporting standards, methodological recommendations, and validation methods, it cannot effectively guide the reporting of network pharmacology research. As a result, many network pharmacology reports still rely on the STROBE statement, the CONSORT statement, and the 2017 CONSORT-CHM statement as references for research design and reporting. However, these checklists are designed mainly for observational studies or randomized controlled clinical trials, and are therefore not applicable to the analysis of traditional Chinese medicine network pharmacology. This has hindered the high-quality development of the field. Drawing on the “Guidelines for Network Pharmacology Evaluation Methods”, a review of the relevant literature in domestic and international databases, and traditional Chinese medicine theory together with the core concept of “syndrome”, this project summarizes and integrates the key reporting items for traditional Chinese medicine network pharmacology, thereby promoting high-quality reporting in this field.
- Contact: Lin Haixiong, Ningxia Medical University. Email: [email protected]
Guidelines for ethical review of acupuncture and moxibustion clinical research (registered 01 April 2026)
This standard applies to investigator-initiated clinical research projects involving human participants in which acupuncture and moxibustion therapy is used as an intervention. By setting out detailed standards, it clarifies the basic requirements and procedures of ethical review, provides clear guidance and norms for the ethical review of acupuncture and moxibustion clinical research, and ensures the fairness, transparency, and traceability of the review process. It also strengthens the protection of the rights and interests of research participants, ensuring that their rights to informed consent, privacy, and autonomous choice are fully respected and protected throughout research design, implementation, and result reporting. Finally, it can improve the quality and credibility of research: strict ethical review screens for projects that are scientifically sound, reasonable, and safe, raising the overall quality and credibility of acupuncture and moxibustion clinical research and providing a reliable basis for the progress of the discipline.
- Contact: Zhao Meng, Xiyuan Hospital, Chinese Academy of Traditional Chinese Medicine. Email: [email protected]
Page last updated on 01 April 2026
