Enhancing the QUAlity and Transparency Of health Research
I recently attended the 8th Congress on Peer Review in Chicago (10-12 September 2017). Unusually, this event takes place only every four years; the series began in 1989, and I'm one of very few people who have attended all eight congresses (and no, I don't know the difference between a congress and a conference).
The EQUATOR team were invited to present a satellite workshop on Saturday 9th. We rose to the challenge and decided to tackle the topic of what journals can do to implement reporting guidelines. The rather ambitious idea was to produce a one-page "Action Plan" for journals and publishers to kick-start activities to embed reporting guidelines more effectively. Read an excellent account of the workshop by Margaret Winker on the WAME blog. Many thanks to Jason Roberts and our other correspondents who reported from the coalface of reporting guideline implementation: Allan Heinemann, Sabina Alam and Mario Malicki.
All eight meetings have run over three days, are entirely plenary, and allow each contributed presentation ten minutes with a further ten minutes for discussion. These features help to give the meeting an unusual flavour.
The meetings are always referred to as peer review congresses, but their scope has evolved over time. At the 1989 congress most of the talks were indeed about peer review, but other topics discussed included publication bias and fraud. Over time the programmes have included increasing numbers of presentations about the reporting of research. The field of publication has evolved rapidly over the 28 years, influenced massively by the internet and the introduction of digital publication. Underpinning much of this research is the consistent concern that a great deal of what is published in journals is flawed, either in methodology or in reporting. Recent years have seen those concerns broaden to include topics such as reporting guidelines, selective publication, abstracts, and trial registration, with new issues this year including predatory journals and online preprints.
The broader focus has been accompanied by a change of name this year, from peer review and biomedical publication to peer review and scientific publication.
A great feature of this year's congress was the large number of young attendees, and indeed young presenters. Some hadn't been born at the time of the first meeting in 1989.
For the third time the (almost) annual EQUATOR Lecture was held at the end of the middle day of the Congress. This year's speaker was Patrick Bossuyt from Amsterdam, who gave an excellent presentation on diagnostic accuracy studies and reviewed the development, evolution and impact of the STARD Statement, the reporting guideline for such studies. Watch the 8th EQUATOR Annual Lecture (please scroll to 4:28 to hear Patrick's talk).
I've been wondering what led me to attend the 1989 congress, also in Chicago. I had been reviewing for the BMJ for some years, and since 1987 I had been attending some of the journal's weekly manuscript meetings (then called the hanging committee), sharing the role with Martin Gardner. I had been interested in journals and the role of statisticians in peer review, and had more than once expressed concern about the poor quality of statistics in published journal articles. Martin and I (and others) had developed a checklist to help statistical peer reviewers, and Martin led a pilot study to evaluate it. He presented the findings at the 1989 congress, so we travelled across together.
From the beginning the force behind the Peer Review Congresses has been Drummond Rennie. With the support of JAMA and the AMA, he published an advance notice nearly three years before the first meeting, highlighting the lack of evidence behind editorial policies and peer review in particular, and urging people to conduct research to present at the meeting. Fifty abstracts were submitted in 1989, rising steadily to 260 in 2017. This year's congress was the biggest so far, with about 600 delegates.
For a round up of the whole event, see Hilda Bastian’s blog, or check out the running commentary on #PRC8.