What is the little thing you can do to increase reproducibility, replicability and trust in science?
05/10/2020
How can reporting quality affect reproducibility and overall trust in scientific results? With that question in mind, we participated in the Reproducibility, Replicability and Trust in Science conference organised by the Wellcome Genome Campus from 9 to 11 September 2020. Representing the UK EQUATOR Centre, we discussed sensitive new issues raised by participants, as well as long-standing issues for which no solution has yet been found.
The conference was held entirely online, in a format that allowed participants to ask questions during the presentations through a chat box and to keep in contact after the sessions through a Slack channel created specially for the event. Posters came with short one-minute videos attached, so that viewers could feel almost face-to-face with the presenters.
The event brought together representatives from research institutions and communities (including meta-researchers), publishers, funders, people organising large databanks, preprint servers and registration services, as well as the European Commission. The organisers took care to invite presenters from all over the world, bringing together researchers from regions with different resources and realities. It was also good to see strong participation of women both on the conference board and among attendees.
A few weeks after the conference, it is time to reflect on the messages and lessons from this event and to prepare for the next edition, which the organisers intend to hold in two years’ time.
The main message from the conference was that we need to embrace the uncertainty that is inherent in science: not be afraid or ashamed of it, and not treat it as a bad thing. We need to come to terms with it. Part of the reproducibility of science depends on researchers being open about what we know and what, even after careful research, we still don’t know. This allows colleagues to start investigations based on realistic assumptions, hypotheses and evidence.
By embracing uncertainty and communicating it well, we also educate the press about its existence, and about what can and cannot be disseminated as established fact. Society needs to know how taxpayer money is used in research and what results it produces; on the other hand, science is complex, and the public needs to understand that. Dealing with this complexity involves being honest about the grey areas.
A broader perspective on a research question, and on its results, allows researchers and the public to understand what is already known, what can only be guessed at, and what remains entirely unknown.
Editorial policies and retractions
From the reader’s perspective, the “storytelling” of a research report begins on the editor’s desk, where decisions about what to publish are made and where editors have the opportunity to take steps to improve reporting. Much was discussed about the fate of negative results, which risk remaining unpublished. Strategies to overcome the obstacles to publishing them include protocol publication, preprint deposits, registered reports, post-publication peer review (to name just a few), and the Octopus project (a database where researchers can register every single phase or procedure of a research project). Initiatives to increase the publication of negative results are growing, as part of a current “self-cleaning” movement in science. However, the problems persist, and researchers are increasingly turning to preprints as a way to put pressure on editors.
What is published in traditional journals still serves as an indicator for reward and incentives in research institutions: the more you publish (regardless of the quality of the work you publish), the more you advance in your career. However, publishing poor-quality studies can (and should) lead to retractions from the literature. Attendees therefore discussed with publishers ways to make the reasons for retractions more transparent: a study withdrawn for a clerical or editorial reason (including publishers’ errors) is not a sign of corruption or a reason for stigma, unlike fraud, misconduct, inappropriate authorship or failures of accountability. Retraction reasons should be communicated more clearly to the public.
Peer review, as an editorial task, is part of the process of correcting science. Presenters and attendees discussed how its quality could be improved. One idea is to strengthen editorial policies, but there still seems to be some resistance from publishers to making certain quality controls on manuscripts the responsibility of staff (or hired peer reviewers), who would check, for example, ethics issues and adherence to reporting guidelines. Although this would entail extra costs, it was accepted that it would bring more openness and transparency to research publishing.
The science ecosystem
The eroded trust in science and scientists, shown by many surveys, has prompted a movement of self-correction and self-cleaning: from the era of authority in the past to an age when transparency and integrity are encouraged and even required as essential. However, the culture of “publish or perish” works against these improvements, as it favours quantity over quality, and careers in academia depend on it. There is no single solution to the problem: acting on reporting quality alone does not solve flaws in research planning and methods. On the other hand, without good, transparent and open reporting, science cannot progress. The final debates of the conference focused on science as an ecosystem in which many forces act together towards improvement, and in which every stakeholder has a responsibility to act: scientists, editors, funders, committees and associations, and the public. All should think about the little thing they can do today to make a difference.
Researcher at the UK EQUATOR Centre