
What my retraction taught me


In the middle of the pandemic, I got an e-mail asking whether I had access to data from the experiments behind a paper I’d published in 2014. Three months later, I requested that the paper be retracted. The experience has not left me bitter: if anything, it brought me back to my original motivation for doing research.

The query was about work I was proud of. My colleagues and I had asked dozens of participants in a brain-imaging experiment to solve a visual task, which was either hard or easy. We wanted to know how distraction affects the processing of irrelevant stimuli. Our results suggested that distraction blurs the representation of images in the brain’s visual cortex, inducing a sort of neural tunnel vision. Exciting stuff, we thought at the time. It turns out that it might have been a statistical artefact.

It helped that I knew the researcher who had raised the alarm — Susanne Stoll, a PhD student with neuroscientist Sam Schwarzkopf at University College London (UCL). At a 2019 conference in Brussels, we had discussed her perplexing results in a project that built on the work I’d done during my PhD, in collaboration with Sam and others.

Susanne and her co-workers never treated me as a suspect, but as a colleague in the same boat. We all wanted to know what on earth was going on with her unexpected results. They told me how a problem with the analysis might have affected my study (and possibly many others). It involved regression towards the mean — when noisy data are measured repeatedly, values that at first look extreme become less so. I was sceptical. After all, the effects in my paper were the opposite: parameter values moved away from the mean.
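Regression towards the mean is easy to demonstrate with a few lines of simulation. The sketch below is purely illustrative (hypothetical numbers, not data or code from our study): each simulated participant has a true value measured twice with independent noise, and the measurements that look most extreme the first time look tamer the second.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each simulated participant has a true parameter value, measured in
# two sessions with independent noise.
n = 1000
true_values = rng.normal(loc=0.0, scale=1.0, size=n)
session1 = true_values + rng.normal(scale=1.0, size=n)
session2 = true_values + rng.normal(scale=1.0, size=n)

# Select the cases that look most extreme in session 1 ...
extreme = np.abs(session1) > 2.0

# ... and check how extreme the very same cases look in session 2.
print(f"mean |value| of selected cases, session 1: {np.abs(session1[extreme]).mean():.2f}")
print(f"mean |value| of same cases,     session 2: {np.abs(session2[extreme]).mean():.2f}")

# The second number is reliably smaller: extreme scores are partly
# noise, so remeasured values regress towards the mean.
```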

We set up a video meeting, and decided that Susanne would go through simulations, and I would go through my old data, if I could dig them up. That was a challenge. Only months before, my current university had suffered a cyberattack, and access to my back-up drive was prohibited at first. It would have been easy to tell the others that the data were gone (as happens all too frequently).

But Susanne and Sam wanted to crack the mystery — and that curiosity was contagious. I spent a week piecing together the necessary files and coding a pipeline to reproduce the original findings. To my horror, I also reproduced the problem that Susanne had found. The main issue was that I had used the same data for selection and comparison, a circularity that crops up again and again. That this could be a problem in our particular context didn’t dawn on me and my colleagues — nor on anyone else in the field — before Susanne’s discovery. The resulting biases were very different from the textbook example, and became apparent only through simulations and stress tests of the data.
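Circular selection (sometimes called 'double dipping') can be illustrated in the same hypothetical way; again, this is a minimal sketch, not our actual analysis pipeline. Below, two conditions are pure noise with no true difference, yet selecting voxels and comparing conditions on the same data manufactures a large effect, while fresh, independent measurements of the same selection show nothing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "voxel" responses under two conditions that, by
# construction, do not differ at all.
n_voxels = 10_000
cond_a = rng.normal(size=n_voxels)
cond_b = rng.normal(size=n_voxels)

# Circular: select voxels where A beats B, then estimate A - B
# in those same voxels, using the same data.
sel = cond_a - cond_b > 1.0
print(f"circular estimate of A - B:    {(cond_a[sel] - cond_b[sel]).mean():.2f}")

# Independent: keep the selection, but estimate the effect on fresh,
# independent measurements of the same voxels.
cond_a_new = rng.normal(size=n_voxels)
cond_b_new = rng.normal(size=n_voxels)
print(f"independent estimate of A - B: {(cond_a_new[sel] - cond_b_new[sel]).mean():.2f}")

# The circular estimate shows a large spurious effect; the
# independent one hovers around zero, because nothing real is there.
```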

Suddenly, everything felt much more serious. I immediately drafted a summary of my findings and sent them to my original co-authors, complete with a first draft of a retraction note. I will never forget the reply from my PhD adviser, Geraint Rees at UCL. His e-mail began: “Great that we’ve persisted in attempting to understand our methodology and findings!” He encouraged me to dig deeper and run an unbiased analysis. This showed trends in the direction of our original findings, but these were much less robust than we had thought.

So, we decided to retract. Our retraction notice explains what happened and points to a technical paper led by Susanne, so that others can learn from our mistakes (S. Stoll et al. Preprint at bioRxiv https://doi.org/fqs8; 2020). Over the years, the field of neuroimaging has discovered a number of possible pitfalls, and has changed its practices accordingly. My hope is that we can contribute to this evolution and foster improvements such as sanity checks with simulated data.

But the lessons here go beyond the technical.

I think that most scientists would like to be more critical of their data and conclusions, because they are driven by the simple desire to learn. However, we all face career incentives that punish flagging up mistakes and negative results. So far, my co-authors and I have not experienced repercussions from our retraction, but we were willing to face the risks. As a junior principal investigator without tenure, juggling pandemic home-schooling and remote working, I’m acutely aware of how costly a reanalysis and retraction is in terms of time and CV points. As a student, I was even told never to attempt to replicate before publishing. That is not a career I would want — luckily, my PhD adviser taught me the opposite.

What we need are incentives that foster the openness and curiosity that motivated us to become researchers in the first place. Painting each other as villains, trying to oversell or hide data or embarking on a witch-hunt will only achieve the opposite.

Seeing each other as peers with the common goal of understanding the world is win–win. When I started publishing my data and code in 2017, it was because I knew how much my own research could benefit from others doing the same. That desire to know is what kept Susanne exploring puzzling results, what led me to re-analyse my data and what encouraged our colleagues to support us along the way.

Scientific progress will always involve the detection and correction of errors. Some tenure committees and grant agencies have started asking candidates whether they practise open science. I suggest they add: ‘What have you learnt from your mistakes?’
