Publications

Under Review/In Revision

  • Field, S. M., Thompson, J., Penders, B., de Rijcke, S., & Munafò, M. R. Exploring the Dimensions of Responsible Research Systems and Cultures: A Scoping Review. Manuscript under review with Royal Society Open Science.

ABSTRACT: The responsible conduct of research is foundational to the production of valid and trustworthy research. Despite this, our grasp of what dimensions responsible research (RR) might contain, and how it differs across disciplines (i.e., how it is conceptualized and operationalized), is tenuous. Moreover, many initiatives related to developing and maintaining RR are developed within disciplinary and institutional silos, which naturally limits the benefits that RR practice can have. To this end, we are working to develop a better understanding of how RR is conceived and realized, both across disciplines and across institutions in Europe. The first step is to scope existing knowledge on the topic, of which this scoping review is a part. We searched several electronic databases for relevant published and grey literature. An initial sample of 715 articles was identified, with 75 articles included in the final sample for qualitative analysis. We find several dimensions of RR that are underemphasized in, or excluded from, the well-established Singapore Statement on Research Integrity of the World Conferences on Research Integrity (WCRI), and explore facets of these dimensions that are especially relevant to a range of research disciplines.

  • Field, S. M., van Ravenzwaaij, D., Hoek, J. M., Pittelkow, M.-M., & Derksen, M. Qualitative Open Science – Pain Points and Perspectives. Manuscript under review with the British Journal of Social Psychology.

ABSTRACT: Adopting some practical elements of open science (a movement whose goal is to make scientific research available to everyone) presents unique challenges for qualitative researchers, particularly when it comes to data sharing. In this article, we discuss the issue of open qualitative data, arguing that while concerns about ethics and loss of data quality are legitimate, they do not pose so great a problem as to preclude qualitative researchers from effectively practicing open science. We describe the cost-benefit balance that each qualitative researcher weighs when choosing whether or not to share their data, and highlight that qualitative research practice lends itself to transparency and integrity through its reliance on reflexivity and on other practices such as member checking and the use of multiple coders. We conclude with a reminder that fruitful open science does not require one to engage in every possible practice for a given study, only those that are appropriate and feasible.

Published

  • Pittelkow, M.-M., Field, S. M., Isager, P. M., van't Veer, A. E., & van Ravenzwaaij, D. (2023). The Process of Replication Target Selection: What to Consider? Royal Society Open Science, 10, 210586.

ABSTRACT: Increased execution of replication studies contributes to the effort to restore the credibility of empirical research. However, a second generation of problems arises: the number of potential replication targets is at a serious mismatch with available resources. Given limited resources, replication target selection should be well-justified, systematic, and transparently communicated. At present, the discussion of what to consider when selecting a replication target is limited to theoretical discussion, self-reported justifications, and a few formalized suggestions. Here, we propose a study to involve the scientific community in creating a list of considerations generally regarded as important by social scientists with regard to replication target selection. We will employ a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, individuals who previously selected a replication target will be surveyed about their considerations. Results from the survey will be incorporated into the preliminary list of considerations. Lastly, the updated list will be sent to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we aim to establish consensus regarding what to consider when selecting a replication target.

  • van Ravenzwaaij, D., Bakker, M., Heesen, R., Romero, F., van Dongen, N., Crüwell, S., Field, S. M., Held, L., Munafò, M. R., Pittelkow, M.-M., Tiokhin, L., Traag, V., van den Akker, O., van't Veer, A., & Wagenmakers, E.-J. (2023). Perspectives on Scientific Error. Royal Society Open Science, 10, 230448.

ABSTRACT: Theoretical arguments and empirical investigations indicate that a high proportion of published findings are false or do not replicate. The current position paper provides a broad perspective on this scientific error, focusing both on reform history and on opportunities for future reform. Talking points are organized along four main themes: methodological reform, statistical reform, publishing reform, and institutional reform. For each of these four themes, we discuss the current state of affairs, existing knowledge gaps, and opportunities for future reform. The resulting agenda should provide a useful resource for ushering in an era that is marked by a lower prevalence of scientific error.

  • Hoek, J. M.*, Field, S. M.*, de Vries, Y. A., Linde, M., Pittelkow, M.-M., Muradchanian, J., & van Ravenzwaaij, D. (2021). Rethinking Remdesivir for COVID-19: A Bayesian Reanalysis of Trial Findings. PLOS ONE, 16(7), e0255093. *Equal first author

ABSTRACT: Background: Following testing in clinical trials, the use of remdesivir for treatment of COVID-19 has been authorized in parts of the world, including the USA and Europe. Early authorizations were largely based on results from two clinical trials. A third study published by Wang et al. was underpowered and deemed inconclusive. Although regulators have shown an interest in interpreting the Wang et al. study, under a frequentist framework it is difficult to determine whether the non-significant finding was caused by a lack of power or by the absence of an effect. Bayesian hypothesis testing does allow for quantification of evidence in favor of the absence of an effect. Findings: Results of our Bayesian reanalysis of the three trials show ambiguous evidence for the primary outcome of clinical improvement and moderate evidence against the secondary outcome of decreased mortality rate. Additional analyses of three studies published after initial marketing approval support these findings. Conclusions: We recommend that regulatory bodies take all available evidence into account for endorsement decisions. A Bayesian approach can be beneficial, in particular in the case of statistically non-significant results. This is especially pressing when limited clinical efficacy data are available.

  • Derksen, M., & Field, S. M. (2022). The Tone Debate: Knowledge, Self, and Social Order. Review of General Psychology, 26(2).

ABSTRACT: In the replication crisis in psychology, a 'tone debate' has developed. It concerns the question of how to conduct scientific debate effectively and ethically: how should scientists offer critique without unnecessarily damaging relations? The increasing use of Facebook and Twitter by researchers has made this issue especially pressing, as these social technologies have greatly expanded the possibilities for conversation between academics, but there is little formal control over the debate. In this paper we show that psychologists have tried to solve this issue with various codes of conduct, with an appeal to virtues such as humility, and with practices of self-transformation. We also show that the polemical style of debate, popular in many scientific communities, is itself being questioned by psychologists. Following Shapin and Schaffer's analysis of the ethics of Robert Boyle's experimental philosophy in the 17th century, we trace the connections between knowledge, social order, and subjectivity as they are debated and revised by present-day psychologists.

  • Field, S. M., & Derksen, M. (2020). Experimenter as automaton; experimenter as human: Exploring the position of the researcher in scientific research. European Journal for Philosophy of Science, 11, Article 11, 1-21.

ABSTRACT: The crisis of confidence in the social sciences has many corollaries which impact our research practices. One of these is a push towards maximal and mechanical objectivity in quantitative research. This stance is reinforced by major journals and academic institutions that subtly yet certainly link objectivity with integrity and rigor. The converse implication of this may be an association between subjectivity and low quality. Subjectivity is one of qualitative methodology's best assets, however. In qualitative methodology, that subjectivity is often given voice through reflexivity. It is used to better understand our own role within the research process, and is a means through which the researcher may oversee how they influence their research. Given that the actions of researchers have led to the poor reproducibility characterising the crisis of confidence, it is worthwhile to consider whether reflexivity can help improve the validity of research findings in quantitative psychology. In this report, we describe a combined approach to research: data from a series of interviews help us elucidate the link between reflexive practice and quality of research, through the eyes of practicing academics. Through our exploration of the position of the researcher in their research, we shed light on how the reflections of the researcher can impact the quality of their research findings, in the context of the current crisis of confidence. The validity of these findings is tempered, however, by limitations of the sample, and we advise our readers to exercise caution in interpreting our conclusions.

  • Field, S. M., Wagenmakers, E.-J., Hoekstra, R., Kiers, H. A. L., Ernst, A. F., & van Ravenzwaaij, D. (2020). The Effect of Preregistration on Trust in Empirical Research Findings: Results of a Registered Report. Royal Society Open Science, 7, 181351.

ABSTRACT: The crisis of confidence has undermined the trust that researchers place in the findings of their peers. In order to increase trust in research, initiatives such as preregistration have been suggested, which aim to prevent various questionable research practices. As it stands, however, no empirical evidence exists that preregistration does increase perceptions of trust. The picture may be complicated by a researcher's familiarity with the author of the study, regardless of the preregistration status of the research. This registered report presents an empirical assessment of the extent to which preregistration increases the trust of 209 active academics in the reported outcomes, and how familiarity with another researcher influences that trust. Contrary to our expectations, we report ambiguous Bayes factors and conclude that we do not have strong evidence towards answering our research questions. Our findings are presented along with evidence that our manipulations were ineffective for many participants, leading to the exclusion of 68% of complete datasets and, as a consequence, an underpowered design. We discuss other limitations and confounds which may explain why the findings of the study deviate from a previously conducted pilot study. We reflect on the benefits of using the registered report submission format in light of our results. The OSF page for this registered report and its pilot can be found at http://dx.doi.org/10.17605/OSF.IO/B3K75.

  • Field, S. M., Hoekstra, R., Bringmann, L., & van Ravenzwaaij, D. (2019). When and Why to Replicate: As Easy as 1, 2, 3? Collabra: Psychology, 5(1), 46. This paper won the Heymans Institute's Snijders-Kouwer prize for best PhD student paper published in 2019.

ABSTRACT: The crisis of confidence in psychology has prompted vigorous and persistent debate in the scientific community concerning the veracity of the findings of psychological experiments. This discussion has led to changes in psychology's approach to research, and several new initiatives have been developed, many with the aim of improving our findings. One key advancement is the marked increase in the number of replication studies conducted. We argue that while it is important to conduct replications as part of regular research protocol, it is neither efficient nor useful to replicate results at random. We recommend adopting a methodical approach toward the selection of replication targets to maximize the impact of the outcomes of those replications, and minimize waste of scarce resources. In the current study, we demonstrate how a Bayesian re-analysis of existing research findings followed by a simple qualitative assessment process can drive the selection of the best candidate article for replication.

  • Field, S. M., Wagenmakers, E.-J., Newell, B. R., Zeelenberg, R., & van Ravenzwaaij, D. (2016). Two Bayesian tests of the GLOMOsys Model. Journal of Experimental Psychology: General, 145(12), e81-e95.

ABSTRACT: Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of 2 preregistered replication attempts of 1 experiment by Foerster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model.

Editorial & Reviewing

I am an action editor for Collabra: Psychology and the metascience section editor of the Journal of Trial and Error.

I have reviewed scholarly articles for:

  • Meta-Psychology

  • Royal Society Open Science

  • European Journal for Philosophy of Science

  • Journal of European Psychology Students

  • Behavior Research Methods

  • Language Learning

I review proposals/preprints for:

  • Collaborative Replications and Education Project (CREP)

  • Psychological Science Accelerator (methods reviewer)

  • PCI Meta-Research (recommender)