
Open science and cognitive psychology: An interview with Guest Editor Nivedita Mani and Mariella Paul

Nivi is a Professor at the University of Göttingen, Germany, where she leads the “Psychology of Language” research group at the Georg-Elias-Müller Institute for Psychology. Her work examines the factors underlying word learning and recognition in young children, viewing word learning as the result of a dynamic mutual interaction between the environment and the learner. She is also one of the Guest Editors of an ongoing PLOS ONE Call for Papers in developmental cognitive psychology, in collaboration with the Center for Open Science. This Call has a particular emphasis on reproducibility, transparency in reporting, and pre-registration.

Prof. Dr. Nivedita Mani

Mariella is a postdoctoral researcher in Nivi’s department. She is interested in how children’s interests shape their word learning, which she investigates using several methods, including EEG, online studies, and meta-analytic approaches. Mariella was one of the co-founders of the Open Science initiative at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, where she did her PhD, and was awarded an eLife Community Ambassadorship to promote open science.

Mariella Paul

I asked them about their views on how open science affects and shapes their research and their field.

Can you tell me about your interest in open science?

MP: The first time I heard about open science and the replication crisis was during a conference I attended during my Master’s, but I only really got into it during my PhD, when I learned much more about it through academic Twitter and started to apply it to my own research. I think the ideas around open science appealed to me as a (then very) early career researcher (ECR) because they were how I, perhaps idealistically, thought science should be done. I have heard the same sentiment from bachelor’s (or undergrad) students when giving lectures about open science practices: “Why wasn’t it always done like this?”. After learning bits and pieces from Twitter and podcasts, such as ReproducibiliTea and the Black Goat, I got in touch with other ECRs at my institute and we founded an open science initiative, organized workshops for our colleagues and ourselves to learn more about open science, and eventually even started our own ReproducibiliTea journal club, where we read and discuss papers about different open science practices.

NM: My interest in open science is relatively recent. I am quite late to the party, and my invitation is by virtue of the people in my lab who keep finding better ways to do science. My interest is driven by the fact that the small steps towards transparency and best practice that we take in successive projects not only make us more confident of the results we report but also make us calmer in planning projects. What I find interesting, and quite marvelous actually, is that this trend towards greater transparency in research and reporting is being spearheaded by young researchers. That’s really amazing to me because, as a tenured Professor, that next publication – and the lingering difficulties associated with publishing null results – is not going to impact my next paycheck, but it might well impact the future prospects of the young researchers who are leading this change, who nevertheless weigh doing science well equally with getting cool results!

How does transparency in reporting affect your own research?

MP: My PhD consisted largely of conceptual replications; that is, I replicated studies previously done with infants and adults with young children. Directly building on previous studies clearly illustrated the need for transparent reporting for me – because only with transparent reporting and shared materials can one hope to conduct a close replication. Therefore, for my own research, I aim to report my methods as transparently as possible, to make life easier for future researchers who want to run replications or meta-analyses.

NM: I think the best thing to say for it is that it frees you. There is, on the one hand, more acceptance these days for the publication of null results, but also, more importantly, greater appreciation for the scientific process rather than the scientific result. This makes it a much more relaxing climate to be a researcher in, since you don’t need to find that perfect result; you need only document that you went about looking for evidence of that effect in an appropriate manner. This makes you more conscious of critically evaluating your methods prior to testing while leaving you rather calm about the result of your manipulation. So, for instance, in my group we now routinely write up the Introduction, Methods, and Planned analyses of a paper before we start testing. This makes us think much more about what it is we are actually testing, what we plan to analyze, whether we can conduct the analyses we hope to, and whether those analyses actually test the hypotheses under consideration. I think this way of planning studies not only makes us methodologically rigorous but also makes us more likely to actually find meaningful effects.

Why do you think pre-registration matters in developmental cognitive psychology?

MP: I think pre-registration can be valuable for any confirmatory study, by adding transparency early in the research process and by decreasing researchers’ analytic flexibility. In developmental cognitive psychology in particular, we deal with unique issues. For example, when working with infants and young children, data collection and drop-outs require special attention. Pre-registration can help us set some of the parameters around these issues beforehand, for example by pre-specifying transparent data-peeking and planning a correction for sequential testing. I work a lot with EEG, where we additionally have a myriad of analytic decisions to make in how to preprocess the data. Here, too, pre-registration can decrease researchers’ analytic flexibility and reduce bias by making these decisions before seeing the data.
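
As a concrete, purely illustrative example of such a plan, the sketch below tests after each pre-registered batch of participants at a corrected threshold, so the overall false-positive rate is controlled despite repeated looks at the data. The deliberately conservative Bonferroni split, the batch size, and the simulated looking times are all assumptions for illustration; a real pre-registration might instead specify Pocock or O’Brien-Fleming boundaries.

```python
# Minimal sketch of a pre-registered "data-peeking" plan with a correction
# for sequential testing. All numbers and data are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

MAX_LOOKS = 4                                # pre-registered interim analyses
BATCH_SIZE = 12                              # infants per group between looks
ALPHA_OVERALL = 0.05
alpha_per_look = ALPHA_OVERALL / MAX_LOOKS   # Bonferroni-corrected threshold

group_a, group_b = [], []
for look in range(1, MAX_LOOKS + 1):
    # Simulated looking times (seconds); replace with real measurements.
    group_a.extend(rng.normal(loc=7.0, scale=2.0, size=BATCH_SIZE))
    group_b.extend(rng.normal(loc=6.0, scale=2.0, size=BATCH_SIZE))

    t, p = stats.ttest_ind(group_a, group_b)
    print(f"Look {look}: n={len(group_a)} per group, t={t:.2f}, p={p:.4f}")
    if p < alpha_per_look:
        print("Stopping early: pre-specified corrected threshold crossed.")
        break
```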

NM: Developmental research is plagued by many of the issues in cognitive science, unfortunately amplified by difficulties with regard to access to participant pools (babies are more difficult to recruit than undergraduate students) and resulting issues with sample size, shorter attention spans of participants (leading to shorter and less well-powered experiments), as well as greater variance in infant responding. Thinking more carefully about the study and what you actually have adequate power to do – as one is forced to with a preregistration – may help us avoid the costly mistake of running under-powered studies that eventually lead to inconclusive results. From a pragmatic point of view, preregistration in particular helps us to better motivate analysis choices that may be questioned later in the process. In a recent review of a paper, for example, we were asked why we chose a particular exclusion criterion. We did not preregister this analysis (it’s a relatively old study that is only now seeing the light of day) but based this exclusion criterion on previous work – had we preregistered it, it would have been easier for us to justify our choice. As it stands, I can see that a skeptical reviewer may be inclined to believe our choice of this exclusion criterion is post-hoc.
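
The power concern raised here can be made concrete with a quick a-priori calculation. The minimal sketch below uses statsmodels; the effect sizes are illustrative, not estimates from any particular study, but they show how quickly the required sample grows as effects shrink – and why typical infant samples are so often under-powered.

```python
# A-priori power calculation of the sort a preregistration forces you to
# confront: how many participants per group does a two-group design need
# to detect a given effect at 80% power?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.3, 0.5, 0.8):  # small, medium, large standardized effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"Cohen's d = {d}: ~{n:.0f} participants per group")
```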

How does the field of developmental cognitive psychology differ now compared to 10-15 years ago, and has open science played a role in that?

MP: I have only been in the field for a few years, but even in that time, I think open science has played a role in the development of the field. For example, large-scale replication efforts such as the ManyBabies project help us better understand central findings in our field, such as infants’ preference for speech presented in a child-directed manner. Similarly, platforms such as Wordbank – an open database of children’s vocabulary – and MetaLab – an interactive tool for meta-analysis in cognitive development – are now available for everyone to run their own studies on large-scale data.

NM: To be really honest, on a personal level, I am rather shamefaced about the practices that I believed acceptable 10 years ago. For instance, 10 years ago, I posted on social media that my “failed” experiments folder was 1.5 times larger than my “successful” experiments folder. Back then, it didn’t occur to me that the failed experiments folder (null results, to be precise) was as important as the published successful experiments folder – and indeed, they were not failures, because they were providing us with valuable information about potential contexts in which we do not find evidence for particular effects. Now, however, there is greater acceptance of such “failed” experiments, and this is to a large extent due to our increased appreciation for the scientific process (including open science practices) rather than the result. At the same time, there is greater emphasis on correct reporting of results, which, I belatedly realize, I have been on the wrong side of, by not reporting aspects of the analyses that were important to the interpretation of the results. I think this is changing too, with greater awareness of what we need to report when it comes to the analyses we perform.

What do you see as the greatest challenges for the field going forward?

MP: I think with the current development of the field, and psychology in general, there are many challenges as well as opportunities. For many, including myself, one of the most direct challenges recently has been the restrictions on data collection due to the pandemic. With lab studies as we know them not having been possible (or only to a very limited degree) for over half a year now, many projects have needed to be delayed, and we have been forced to rethink how we plan new experiments. However, this unique situation also offers the possibility to conduct studies that we perhaps usually would not have thought of. For example, meta-analyses of previous studies in the literature can be conducted even when the lab is closed, and so, of course, can online studies. The time away from the lab can also be used to get started on new open science practices. For example, a registered report can be written and submitted so that the stage 1 protocol [i.e., a Registered Report Protocol at PLOS ONE] is already accepted by the time testing can be resumed.

NM: We seem to have achieved greater understanding of the requirements of good science, but I do worry about the extent to which we can implement these requirements. How can we run well-powered studies in developmental research, given restrictions on access to participant pools and infant attention spans? Cross-laboratory efforts (like the ManyBabies projects or a recent project on the effect of the Covid-19 lockdown on language development that I am involved in) may be the way forward here, allowing us to pool resources across laboratories. Equally, we are looking more deeply into sequential Bayesian designs, which may allow us to get around some of the problems I have mentioned (sample size, power, inconclusive results). In general, I think we need to get more inventive about how to continue doing good developmental research.
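
The sequential Bayesian designs mentioned here can be illustrated with a short simulation. The sketch below is a minimal example, not the design Nivi’s group uses: it recruits in pre-registered batches, computes a default Bayes factor after each batch, and stops once the evidence crosses a pre-specified bound in either direction or a maximum sample is reached. The Bayes factor follows the JZS default of Rouder et al. (2009); the batch size, bounds, and simulated data are purely illustrative assumptions.

```python
# Minimal sketch of a sequential Bayes factor design. Data are simulated.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def jzs_bf10(t, n, r=0.707):
    """Default (JZS) Bayes factor BF10 for a one-sample or paired t-test."""
    nu = n - 1
    null_lik = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    def integrand(g):
        return ((1 + n * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n * g * r**2) * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    alt_lik, _ = quad(integrand, 0, np.inf)
    return alt_lik / null_lik

rng = np.random.default_rng(seed=2)
BATCH, MAX_N = 10, 80    # pre-registered batch size and sample-size ceiling
BOUND = 10.0             # stop once BF10 >= 10 or BF10 <= 1/10

scores = np.array([])
while scores.size < MAX_N:
    # e.g. per-infant difference scores; replace with real data.
    scores = np.append(scores, rng.normal(loc=0.3, scale=1.0, size=BATCH))
    t, _ = stats.ttest_1samp(scores, popmean=0.0)
    bf = jzs_bf10(t, scores.size)
    print(f"n={scores.size:3d}  BF10={bf:7.2f}")
    if bf >= BOUND or bf <= 1 / BOUND:
        print("Stopping: pre-specified evidence threshold reached.")
        break
```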

At the same time, I don’t know if we really know how to analyze our data. In asking the more critical questions that the field is asking these days, I don’t really see one correct answer – and unfortunately, I don’t feel qualified to choose one answer over another. Again, I think greater transparency in research reporting helps here, because I get to post my data, my analyses, and the results that I obtained with these analyses. This allows someone else to look through my data and analyze it differently to see if the pattern holds. Having said that, I also don’t think we are where we could be with regard to this solution – at least, I know my group isn’t – in terms of how well we archive our data and how transparent it is for others to use. That is definitely going to be one of the challenges we will face going forward.
