by Maike Dahrendorf

Last month, BITSS – the Berkeley Initiative for Transparency in the Social Sciences – held its annual Research Transparency and Reproducibility Training (RT2) in Washington, D.C. Somewhat to my surprise, I found myself among the forty-or-so participants walking into a meeting room on a Wednesday morning. Fueled by coffee and pastries, all of us were eager to see what the workshop would bring and how we could fit into the world of research transparency; or better – how it could fit into ours.
The Berkeley Initiative for Transparency in the Social Sciences, established in 2012 by the Center for Effective Global Action (CEGA), aims to promote and strengthen transparency, reproducibility and openness within social science research and policy-making. At least once a year, the team organizes this 3-day workshop, which is designed to provide researchers of all career stages with tools and practices for transparent and open research.
Attending the RT2 was an incredible opportunity for a young researcher like me. I knew little of what to expect: Who else would be there? Did they choose me by accident? Was I too junior to attend if I wasn’t part of a PhD program yet? What would I learn? And – most importantly – how could I best apply what I learned to my own work and use it to create new content for SIOS events? Slightly nervous, I masked my jet-lag with more coffee and took out my notebook.

Scientific discovery is largely influenced by the ethical values that we hold. As young students we are taught that science is an objective measure of reality – that we discover true paradigms and accumulate evidence to better understand the world around us. Ted Miguel’s introduction (1) to the RT2 reminded us of the ideals in which science is rooted. First and foremost, research should be ‘impersonal’ – free of discrimination. Science should be open to everyone – regardless of race, ethnicity or gender. Similarly, scientific discovery should be motivated by finding the truth through objective measures, not driven by personal gain. Ultimately, scientific discovery belongs to the community and should be openly accessible – not least because results should be verifiable by other researchers. While these ‘Mertonian norms’ seem intuitive and ‘right’, everyone who has spent some time in the academic realm recognizes that they are not always met. The RT2 aimed specifically to give us the tools and knowledge to recognize and counter common ‘closed’ or ‘biased’ practices within our fields.
The core of the workshop was the research cycle: the “RT2 Roadmap”.

It covers the planning stages of a project, the data management and analysis, and the final dissemination of the results. At each of these stages, a researcher can take steps to minimize their own biases (e.g., pre-registration (2)), improve replicability (e.g., version control) and work against the publication of only significant findings (e.g., Registered Reports (3)). The RT2 included various talks and hands-on sessions to familiarize ourselves with these practices and to let each of us see how they fit into our own workflow. Rather than bore you with a reiteration of each session, I simply want to put forward what left the greatest impression on me.
That other social sciences have different problems.
This might seem obvious and, had you asked me a few weeks ago, I would have wondered myself how this could be the main lesson I took away from RT2. As a psychologist (in training), I have come across the ‘replication crisis’ many times. I know about common issues within the field, I am familiar with open practices and, in my opinion, there should be no reason why research should not always be as open as possible. But, admittedly, I had spent little time thinking about research in fields other than psychology. When I read the list of RT2 attendees, I was surprised to find only 3-4 psychologists among around 40 participants. Ultimately, for me, the most valuable skill I learned in those short 3 days is to look at ‘Open Science’ and transparency from a different angle – not only from the perspective of my own field.
As someone passionate about and interested in the Open Science movement, I would say that psychology itself has made great progress in terms of openness. Undoubtedly, there is still plenty of room to grow, but I feel psychology has recognized many of its problems and more researchers are taking steps towards transparency – which can be seen in the adoption of pre-registrations and in journals adopting more open guidelines (4).
And yet, talking to researchers from different fields like economics, public policy or political science has shown me that – while the social sciences have some problems in common (e.g., publication bias towards significant findings) – there are distinct differences in the problems each field faces. Psychology is often very experimental; we create hypotheses, test them, and update our knowledge. Our data is collected specifically for a set of studies and with pre-specified hypotheses (hopefully…). We collect data for scientific and often experimental purposes. And this is where I found myself surprised to see that other fields use completely different data. Again, this should not be surprising, but coming from a narrow background, for me it was. Many of the other social scientists talked about how they mostly use secondary and observational data from giant databases and large-scale public surveys – data that was collected before a hypothesis was formulated, because the very nature of the data was not experimental. The question of how to write a pre-registration suddenly becomes a whole lot more difficult. Can you even pre-register your hypotheses and analyses when you have already worked with the dataset? And even if you haven’t, how do you prove that you haven’t peeked at the data before? These are issues that I simply hadn’t come across before.
Similar issues arise with sharing your data openly. Some public datasets are openly available (great!), but some aren’t – even if the research was paid for by taxpayers. In this instance, the social sciences are united. Whether political scientists, psychologists or economists, many researchers like to ‘hoard’ their own data; after all, they were the ones who collected it. Why share it with others? This, I believe, is where people misunderstand a core value of scientific discovery. In my opinion, any research we produce does not belong to us. It belongs to the community, to fellow academics, to the general public – not to journals or individual scientists. And it was sad to see that clinging on to one’s own data is still the reality in most social sciences, including psychology. Of course, not all data can easily be anonymized, and some are extremely sensitive. Yet, if the intent to share is present, one can find a way – even if it’s only a portion of the data.
Getting back to the actual story: the RT2 team gave us plenty of hands-on workshops to try out and practice research transparency – through data de-identification, creating pre-analysis plans and learning Git version control. Version control is something that I will especially try to incorporate into my workflow (maybe this will put an end to my folder structure, which currently includes “ReportResults1”, “ReportResults1.1”, “ReportResults2.0”, “ReportResultsFeedbackMD” …). The importance of an organised system and workflow is obvious. And, in all honesty, I believe this should be taught to students before they jump onto any project. Accidentally deleting others’ code or overwriting parts of documents does not only happen in student groups. In a perfect world, everyone would use the same software to control versions and reduce these errors. At RT2, we had a long discussion about the problem that, while these skills are great and useful, many of them are hard to establish in collaborative projects. Realistically, hardly any of my fellow students (and supervisors) use Git. The only thing we can do, at this point, is to encourage others to try it by showing them easily accessible resources and offering our help. And, hopefully, sometime in the future, the accidentally deleted code will be found in a previous version of our document.
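To give a flavour of what that looks like in practice, here is a minimal sketch of a basic Git workflow of the kind taught at RT2 – the file name and commit messages are placeholders of my own, not anything prescribed by the workshop:

```
# start tracking a project folder
git init

# record the first version of the report (hypothetical file name)
git add ReportResults.Rmd
git commit -m "First draft of the results report"

# ...edit the file, then record the new state under the SAME file name...
git commit -am "Incorporate supervisor feedback"

# list every version that was ever recorded
git log --oneline

# restore a file that was accidentally deleted or overwritten
git checkout HEAD~1 -- ReportResults.Rmd
```

The appeal is that a single file name carries its entire history, so the “ReportResults1.1”-style copies become unnecessary – and nothing is ever truly lost.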
My final takeaway from the 3-day workshop was that I am incredibly fortunate. Without realising it, I chose a university (and department) that is a forerunner in many things “Open Science”. We have many passionate staff members who teach Open Science and encourage students to think critically about these matters. Even supervisors who might not have pre-registered or shared data before are open and encouraging when students raise these issues. Our own student initiative has received enormous support from all fronts, and it is incredible to see all the interested students coming to our events and asking questions. However, the RT2 reminded me that Amsterdam can sometimes feel like an “Open Science” bubble – it is easily forgotten that this openness is not present in every psychology department. Talking to other RT2 participants and to friends from other universities, I am reminded that there is still a lot to do to change the current approach to research. And through our work with SIOS, we can hopefully empower a new generation of scientists to follow open principles and to make transparency the norm.
Finally, I would like to thank the entire BITSS team for making this workshop an incredible and fun experience, and for supporting students like me in becoming better researchers.
Additional Resources:
The RT2 OSF page with all resources & slides: https://osf.io/3mxrw/
BITSS: https://www.bitss.org/
For Twitter fans: @UCBITSS