The benefits of "crowdsourced" research

What is crowdsourced research?  
Briefly, “crowdsourced” research involves several individual researchers who coordinate their resources to accomplish goals that would be difficult to achieve individually. Although there are several different ways in which researchers can work collaboratively, this post focuses on projects where several different researchers each collect data that will be pooled together into a common analysis (e.g., the “Many Labs” projects, Ebersole et al., 2016; Klein et al., 2014; Registered Replication Reports [RRR], Cheung et al., 2016; Wagenmakers et al., 2016; “The Pipeline Project,” Schweinsberg et al., 2016).
Below I try to convince you that crowdsourcing is a useful methodological tool for psychological science and describe ways you can get involved. 
Eight benefits of crowdsourced research
First, crowdsourced research can help achieve greater statistical power. A major limiting factor for individual researchers is the available sample of participants for a particular study. Commonly, individual researchers do not have access to a large enough pool of participants, or enough resources (e.g., participant compensation) to gain such access, to complete a properly powered study. Or researchers must collect data for a long period of time to obtain their target sample size. Because crowdsourced research projects aggregate results from many labs, a major benefit is that such projects yield larger sample sizes and more precise effect size estimates than any of the individual labs that contribute to the project. 
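To make the power benefit concrete, here is a minimal sketch in Python using statsmodels; the effect size, per-condition sample size, and number of labs are hypothetical illustrations, not values from any particular project.

```python
# Illustrative power calculation: a small true effect (d = 0.2) tested
# with one lab's sample versus the pooled sample from ten labs.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d = 0.2        # hypothetical standardized effect size (Cohen's d)
alpha = 0.05   # two-tailed significance threshold

single_lab = analysis.power(effect_size=d, nobs1=50, alpha=alpha, ratio=1.0)
pooled = analysis.power(effect_size=d, nobs1=500, alpha=alpha, ratio=1.0)

print(f"Power with 50 participants per condition (one lab):   {single_lab:.2f}")  # ~0.17
print(f"Power with 500 participants per condition (ten labs): {pooled:.2f}")      # ~0.88
```

Under these made-up numbers, a single lab running 50 participants per condition has roughly a one-in-six chance of detecting the effect, whereas the pooled sample detects it almost nine times out of ten.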
Second, crowdsourced research provides information about the robustness of an effect to minor variations in context. Conclusions from any individual instantiation of an effect (e.g., an effect demonstrated in a single study within a single sample at a single point in time) are inevitably overgeneralized when summarized (e.g., Greenwald, Pratkanis, Leippe, & Baumgardner, 1986). That is, any individual study occurs within an idiosyncratic combination of an indefinite number of contextual variables, most of which are theoretically irrelevant to the effect (e.g., time of day, slight moment-to-moment variations in the temperature of the room, the color of socks the researcher is wearing, what the seventh participant ate for breakfast the Saturday prior to their study appointment, etc.). Thus, a summary of an effect “overgeneralizes” to contexts beyond what was actually present in the study that is being summarized. It is only when an effect is tested across several levels and combinations of these myriad contextual variables that strong inferences can be made about the theoretically invariant characteristics of the effect (e.g., the effect is observed across a range of researcher sock colors; thus, the observation of the effect is unlikely to depend on any specific color of socks).  
A benefit of crowdsourced research is that the results inherently provide information about whether the effect is detectable across several slightly different permutations and combinations of contextual variables. Consequently, crowdsourced research allows for stronger inferences to be made about the effect across a range of contexts. Notably, even if a crowdsourced research project “merely” uses samples of undergraduate students in artificial laboratory settings, the overall results of the project would still test whether the effect can be obtained across contexts that vary slightly from sample to sample and laboratory to laboratory. Although this hypothetical project may not exhaustively test the effect across a wide range of samples and conditions, the results from the overall crowdsourced research project will test the robustness of the effect more than the results from any individual sample within the project.
Third, because the goal of most crowdsourced research is the aggregation or synthesis of results from several different labs that have agreed to combine their results a priori, another benefit is that inclusion bias (which would be comparable to publication bias among published studies) is unlikely within the studies that contribute to the project. Consequently, the overall results from crowdsourced research projects are less likely to suffer from inclusion bias than any comparable synthesis of already-completed research, such as a meta-analysis. Rather, crowdsourced research projects involve several studies that provide estimates that vary around a population effect and are unlikely to systematically include or exclude studies based on those studies’ results. The lack of inclusion bias arises because individual contributors to a crowdsourced research project do not need to achieve a particular type of result to be included in the overall analysis. Rather, because the overall project hinges on several contributors each successfully executing comparable methods, individual contributors have a motivation to adhere to the agreed-upon methods as closely as possible.
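As a concrete illustration of what “estimates that vary around a population effect” look like when pooled without any result-based filter, here is a minimal fixed-effect (inverse-variance) pooling sketch in Python; the lab names, estimates, and standard errors are invented for illustration.

```python
# Minimal fixed-effect (inverse-variance) pooling of per-lab estimates.
# Every lab enters the pooled analysis regardless of whether its own
# result was "significant" -- there is no result-based inclusion filter.
import math

# Hypothetical (effect size estimate, standard error) pairs for each lab.
labs = {
    "Lab A": (0.31, 0.12),
    "Lab B": (0.08, 0.15),
    "Lab C": (0.22, 0.10),
    "Lab D": (-0.05, 0.14),
}

weights = {name: 1 / se**2 for name, (_, se) in labs.items()}
pooled_est = sum(weights[name] * est for name, (est, _) in labs.items()) / sum(weights.values())
pooled_se = math.sqrt(1 / sum(weights.values()))

print(f"Pooled estimate: {pooled_est:.2f} (SE = {pooled_se:.2f})")  # ~0.17 (SE ~0.06)
```

Each lab’s estimate bounces around the pooled value, and no lab is dropped for landing on the “wrong” side of significance.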
Fourth, and related to the points made in the previous paragraph, because a crowdsourced research project involves the coordination of several labs, it is unlikely there would be post-hoc concealment of methodological details, switching of the planned analyses, file-drawering of “failed” studies, optional stopping of data collection, etc., without several other contributors knowing about it. This distribution of knowledge likely makes the final project more transparent and better documented than a comparable set of non-crowdsourced studies. In other words, it would literally take a conspiracy to alter the methods or to systematically exclude results from contributing labs of a crowdsourced research project. Consequently, because crowdsourced research projects inherently involve the distribution of knowledge across several individuals, it is reasonable for readers to assume that such projects have strongly adhered to a priori methods. 
Fifth, comparisons of the results from the contributing labs can provide information (but not all information) about how consistently each lab executed the methods. Although cross-lab consistency of results is not inherently an indicator of methodological fidelity, any individual lab that found atypical results (e.g., surprisingly strong or surprisingly weak effects), for whatever reason, would be easily noticeable when compared to the other labs in the crowdsourced research project and should be examined more closely.
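To sketch what such a comparison might look like in practice, here is a simple standardized-difference check written for illustration (it is not a procedure prescribed by any of the projects above, and all numbers are hypothetical): a lab whose estimate sits far from the pooled estimate is flagged for closer inspection.

```python
# Flag labs whose estimate deviates unusually far from the pooled estimate.
# A crude screen for atypical results, not a verdict on methodological fidelity.

labs = {  # hypothetical (effect size estimate, standard error) pairs
    "Lab A": (0.31, 0.12),
    "Lab B": (0.20, 0.15),
    "Lab C": (0.75, 0.11),   # a surprisingly strong result
    "Lab D": (0.15, 0.14),
}

weights = {name: 1 / se**2 for name, (_, se) in labs.items()}
pooled = sum(weights[name] * est for name, (est, _) in labs.items()) / sum(weights.values())

for name, (est, se) in labs.items():
    z = (est - pooled) / se
    flag = "  <-- examine more closely" if abs(z) > 2 else ""
    print(f"{name}: estimate = {est:+.2f}, z vs. pooled = {z:+.2f}{flag}")
```

With these made-up numbers only Lab C is flagged (z ≈ 3.1). In a real project, a flagged lab’s procedures and data would be examined rather than simply excluded, since an atypical result can reflect a genuine moderator as easily as a procedural slip.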
Sixth, the nature of crowdsourced research projects means that the methods are already in a form that is transferable to other labs. For example, there would already be a survey that has been demonstrated to be understood by participants from several different labs, there would already be methods that do not depend on an idiosyncratic physical feature of one lab, and there would be an experiment that has been demonstrated to work on several different computers or is housed online where it is accessible to anybody with internet access. The transferability of methods does not inherently make the methods more appropriate for testing a hypothesis, but it does make it easier for other researchers who were not contributors to the original crowdsourced study to replicate the methods in a future study.
Seventh, although there have been calls to minimize the barriers to publishing research (and thus reduce the negative impact of file drawers; e.g., Nosek & Bar-Anan, 2012), some have opined that research psychologists should be leery of the resulting information overload and strive to publish fewer-but-better papers instead (e.g., Nelson, Simmons, & Simonsohn, 2012). Crowdsourced research seems to address both the file drawer problem and the concern about information overload. Take an RRR as an example. Imagine if each individual contributor in the RRR conducted a close replication of a previously-published effect and tried to publish their results independently of one another, and another researcher then gathered those studies and published a meta-analysis synthesizing them. I do not believe that each manuscript would necessarily be publishable on its own. And, even in the unlikely event that several manuscripts each describing one close replication were published, there would be a significant degree of overlap between those articles (e.g., the Introduction sections presumably would largely cover the same literature, which would be tiresome for readers). Thus, several publications each describing one close replication of an effect are inefficient for journals, which would not want to tax editors and reviewers with several substantially overlapping articles; for the researchers, who do not need to write manuscripts that are largely redundant with one another (plus, each manuscript is less publishable as a stand-alone description of one replication attempt); and for readers, who should not have to slog through several redundant publications. Crowdsourced research projects provide one highly informative presentation of the results, readers only need to find and read one manuscript, and editors and reviewers only need to evaluate one manuscript. Also, because the one crowdsourced manuscript would include all of the authors, there is no loss in the number of authors who get a publication. The result is fewer, but better, publications.*  
Finally, researchers at institutions with modest resources can contribute their resources to high-quality research. Thus, crowdsourced research can be more democratic than traditional research. There are hundreds of researchers who have access to resources (e.g., time, participants, etc.) that may be insufficient individually but could be incredibly powerful collectively. There may also be researchers who mentor students working within a fixed period of time (e.g., a semester or an academic year) and who therefore need projects where the hypotheses and materials are “ready to go.” Crowdsourced research projects ensure that scientific contributions do not only come from researchers who have enough resources to be self-sufficient. 
Three ways to get involved
First, stay up-to-date on upcoming opportunities. Check out StudySwap (https://osf.io/view/studyswap/), an online platform to facilitate crowdsourced research. Follow StudySwap on Twitter (@Study_Swap) and like StudySwap on Facebook (https://www.facebook.com/StudySwapResearchExchange/). Also follow the RRR group (https://www.psychologicalscience.org/publications/replication) and Psi Chi's NICE project (https://osf.io/juupx/) to hear about upcoming projects for you and your students. Crowdsourced research projects only work when there are lots of potential contributors who are aware of opportunities. 
Second, Chris Chartier and I are excited to announce an upcoming Nexus (i.e., special issue) in Collabra: Psychology on crowdsourced research. Although the official announcement will be coming in the near future, we are starting to identify individuals who may be interested in leading a project. This Nexus will involve a Registered Reports format of crowdsourced research projects we colloquially call Collections^2 (pronounced simply as “collections,” but visually denoted as a type of crowdsourced research by the capital C and the exponent). Collections^2 are projects that involve collections, or groups, of researchers who each collect data that will be pooled together into common analyses (get it? data collection done by a collection of researchers = a Collection^2) and are the same kind of crowdsourced project discussed above.**
Collections^2 that would qualify for inclusion in the Nexus can be used to answer all sorts of research questions. Here is a non-exhaustive list of the types of Collections^2 that are possible:
  1. Concurrent operational replication Collections^2: Several researchers simultaneously conduct operational replications of a previously-published effect or of a novel (i.e., not previously-published) effect. These projects can test one effect (such as some of the previous RRRs) or can test several effects within the data collection process (such as the Many Labs projects). 
  2. Concurrent conceptual replication Collections^2: Projects where there is a common hypothesis that will be simultaneously tested at several different sites, but there are several different operationalizations of how the effect will be tested. The to-be-tested effect can either be previously-published or not. These projects would test the conceptual replicability of an effect and whether the effect generalizes across different operationalizations of the key variables. 
  3. Construct-themed Collections^2: Projects where researchers are interested in a common construct (e.g., trait aggression) and several researchers collect data on several outcomes associated with the target construct. This option is ideal for collections of researchers with a loosely common interest (e.g., several researchers who each have an interest in trait aggression, but who each have hypotheses that are specific to their individual research).
  4. Population-themed Collections^2: Projects where contributing researchers have a common interest in the population from which participants will be sampled (e.g., vegans, atheists, left-handers, etc.). This sort of collaboration would be ideal for researchers who study hard-to-recruit populations and want to maximize participants’ time. 
  5. And several other projects that broadly fall under the umbrella of crowdsourced research (there are lots of smart people out there; we are excited to see what people come up with).

This Nexus will use a Registered Reports format. If you are interested in leading a Collection^2 or just want to bounce an idea off of somebody, then feel free to contact Chris or me to discuss the project. At some point in the near future, there will be an official call to submit Collections^2 proposals, and lead authors can submit their ideas (they do not need to have all of the contributing labs identified at the point of the proposal). We believe the Registered Reports format is especially well suited for these Collection^2 proposals. Collections^2 involve a lot of resources, so we want to avoid any foreseeable mistakes prior to the investment of those resources. And we believe that having an In-Principle Acceptance is critical for the proposing authors to effectively recruit contributing labs to join a Collection^2. 
If you are interested in being the lead author on a Collection^2 for the Collabra: Psychology Nexus, you can contact Chris or me, or keep an eye out for the official call for proposals coming soon. 
Third, if you do not want to lead a project, consider being a contributing lab to a Collection^2 for the Collabra: Psychology Nexus on crowdsourced research. Remember, these Collections^2 will have an In-Principle Acceptance, so studies that are successfully executed will be published. Being a contributor would be ideal for projects that are on a strict timeline (e.g., an honors thesis, first-year graduate student projects, etc.). Keep an eye out for announcements and help pass the word along. 

*There is the issue of author order, where fewer authors get to be first authors. However, when there are several authors on a manuscript, the emphasis is rightly placed on the effect rather than on the individual(s) who produced it.

**The general idea of Collections^2 has been referred to as “crowdsourced research projects,” as we did above, or elsewhere as “concurrent replications” (https://rolfzwaan.blogspot.com/2017/05/concurrent-replication.html). We like the term Collections^2 because “crowdsourced research projects” describes a more general class of research that does not necessarily require multi-site data collection efforts. We also believe the name “concurrent replications” may imply that this is a method only used in replication attempts of previously-published effects. Also, the name “concurrent replication” may imply that all researchers use the same variable operationalizations across sites. Although concurrent replications can be several operational replications of a previously-published effect, they are not inherently operational replications of previously-published effects. Thus, we believe that Collections^2 are more specific than “crowdsourced research projects” and more flexible than what may be implied by the name “concurrent replication.”  


