Reference no: EM133723005
1. Read the article "Registered Reports in Child Development: Introduction to the Special Section".
2. Think. Pay close attention to the "problems" that Registered Reports are designed to solve. Think about these problems from the perspective of a caregiver, clinician, or policymaker. What seem like the two biggest problems that non-scientists would care about, and do you think they would be convinced that "Registered Reports" solve these problems (or at least make these problems less severe)?
3. Write. Write a one-page, single-spaced memo that summarizes the thinking you did in Step 2.
Your memo should communicate - without jargon or specialized language - two problems that Registered Reports are designed to solve, and your critique(s) about whether Registered Reports do in fact solve them (be sure to convey why or why not).
Read this article:
Over the last 10 years, it has become clear that business as usual will not suffice to build a reliable developmental science. Numerous solutions have been proposed, including preregistering studies, increasing sample sizes, and improving our statistical inferences.
This Special Section on Child Development highlights a proposed reform, the adoption of Registered Reports, which is a novel, results-independent publishing format. RRs incentivize scholars to invest in strong, informative studies, in contrast to current criteria, which incentivize researchers to produce novel, statistically significant results.
This article introduces the Special Section on Registered Reports in Child Development; RRs are now a standard publishing option at the journal, effective immediately.
There are many questions about the suitability of the format, so we describe it, explain how we conceptualized and executed the Special Section, and provide recommendations for how developmental researchers can optimally use the RR format in their own work.
Broad awareness of the problems in modal research practice came about through the "replication crisis," which was sparked by a series of high-profile failures to replicate past findings in social and cognitive psychology. The issues involved, however, go far beyond replication and extend significantly beyond those particular subfields.
The discovery of fraudulent research conducted by Diederik Stapel raised questions about the rate of fraud in psychological research, and about whether the research record is as reliable as we presume.
Bishop (2019) introduced the framework of the "four horsemen" of the replication crisis, consisting of p-hacking, low statistical power, HARKing (hypothesizing after the results are known), and publication bias.
Publication bias incentivizes researchers to p-hack underpowered studies and then write the articles as though the results generated were those anticipated to begin with.
Understanding publication bias is crucial for understanding other problematic research practices, since scientific journals traditionally base their publication decisions on the nature of the results reported in a manuscript. This selection approach is problematic in two ways. First, journals favor statistically significant results over null results. Second, even within the limited category of statistically significant results, journals are biased toward particular kinds of results; this bias can be explicit or implicit, and can stem from the perceived strength of existing knowledge or from prevailing sociocultural beliefs.
Journals send a clear signal to researchers about the type of work that is valuable and fit for publication, which can lead to authors choosing to relegate some findings to the "file drawer".
RRs were developed as an alternative publication format meant to correct problematic practices in traditional publishing.
With RRs, authors submit the first half of the manuscript before collecting their data and/or conducting the analysis. If the manuscript is accepted, the journal commits to publishing the final, completed paper irrespective of the results of the study.
Authors submit a completed report to the journal as part of the Stage 2 review process. The peer review process is focused on whether the authors conducted the study as they indicated they would at Stage 1.
RRs reduce the motivation for researchers to engage in questionable research practices, such as p-hacking, HARKing, or data fabrication, and decrease the rampant publication bias in favor of positive results.
The launch of RRs was accompanied by a series of criticisms and concerns, but over the past 10 years researchers have demonstrated the versatility of RRs, applying the format to secondary data, longitudinal designs, systematic reviews, qualitative methods, mixed methods, and many others.
The RR format was once viewed as a niche format for hypothesis-driven experimental research, but has proven itself to be a broadly applicable publishing model that has the potential to greatly reshape our scientific knowledge.
We conceived of this Special Section in September 2019 because very few developmental journals offered RRs. Now, nearly 4 years later, many developmental journals offer the format, and we hope that our approach and rationale will be helpful to other journals that have not yet introduced the format.
We wanted the resultant publications to look like any other article published in this journal, in terms of substantive focus, methodological approach, and the degree to which the article makes a substantial contribution to the field. To showcase the broad applicability of RRs, we wanted articles to represent the full substantive breadth of research published in developmental science. We used an initial Letter of Intent (LOI) process to invite 22 proposals for submission of a Stage 1 manuscript.
We used an additional selection criterion when reviewing the LOIs, asking ourselves how the RR format would add value to each study. We selected proposals that showcased different contexts of application for RRs, such as resolving a controversial question or adding rigor to an analytic method.
The fourth distinctive approach to the Special Section is how we organized the editorial work: the guest editors also served as the action editors. It is often noted that authors have little knowledge of or experience with RRs; less often discussed is that the same goes for editors. We served as consultants for existing Associate Editors as they processed manuscripts, learning from one another as we discussed how best to give guidance.
This Special Section consists of eight articles that represent the breadth and depth of developmental research in substantive topic, methodological approach, and target population. They include experimental studies with young children and chimpanzees, assessments of the efficacy of interventions with children and adolescents, and national public data.
The eight articles collectively met the goal of demonstrating the broad applicability of RRs for developmental research, and all have important and thought-provoking findings, whether those findings support the researchers' hypotheses or not.
RRs are not rigid, and they allow for discovery during the research process. For example, Engelmann et al. (2023) added a fifth preregistered experiment to their Stage 2 manuscript to address questions unaddressed by the other four experiments.
RRs can be used for studies relying on preexisting data, and the Special Section includes two articles with preexisting data of quite different types. The key principle for using preexisting data for RRs is transparency; researchers must be clear about their extent of prior knowledge of the data.
The Special Section articles highlight some use cases for RRs that are particularly beneficial. They include assessing the efficacy of interventions and preventing primary outcome switching.
The Special Section includes two articles that feature primary tests of the efficacy of interventions. The results indicate that the two interventions lead to post-test gains over the control condition for some outcomes, but do not differ from one another.
Ceccon et al. (2023) adapted an ethnic/racial identity intervention developed in the U.S. for the Italian context. Some results were consistent with the previous test of the intervention, and some were not.
Cipriano et al. (2023) examined the efficacy of social-emotional learning interventions via meta-analysis, but the results were less trustworthy than what is currently observed in the literature.
A second beneficial use case for RRs is the ability to provide a fair test of a contemporary debate. Stengelin et al. (2023) tested whether puppets are valid research proxies by using puppets in different contexts and capacities, and found that they may be reasonable proxies for some capacities but not others.
Cimpian et al.'s article takes on the important question of mischievous reporting in national surveys, and shows how RRs can be put to good use beyond the confines of a hypothesis-testing framework.
The final observation on the content of the Special Section is perhaps the most important, as it is related to the core function of RRs: to reduce bias in the published literature. All of the articles included null results for primary tests.
Fish et al.'s (2023) article found no evidence for gender differences in how competitive and collaborative math learning games were related to math performance, and the one gender difference they did observe indicated that boys underperformed after playing competitive games.
As we reviewed LOIs, a number of key themes emerged, including the need for authors to identify a clear controversy or gap that their work fills, and to provide a compelling rationale for the importance of the results.
Projects that fit well with the RR format are designed to provide useful and interpretable findings across multiple patterns of observed data.
Strong proposals articulated a strong sampling plan including a justification of their planned sample size. This justification could take the form of a traditional frequentist power analysis, or it could be via precision analysis or sequential testing methods.
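To make the traditional frequentist option concrete, here is a minimal sketch of an a priori power analysis for a two-group comparison, using the standard normal approximation to the two-sample t-test (the function name and defaults are illustrative, not from the article; the approximation slightly underestimates the exact t-test requirement):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided, two-sample t-test.

    effect_size is Cohen's d (standardized mean difference). Uses the
    normal-approximation formula n = 2 * ((z_alpha/2 + z_power) / d)^2.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for a two-sided test
    z_power = z.inv_cdf(power)           # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return ceil(n)

# A "medium" effect (d = 0.5) needs about 63 participants per group at 80% power;
# a "small" effect (d = 0.2) needs about 393 per group.
print(n_per_group(0.5))  # 63
print(n_per_group(0.2))  # 393
```

A Stage 1 sampling plan would pair a calculation like this with a justified choice of the smallest effect size of interest, rather than defaulting to conventional benchmarks.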
Through this exercise, it became clear that both investigators and journals will have to adapt to the opportunities offered by RRs, and that editorial guidelines will have to help authors understand expectations for the RR format.
We invited 22 Stage 1 manuscripts, of which five were rejected during review and 17 received in-principle acceptance (IPA). Nine of these manuscripts were not completed in time for the Special Section.
The extended review period of registered reports (RRs) poses a number of challenges for both researchers and journals, including making it difficult to estimate when a Stage 1 report should optimally be submitted, and guaranteeing that a manuscript is seen by the same editorial staff.
Research projects always proceed on an uncertain timeline, but RRs make the publication process more transparent, so the field can see the broader landscape of proposed studies and their results, whatever those results turn out to be.
The previous Editor-in-Chief wanted to encourage the generation of more precise and consistent estimates of the effects of interest to developmental scientists and policymakers, and to scaffold the development of theoretical models that are falsifiable. Registered Reports are a key tool in moving developmental science towards these goals.