Editorial

EJPA Introduces Registered Reports as New Submission Format

Published Online: https://doi.org/10.1027/1015-5759/a000492

What Is a Registered Report?

One ambition among leading scientific journals, and this obviously includes the European Journal of Psychological Assessment (EJPA), is to integrate contemporary developments and important advances in science. Registered Reports (RRs) are a relatively new manuscript submission format that helps to eliminate questionable research practices and aligns scientific values with scientific practice by completing the peer review process before the results are known.

RRs work something like this: A group of researchers formulates a hypothesis and writes a detailed introduction and method for a planned study. Before commencing the study, the researchers submit the introduction and method to a journal – in this case EJPA – for peer review. The journal editors send the manuscript to reviewers who provide feedback on the proposed but not yet conducted study. The authors might then be given the opportunity to revise their introduction and method for further consideration. Manuscripts that meet good scientific standards are accepted for publication before the study has been conducted. Provided the study is then completed to the exact specifications described, the article is published regardless of the statistical significance of the findings.

Why Are RRs Important for Science?

In the current scientific climate, novel and positive results are considered more publishable than replication studies and negative results. This creates incentives for researchers to avoid replication studies and to engage in research practices that inflate the likelihood of false-positive results (Type I errors). RRs help to protect against questionable research practices such as p-hacking (selectively reporting data and analyses; Simmons, Nelson, & Simonsohn, 2011) and HARKing (hypothesizing after the results are known; Kerr, 1998). P-hacking in particular has been described as a major threat to the validity of all empirical research that relies on hypothesis testing (Nelson, Simmons, & Simonsohn, 2018) and can raise false-positive rates from 5% to well over 60% (Simmons et al., 2011). Preregistration is considered the best method to protect against p-hacking and the only way for authors to convincingly demonstrate to their audience that their analyses were not p-hacked (Nelson et al., 2018).
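To illustrate the scale of the problem, the following is a minimal simulation, not taken from Simmons et al. (2011) but in the spirit of their demonstration, in which the null hypothesis is true and a researcher exploits just two degrees of freedom: testing two correlated dependent variables and topping up the sample when the first look is non-significant. All names and parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2011)  # arbitrary seed for reproducibility

def one_study(p_hack, n=20, n_extra=10, alpha=0.05):
    """Simulate one two-group study in which the null hypothesis is true."""
    # Two correlated dependent variables (r = .5), identical across groups.
    cov = [[1.0, 0.5], [0.5, 1.0]]
    a = rng.multivariate_normal([0.0, 0.0], cov, size=n + n_extra)
    b = rng.multivariate_normal([0.0, 0.0], cov, size=n + n_extra)

    def any_significant(rows):
        return any(stats.ttest_ind(a[:rows, j], b[:rows, j]).pvalue < alpha
                   for j in range(2))

    if not p_hack:
        # Preregistered analysis: one DV, fixed n, a single test.
        return stats.ttest_ind(a[:n, 0], b[:n, 0]).pvalue < alpha
    # Degree of freedom 1: report whichever of the two DVs "works".
    # Degree of freedom 2: if neither works, add participants and re-test.
    return any_significant(n) or any_significant(n + n_extra)

studies = 5_000
for label, hack in [("preregistered", False), ("p-hacked", True)]:
    rate = np.mean([one_study(hack) for _ in range(studies)])
    # Expected: ~.05 preregistered, noticeably higher when p-hacked;
    # stacking more degrees of freedom inflates the rate further still.
    print(f"{label:>13} false-positive rate: {rate:.3f}")
```

Even these two modest degrees of freedom push the false-positive rate well above the nominal 5%; Simmons et al. (2011) showed that combining several such choices can push it beyond 60%.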

RRs also help to protect authors from questionable review practices that contribute to publication bias (i.e., publication decisions informed by results). That is, when the results of a study are known, the evaluation of study quality might be influenced by preexisting beliefs. In the RR format, the study design is the only criterion available for critique, meaning that reviewers' and journal editors' only motivation can be to ensure that the research design provides a fair test of the study hypotheses. Moreover, safe in the knowledge that their work has been provisionally accepted for publication, the authors can then perform the research without the burden that the results might eventually determine the article's publication. Readers can also be more confident that the research is reproducible, given that the study was peer-reviewed independent of its results (Nosek & Lakens, 2014).

Despite the obvious appeal of RRs for science, one concern that is often raised about this format is that it might hinder data exploration. This is not the case. Once the data are available, authors are encouraged to explore the dataset and, if they wish, to report additional exploratory analyses. Preregistration simply communicates to readers which analyses are exploratory and which are confirmatory. This is important because common statistical tests are valid only for confirmatory analyses (Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012). For clearly labeled exploratory analyses, readers know to interpret significance values more cautiously. Another concern about the RR format is that it might not apply to specific fields or sub-disciplines. However, the format is appropriate for any hypothesis-driven science that suffers from publication bias, p-hacking, HARKing, low statistical power, or a lack of replication studies (Chambers, Feredoes, Muthukumaraswamy, & Etchells, 2014), including psychological assessment and, thus, the kind of work of interest to EJPA.

Why Are They Important for the Field of Assessment?

Psychological assessment research typically involves hypothesis testing and is therefore susceptible to the same questionable research practices that plague experimental research. However, psychological assessment research differs somewhat from standard experimental studies in that there will not always be a dichotomous acceptance or rejection of hypotheses. Rather, the quality of an assessment tool could be excellent, good, acceptable, rather poor, or unacceptable. Note that this does not pose a problem for the RR format. To provide an example, a researcher might hypothesize that the factor structure of a commonly used questionnaire is not valid in a particular population (null hypothesis) and outline a factor-analytic study to test this hypothesis. In the RR format, the researcher simply needs to state beforehand the criteria that will be used to formulate conclusions. Table 1 provides some initial example criteria that researchers might use to test their hypotheses (but cf. Greiff & Heene, 2017, for some comments on model fit). If multiple criteria are used, the researcher might state in advance how they will be combined, for example, that two or more indices meeting their cut-offs will determine the final conclusion on scale merit.

Table 1 Example cut-off values for a priori hypothesis testing in psychological assessment
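To make this concrete, the sketch below shows how such a preregistered decision rule could be applied mechanically once model fit has been estimated. The index names, cut-off values, and the two-indices rule are common conventions used for illustration only; they are not the values from Table 1 and are not mandated by EJPA.

```python
# Hypothetical preregistered decision rule (illustrative cut-offs only).
PREREGISTERED_CUTOFFS = {
    "CFI":   lambda v: v >= 0.95,   # comparative fit index
    "TLI":   lambda v: v >= 0.95,   # Tucker-Lewis index
    "RMSEA": lambda v: v <= 0.06,   # root mean square error of approximation
    "SRMR":  lambda v: v <= 0.08,   # standardized root mean square residual
}

def evaluate_fit(fit_indices, cutoffs=PREREGISTERED_CUTOFFS, required=2):
    """Judge the model acceptable if at least `required` indices meet their
    preregistered cut-offs (mirroring the 'two or more indices' rule above)."""
    passed = [name for name, meets in cutoffs.items()
              if name in fit_indices and meets(fit_indices[name])]
    return len(passed) >= required, passed

# Fit indices as any CFA software might report them (made-up numbers).
acceptable, passed = evaluate_fit(
    {"CFI": 0.957, "TLI": 0.941, "RMSEA": 0.051, "SRMR": 0.046})
print(acceptable, passed)  # True ['CFI', 'RMSEA', 'SRMR']
```

Because the rule is fixed before the data are seen, readers can verify that the final verdict on the scale follows from the preregistered criteria rather than from post hoc judgment.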

The field of psychological assessment is also susceptible to reviewer bias. To provide an example of this type of bias, consider a failed validation study of a well-established scale with unexpected patterns of results. Such a study might be difficult for its authors to publish in a standard format because of the negative results. After several failed attempts at publication, the study, which contains important information, is moved into the file drawer and never sees the light of day. RRs mitigate this problem. In fact, this is one of the core reasons why we consider RRs a valuable addition to the submission formats already available at EJPA, and we welcome RR submissions that test the validity and reliability of widely used assessment tools in psychological science.

What Are the Specific Implications for Authors?1

For authors interested in RRs at EJPA, the submission guidelines are available on the journal’s website (i.e., Instructions to Authors, https://www.hogrefe.com/j/ejpa). To briefly review the guidelines here, the review process takes place in two stages. At Stage 1, the manuscript is submitted through the online submission system and must contain an introductory section that provides background and the specific hypotheses to be tested. Importantly, Stage 1 submissions must also contain a method section that provides a detailed description of the data analysis procedures that will allow for direct replication. To ensure that the results are informative regardless of outcome (i.e., keeping both Type-I and Type-II error probabilities low), high statistical power or precision will be required and should be addressed at this stage. Rules for data elimination (e.g., plans to remove outliers) must also be specified a priori.
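As an aside on the power requirement: one straightforward way to address it in a Stage 1 method section is a simulation-based power analysis. The sketch below is a hypothetical example rather than anything EJPA prescribes; the assumed effect size (d = 0.3) and the 90% power target are placeholders that authors would justify from prior literature.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)  # arbitrary seed

def simulated_power(n_per_group, effect_size=0.3, alpha=0.05, sims=5_000):
    """Estimate power by simulation: the proportion of simulated two-group
    studies with a true effect of `effect_size` SDs that reach p < alpha."""
    hits = 0
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(effect_size, 1.0, n_per_group)
        hits += stats.ttest_ind(a, b).pvalue < alpha
    return hits / sims

# Find the smallest per-group n (on a coarse grid) that reaches the
# illustrative 90% power target for the assumed effect size.
for n in range(50, 401, 25):
    power = simulated_power(n)
    if power >= 0.90:
        print(f"n = {n} per group gives estimated power {power:.2f}")
        break
```

Reporting such a justification in the Stage 1 submission lets reviewers judge whether a null result would be informative before any data are collected.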

Editorial decisions about in-principle acceptance are based on the (peer-reviewed and, if needed, revised) research proposal and are therefore outcome-independent. The Stage 1 submission should also include an anticipated timeline for the research and, accordingly, the anticipated date of the Stage 2 submission. RRs can be submitted in all three EJPA formats (i.e., brief reports, original articles, and multi-study reports). The word limit applies to the Stage 2 submission, so authors might aim to complete the Stage 1 submission in about half to two thirds of the available word count.

If the RR is accepted at Stage 1 (possibly after some rounds of revision), the accepted protocol must be registered by the authors in a recognized repository (either publicly or under embargo until Stage 2) and the research must be conducted according to the protocol. The Stage 2 submission should contain the same introduction and method sections as the Stage 1 submission, plus the new results and discussion sections; this is where the general EJPA word limits apply. Additional post hoc analyses can be included but must be clearly distinguishable from the registered analyses.

All Stage 2 submissions will be evaluated with regard to their adherence to the accepted Stage 1 protocol. In fact, the in-principle acceptance at Stage 1 guarantees the publication of some version of the manuscript, provided that the study and analyses are conducted as proposed. However, Stage 2 submissions may be subject to one or more rounds of revisions in order to ensure that the results and discussion sections provide adequate detail.

Is There More to Come?

EJPA is the first assessment journal to introduce RR submissions. The decision to offer authors the opportunity to submit RRs is part of an overarching strategy to increase transparency and accountability. For about one year, authors have been asked to provide their data, analysis code, and outputs in electronic format once their paper is accepted (Greiff, 2017). RRs are the next logical step toward improving research transparency and accountability. In addition, EJPA's editorial team is currently working to integrate further aspects of open science; stay tuned for updates in the next editorials. We hope that authors will make ample use of this new format at EJPA.

1 The following paragraphs are based on EJPA's submission guidelines for Registered Reports, which in turn have been adapted from other Hogrefe journals.

References

  • Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. (2014). Instead of "playing the game" it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1, 4–17. https://doi.org/10.3934/Neuroscience.2014.1.4

  • Greiff, S. (2017). The field of psychological assessment. Where it stands and where it's going. A personal analysis of foci, gaps, and implications for EJPA. European Journal of Psychological Assessment, 33, 1–4. https://doi.org/10.1027/1015-5759/a000412

  • Greiff, S., & Heene, M. (2017). Why psychological assessment needs to start worrying about model fit. European Journal of Psychological Assessment, 33, 313–317. https://doi.org/10.1027/1015-5759/a000450

  • Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196–217. https://doi.org/10.1207/s15327957pspr0203_4

  • Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology's renaissance. Annual Review of Psychology, 69, 511–534. https://doi.org/10.1146/annurev-psych-122216-011836

  • Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45, 137–141. https://doi.org/10.1027/1864-9335/a000192

  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. https://doi.org/10.1177/0956797611417632

  • Wagenmakers, E. J., Wetzels, R., Borsboom, D., van der Maas, H. L., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 632–638. https://doi.org/10.1177/1745691612463078

Samuel Greiff, Institute of Cognitive Science and Assessment (COSA), University of Luxembourg, 11, Porte des Sciences, 4366 Esch sur Alzette, Luxembourg
Mark S. Allen, School of Psychology, University of Wollongong, Northfields Avenue, Wollongong NSW 2522, Australia