Editorial

What Makes a Quality Journal?

Published Online: https://doi.org/10.1027/1618-3169/a000426

Psychology is in turmoil. For several years now, the academic field has seen a heated discussion, or even debate, about the quality of psychological research. It started with the sensational uncovering of blatant scientific fraud (Levelt Committee, Noort Committee, & Drenth Committee, 2012) and enhanced scrutiny of implausible study findings (Bem, 2011); it continued with systematic studies showing that a substantial proportion of psychological findings are not reproducible (Open Science Collaboration, 2015), with discussions of “questionable research practices” (John, Loewenstein, & Prelec, 2012), and with discussions of incorrect uses of statistical methods (Simmons, Nelson, & Simonsohn, 2011). Most importantly, it initiated a continuing transformation towards “better” psychological science with improved research methodologies, more transparency, and more reflection on the use of statistical methods (e.g., Asendorpf et al., 2013). It should also be noted that the problems described were in no way unique to, or distinctive of, psychology or its subfields; they continue to challenge all behavioral, if not all empirical, sciences. If anything, psychology as a science has seized the opportunity to become a role model for the behavioral sciences by seriously tackling these issues and improving the transparency, reproducibility, and quality of research findings.

An important factor in the quality assurance of psychological research is the publication process. The best quality control scientific journals have to offer is peer review. While peer review is of course not without its flaws, there is no better alternative than letting the most knowledgeable experts on this planet evaluate a research paper. Yet, it is also fair to say that peer review and editorial practices contributed much to the current “replication crisis” in psychology. With increasing competition for media attention, editors and reviewers became more inclined to accept novel and sensational study findings over less spectacular ones. Replication studies and null findings were considered boring; findings had to be new and original to be considered by the highest-ranking journals of psychology.

Furthermore, academic publishing was increasingly recognized as a profitable business model. With the digitization of the publication process, a new online journal could be set up by literally anyone with a few mouse clicks and without the infrastructure or competence of a big publishing house. The consequence was a proliferation of digital journals that promised fast publication for a fee – typically without quality assessment in the form of a rigorous peer review (for a list of warning signs, see Beall, 2013). Journals that publish articles for a monetary fee without quality control have been dubbed “predatory journals.” In 2018, a consortium of investigative journalists analyzed 175,000 scientific articles published by five of the world’s largest pseudo-scientific platforms (Alecci, 2018). It found that since 2013, 400,000 scientists worldwide had published articles on these platforms; in Germany alone, more than 5,000 scientists had published in predatory journals. It is becoming increasingly clear to the public that this publication model is undermining trust in science in general – and must be stopped.

If scientific articles can be published in outlets without quality control, how can one recognize a high-quality journal? Unfortunately, there is no simple answer to this question. Some believe that scientometrics, such as the “impact factor” (IF) of a journal, are a useful proxy for its quality. The journal IF is the average number of times articles published in the journal in the preceding 2 years were cited in a given year (Garfield, 2006). The IF of the journal Experimental Psychology was 1.20 for the year 2017. Does this number indicate that it is a quality journal? Used as a singular measure of a journal’s quality: not necessarily. It is well documented that the IF is influenced by many factors that are unrelated to the scientific quality of a journal’s articles (for overviews see, for example, Brembs, Button, & Munafò, 2013; Seglen, 1997; Vanclay, 2012), such as technicalities (e.g., selection of database, article types and type of discipline, language bias, a highly volatile number of articles); strategic manipulations (e.g., citation misconduct, IF inflation); and conceptual limitations (e.g., unequal distribution of citations, the Matthew effect). Post-publication citation indices such as the journal IF can be useful for comparing journals within a specific (sub)discipline. However, they must be flanked by additional checks and criteria for a fair assessment of a journal’s quality.
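To make the definition above concrete, the IF can be written as a simple ratio (a schematic rendering of Garfield’s definition, using 2017 as an example; the exact counting rules depend on the citation database):

\[
\mathrm{IF}_{2017} \;=\; \frac{\text{citations received in 2017 by items published in 2015--2016}}{\text{number of citable items published in 2015--2016}}
\]

An IF of 1.20 thus means that, on average, each article published in the journal in 2015 and 2016 was cited 1.20 times in 2017.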

What could these additional checks and criteria be? In the following, we describe a few principles or standards that, in our opinion, can help to recognize the quality of a journal. Most importantly for this editorial, we explain how well Experimental Psychology fares against each standard, so that readers can form their own opinion about whether this journal is a good one. We also lay out which measures we, as the incoming editors, plan to implement or continue in order to meet the high standards of a quality journal.

Quality Standards

Standard 1: A Good Journal Has a Specialty

A good journal publishes relevant and cutting-edge research on a particular topic. In the optimal case, it epitomizes the best research done in the field. The journal Experimental Psychology has its specialty in experimental research. To quote from the journal’s homepage (http://www.hogrefe.com/j/exppsy): “Experimental Psychology publishes innovative, original, high-quality experimental research. The scope of the journal is defined by experimental methodology and thus papers based on experiments from all areas of psychology are welcome.”

Why should a journal specialize in experimental research? First, it should be noted that experimentation is the “gold standard” of scientific knowledge seeking. Experiments provide insight into cause and effect by systematically investigating which outcome occurs when a particular factor or variable is manipulated. The design of experimental research should be guided by the max-con-min principle: maximize the systematic variance of the experimental variables under scrutiny; control systematic error variance (or “bias”) induced by confounding variables; and minimize random error variance induced by random variables. In an ideal experiment, all relevant variables apart from the manipulated ones are controlled; the degree to which an experiment approaches this ideal is what sets convincing experiments apart from less convincing ones. A strong experiment gives great confidence in the inference of a causal relationship among variables (the so-called “internal validity”). In addition, it can arbitrate between competing models and theories by falsifying rival hypotheses. It is thus not surprising that an important criterion for publication is that the experiment makes a substantial contribution to a theoretical research question. A theory is the starting point of experimental research and its end point. It is the starting point because it allows the derivation of hypotheses that are empirically tested with the experiment. It is the end point because the experiment is used to evaluate and to correct the theory. It is this reciprocity with theory updating that gives experimental research a deep meaning.
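In the same spirit, the max-con-min principle can be summarized schematically as a decomposition of the total variance of the dependent measure (our illustrative notation, not part of the principle’s original formulation):

\[
\sigma^2_{\text{total}} \;=\; \underbrace{\sigma^2_{\text{treatment}}}_{\text{maximize}} \;+\; \underbrace{\sigma^2_{\text{confound}}}_{\text{control}} \;+\; \underbrace{\sigma^2_{\text{error}}}_{\text{minimize}}
\]

A convincing experiment is one in which the treatment component dominates: confounding variance is eliminated by holding variables constant or by randomization, and error variance is reduced through standardized procedures and a sufficient number of observations.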

What does this mean for Experimental Psychology? We plan to highlight the importance of theories for experimental research by regularly publishing theoretical articles. In a Theoretical Article, an empirically grounded theory or theoretical idea is presented succinctly and intelligibly to a broad audience. The editorial team will regularly invite distinguished researchers to write such articles; however, original theoretical papers can also be submitted unsolicited to Experimental Psychology. As a matter of fact, we would love to see many more theoretical papers published in Experimental Psychology in the future, and we invite all researchers to send us a theoretical paper for consideration.

Standard 2: A Good Journal Has Rigorous Peer Review

The most basic obligation of a scientific journal with respect to quality control is to perform peer review. The editors of Experimental Psychology are instructed to ask (typically) two independent experts in the field of study for their opinions. Referees are given evaluation criteria for their assessment and are asked to turn in their reviews within 21 days. Good organization of the peer review process is central to the quality of the publication process. Manuscript files are managed electronically by a manuscript portal that facilitates smooth, rapid communication between editors, reviewers, and authors. Even more importantly, we encourage our journal editors to reach a basic decision about the suitability or non-suitability of a manuscript after the first round of reviews. In most cases, a journal editor can assess the suitability of a manuscript fairly accurately once the authors have responded to the first reviews. Therefore, an additional round of revisions is invited at this stage only when relatively minor revisions are required for publication. With this policy, we aim to limit the number of lengthy review rounds and to give authors quick feedback on the status of their submission. Furthermore, not all manuscripts are sent out for peer review. Submissions that do not fit the scope of the journal (see above) and/or have technical issues are rejected immediately by the editors after consultation with editorial assistants. About one-third of the submissions in the last year were desk-rejected; in these cases, authors received detailed feedback on the reasons within a few days.

Running a scientific journal requires competence from many parties: journal editors, editorial assistants, technical staff, reviewers, and submitting authors. Journal editors must be competent in the handling of manuscripts because the peer review system is not without flaws. Reviews can be biased, inconsistent, and sometimes even abusive. Accordingly, this process demands particular attention and arbitration by the handling editor. At Experimental Psychology, we are in the fortunate situation of having an internationally renowned board of Associate Editors with strong research backgrounds in different areas of experimental psychology (for a full list, see the journal homepage). In addition, we have a large board of consulting editors who can help out when a particular submission does not fit an editor’s expertise. First-hand research expertise is necessary for the fair and professional handling of research papers, which often requires delicately balancing and weighing reviewers’ arguments and concerns and formulating recommendations that address them. Needless to say, journal editors are in a position of power here that must be used wisely and responsibly.

While we are proud of our excellent board of journal editors, we also know that even they can make errors from time to time. Therefore, authors can contact the Editors-in-Chief directly when they have the impression that their submission was treated unfairly. We promise to handle all incoming requests and complaints confidentially and with great respect. In our own editorial conduct, we feel bound by the “Code of Conduct and Best Practice Guidelines for Journal Editors” published by the Committee on Publication Ethics (COPE; https://publicationethics.org/). The guidelines explicitly give authors the right to appeal editorial decisions. In addition, we will publish corrections, clarifications, retractions, and apologies when needed. With these measures, we aim to create a climate of trust and mutual respect that makes the publishing process a satisfying experience for all involved.

Standard 3: A Good Journal Is Transparent

Research involves many decisions, and transparency about these decisions in publications is vital to every science. The journal Experimental Psychology was one of the first signatories of the Transparency and Openness Promotion (TOP) guidelines published in 2015 (Nosek et al., 2015). These guidelines require that the raw data underlying the main findings reported in an article be made available to the public before publication (“open data”). Publication of the raw data is mandatory, but exceptions are possible when authors have concerns about ethics, the security of personal data, or intellectual property. In such cases, the exceptional circumstances should be stated clearly in the cover letter to the editors. In the past, Hogrefe Publishing provided its own data repository at https://econtent.hogrefe.com/, where authors could deposit their raw data for public access.

Starting with our term as Editors-in-Chief, we will discontinue this service for hosting research data. The main reason is that files deposited in the journal’s data archive are not referenced by a persistent identifier, such as a Digital Object Identifier (DOI). Persistent identifiers are important because they ensure future access to unique published digital objects, such as a text or a dataset. Fortunately, public research data repositories that provide a DOI for uploads and are available free of charge now abound (for a list, see http://www.re3data.org). We ask authors to use one of these repositories for future submissions. We also encourage authors to deposit research materials (e.g., stimulus material) that are needed to reproduce the published experiment. In addition, we intend to introduce badges (provided courtesy of the Open Science Framework) in published articles that signal to readers which contents were made available to the public.

Another move towards more transparency is the removal of paywalls and other barriers that block the dissemination of academic research articles. For publications in Experimental Psychology, authors can choose between publication in the traditional, subscription-based model (“pay to read”) and publication with immediate open access to everyone without a paywall (“Hogrefe Open Mind”). For an open-access publication, authors are required to pay a one-time article fee; the article is then made available online to anyone, anywhere in the world, at any time. We, the journal editors, believe that the current hybrid open-access model is a temporary solution during the transition towards full open access. As a first move in this direction, Hogrefe has generously agreed that one article per journal issue will be published open access (EiC’s pick), that is, free of charge for the author(s) of the article. In the previous journal issue (4, 2018), the article “Self-serving bias in memories” by Zhang, Phan, Li, and Guo (2018) was made openly accessible to the public. For this issue, we selected the article “Implicit Association Test as an analogical learning task” by Ian Hussey and Jan De Houwer (2018).

Standard 4: A Good Journal Honors the Value of Reproducible Data

As mentioned above, the value of replications has been overlooked in past years – journals were more interested in publishing flashy findings than in valuing replications of established findings. Yet, a truly independent and direct (not conceptual) replication ensures that a particular effect is reproducible (Erdfelder & Ulrich, 2018), thereby adding to the importance of the effect. Therefore, a good journal has to publish methodologically sound replication studies regardless of their results.

An important tool for confirmatory research is the preregistration of study plans. Experimental Psychology was at the forefront of psychological journals when it introduced the Registered Report as a new article type in 2013. A Registered Report is a preregistered study plan detailing the theoretical background, empirical hypotheses, methods, and data-analytic strategies for a planned but not yet conducted experiment. The study plan is evaluated by scientific peers and, importantly, an editorial decision on acceptance is made before the results of the experiment are known. A central advantage of preregistration is that it eliminates hypothesizing after the results are known (HARKing; Kerr, 1998) and the withholding of negative results from publication (Ioannidis, 2005). In addition, the Registered Report format is particularly effective for preregistering replication studies that are conducted to assess the reproducibility of important study findings. Experimental Psychology acknowledges the importance of close replication attempts and encourages researchers to use the Registered Report format for this purpose.

Standard 5: A Good Journal Is Author-Friendly

While initiatives calling for more transparency and scrutiny during the publication process are very welcome, the implementation of concrete practices often comes at the cost of increased bureaucracy for authors and other parties. To counter this tendency towards bureaucratization, we will regularly check the internal workflow and the submission guidelines for redundant requirements and unnecessary obligations. For example, while the requirement to provide the raw data during manuscript submission was in line with our standard for transparency, our internal check revealed that only a negligible fraction of reviewers actually accessed the raw data during review. In effect, the submission guidelines forced authors to make raw data available to reviewers who rarely used them. Therefore, we have now removed the requirement of a data deposit at submission; however, the manuscript still has to contain a permanent URL pointing to the raw data before it can be accepted for publication. In short: Authors must deposit the raw data in a public repository, but they can now do so at a later point after submission. We believe that this policy is a good compromise between our demand for less bureaucracy and the justified call for “open data.”

Furthermore, authors must be able to trust that their submission is processed quickly and responsibly by the journal office. In 2017, the editorial team needed 69 days on average from manuscript submission to first decision. While this value is acceptable, we aim to reduce it further through additional optimization of internal work procedures and by installing additional checks and reporting tools. It also helps that the Editor-in-Chief position is now filled by two people. Our ambitious goal for the next year is an average time of below 50 days from manuscript submission to first decision.

The Editorial Board

When new editors take over, it is always a time of transition and change for a journal. First, we want to thank the previous Editor-in-Chief, Christoph Stahl, and his team of assistants, Frederick Aust and Marius Barth, for their dedicated and excellent service to Experimental Psychology. Christoph took over the editorship from Thorsten Meiser in 2013. His achievements include (but are not limited to) assembling an international board of editors, streamlining the editorial workflow, bringing the journal to the forefront of the open science movement, and, most notably, introducing the Registered Report article format (e.g., Bell, Röer, Marsh, Storch, & Buchner, 2017; Ernst, Hoekstra, Wagenmakers, Gelman, & van Ravenzwaaij, 2018; Teige-Mocigemba, Becker, Sherman, Reichardt, & Klauer, 2017). Experimental Psychology will continue to benefit from Christoph’s expertise, as he has generously agreed to stay on the board of consulting editors. In addition, Adele Diederich, Magda Osman, and Chris Donkin have decided to step down from the board after having served as editors for several years. We would like to thank them for their long-standing dedication and excellent work for this journal!

We are happy and proud that the outgoing editors are succeeded by excellent new editors who will augment the expertise of the editorial board in influential subfields of experimental psychology. Matthias Wieser is Professor of Clinical and Biological Psychology at Erasmus University Rotterdam, Alexander Schütz is Professor of Experimental Psychology at the University of Marburg, and Jörg Rieskamp is Professor and Head of the Center for Economic Psychology at the University of Basel. We would like to thank these colleagues for their willingness to serve as associate editors in the coming years. Our gratitude of course also goes to the other associate editors – Tom Beckers (KU Leuven, Belgium), Arndt Bröder (University of Mannheim, Germany), Gesine Dreisbach (University of Regensburg, Germany), Manuel Perea (University of Valencia, Spain), James Schmidt (Université de Bourgogne Franche-Comté, France), Samuel Shaki (Ariel University Center, Israel), and Sarah Teige-Mocigemba (University of Marburg, Germany) – who have decided to stay on the board.

Call for Special Issues

Experimental Psychology invites submission of proposals for thematic special issues on a wide range of topics in experimental psychology, particularly those focusing on timely or emergent research areas. Consistent with the journal’s priorities, articles must meet our primary criteria, namely the rigorous use of experimental methodology and/or a strong and innovative theoretical contribution to experimental psychology as a basic science. A special issue typically comprises a review of the special issue topic as well as empirical research papers or articles on methodological innovations (see, for example, Wiegmann & Osman, 2017). A target article might also be published together with one or more invited comments. Proposals can be submitted at any time (for details see https://www.hogrefe.com/j/exppsy).

Coda

Is Experimental Psychology a quality journal? Our short answer is a resounding “Yes!” The quality of Experimental Psychology is evident not only in hard scientometric numbers but also in an evaluation against soft, qualitative criteria demanding a specialized, peer-reviewed, transparent, and author-friendly journal. Furthermore, the international journal Experimental Psychology has now existed for over 15 years; at its foundation in 2002, it was built on almost 50 years of tradition of its predecessor, the Zeitschrift für Experimentelle Psychologie (formerly Zeitschrift für Experimentelle und Angewandte Psychologie). This long tradition is the result of hard work, professionalism, and excellence. We invite our fellow scientists to become part of the journal’s history by sending us their best papers!

We thank Anand Krishna for providing comments on an earlier version of the article.

Andreas B. Eder, Department of Psychology, Universität Würzburg, Röntgenring 10, 97070 Würzburg, Germany

Christian Frings, Department of Psychology, Universität Trier, Universitätsring 15, 54286 Trier, Germany