Editorial

The Field of Psychological Assessment: Where It Stands and Where It’s Going – A Personal Analysis of Foci, Gaps, and Implications for EJPA

Published Online: https://doi.org/10.1027/1015-5759/a000412

An Initial Overview

About 5 years ago, I submitted a paper to EJPA for the very first time, one that I had coauthored with a colleague. I scanned the types of articles that EJPA usually published at that time, and I honestly wasn’t sure whether our article’s content area and assessment focus would fit the journal’s scope. Now, as the new editor of EJPA, I am faced with the very same question on a somewhat different level: Which articles will be interesting to the community and to the journal, and which will not? And further, where do potential gaps in the field of assessment exist, and how can these gaps be filled?

To obtain a broad overview, I ran a crude and admittedly not very comprehensive search in the Web of Science, combining the search terms psychological and assessment with various other terms (acknowledging that different databases might yield different results; Bakkalbasi, Bauer, Glover, & Wang, 2006). To allow for a comparison of developments over time, I ran the search first for the years 2014–2016 and then for the years 2008–2010. The results are displayed in Figure 1.

Figure 1 Number of publications in the field of psychological assessment and its subdisciplines. Shaded bars represent absolute frequencies (left y-axis), and unshaded bars represent relative frequencies (right y-axis). The search was conducted on January 8, 2017, in the Web of Science. It combined the search terms “psychological” and “assessment” with “clinic*” for clinical assessment, “education*” for educational assessment, “industrial organizational” for I/O, “personality” for personality, “methodolog*” for methodology, and “cogniti*” for cognitive. The relative frequencies do not sum to 100% because not all of the overall hits were matched with one of the subcategories, and the subcategories were not mutually exclusive.
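For readers who wish to reproduce or extend this overview, the following minimal sketch (in Python) assembles the term pairings from the figure caption into Web of Science advanced-search strings. It is an illustration only: the editorial does not report the exact query format that was used, so the TS= (topic) and PY= (publication year) field tags and the overall query structure are assumptions about one plausible way to express such a search.

# Minimal sketch: build one Web of Science advanced-search string per
# subdiscipline term and time period described in Figure 1. The TS= and PY=
# field tags are assumptions; the editorial does not specify the exact syntax.

SUBDISCIPLINE_TERMS = {
    "clinical": "clinic*",
    "educational": "education*",
    "I/O": "industrial organizational",
    "personality": "personality",
    "methodology": "methodolog*",
    "cognitive": "cogniti*",
}

PERIODS = ("2008-2010", "2014-2016")

def build_query(term: str, period: str) -> str:
    """Pair the two core terms with one subdiscipline term and a time span."""
    return f'TS=("psychological" AND "assessment" AND "{term}") AND PY=({period})'

# Print one ready-to-paste query per subdiscipline and period.
for period in PERIODS:
    for label, term in SUBDISCIPLINE_TERMS.items():
        print(f"{label:>12} {period}: {build_query(term, period)}")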

Looking at the numbers, I took two main messages from them. First, the field of psychological assessment is on the rise. The number of hits for 2014–2016 (1,411) had increased by roughly 45% compared with 2008–2010 (976), as can be seen by comparing the left and right sides of Figure 1. Second, the focus on assessment in the fields of clinical, cognitive, and educational psychology is strong (with a slight increase in the number of hits for educational assessment), followed by articles on methodological topics in assessment and on personality assessment (but note that in the search, none of these categories were mutually exclusive). There were surprisingly few hits when I searched for industrial and organizational assessment. This is most likely a problem with the particular search terms I used rather than a reflection of a genuinely small number of publications in this area, but it might also indicate that this area could be more strongly represented in psychological assessment journals. Finally, the relative frequencies for the different fields remained virtually unchanged between the two time periods, as can be seen by comparing the pattern of the unshaded bars on the left for 2014–2016 with the ones on the right for 2008–2010.
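As a quick check on this growth figure, using only the two counts reported above:

(1,411 - 976) / 976 = 435 / 976 ≈ 0.446,

that is, an increase of roughly 45% between the two three-year windows.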

After running this search and taking a look at what has been published in EJPA in the past, I felt that, overall, the journal comprehensively reflects the diversity of the field. The core mission of the journal is to advance psychological assessment across content disciplines, and this is what it will continue to strive for in the coming years. Stated differently, submissions from all psychological content areas are very welcome as long as their focus is on advancements in the field of assessment. Had I known this when submitting my first paper 5 years ago, I would have been a bit more confident about sending it to EJPA, but in my letter to the editor, I might also have stressed more explicitly how the article matched the journal’s focus on assessment.

Tradition and Innovation

We live in a world of constant change, and technology has significantly altered the way our society functions. The field of psychological assessment is not immune to these influences. Innovative assessment instruments that employ computer-based simulations, that use behavioral and process-related data to improve the assessment process (cf. stealth assessment; Shute & Ventura, 2013), or that extend the range of constructs existing instruments can capture are only a few examples of what this area might contribute to the field. For instance, in the OECD’s large-scale educational assessment program, the Programme for International Student Assessment (PISA), computer-administered science tasks (including simulations and interactive formats) are employed to allow for a more realistic and diverse set of science literacy tasks (e.g., OECD, 2016).

Paper-and-pencil instruments such as classical tests of intelligence and personality questionnaires have been, currently are, and will remain the backbone of psychological assessment and the science surrounding it. In fact, almost all submissions to EJPA fall into this category (for two recent examples, see Schult, Fischer, & Hell, 2016; Smits, Timmerman, Barelds, & Meijer, 2015). In addition to these kinds of papers, the journal explicitly invites submissions that target innovative assessment approaches, whether they involve technological devices such as computers and tablets or some other sort of innovation.

Of course, it is my personal choice to explicitly mention innovative assessment and computer-based testing as areas of major development that are welcomed by the journal. However, this is not meant as a shift in focus but rather as an extension of the existing focus. In fact, I was surprised to see how weakly innovative assessment methods and instruments were represented when I conducted my literature search. As an add-on analysis, I combined the two major search terms (psychological and assessment) with one of the following terms: 21st century skills, stealth, computer based, computer assisted, and tablet. The overall hit rate for all of them was less than 1% (and this included even my own first submission to EJPA). With this low level of representation in mind, my vision for the field is that the coming years will bring advances that use new technologies to complement assessment theory and practice and, in turn, lead to more valid assessment procedures in both research and applied settings.
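To put the figure of less than 1% into perspective – assuming it refers to the 1,411 hits for 2014–2016 shown in Figure 1, which the search setup implies but the text does not state outright – the corresponding ceiling is

0.01 × 1,411 ≈ 14,

that is, fewer than about 14 articles on these innovation-related topics across three years.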

Implications for Authors and for the Publication Process

Irrespective of content, the mission of EJPA (and, arguably, of the field of psychological assessment as a whole) rests on three distinct cornerstones: a firm connection to psychological science, a focus on assessment and the substantial advancement of knowledge in the field, and a commitment to the highest levels of quality and transparency in the empirical aspects of the contributions.

Strong ties to psychological science: Saying that any assessment instrument needs to be grounded in psychological theory might be viewed by some as a statement of the obvious, but it is surprising how often this simple and yet fundamental prerequisite is not fulfilled. Too often, an assessment instrument’s name or label is mistaken for its actual content, while it remains unclear how the items actually map onto the underlying theoretical definition. There will be many cases in which not all of the relevant aspects of a theory can be adequately represented by a set of tasks (and the more complex the target construct, the more prevalent this issue is likely to be). In these cases, a clear distinction between the theoretical framework and the assessment framework might help to clarify where the assessment instrument contains blind spots – even deliberate ones – with regard to construct coverage (cf. Michie, Johnston, Francis, Hardeman, & Eccles, 2008, for an interesting analysis of how to connect theory with intervention). Unless psychological theory gives an assessment instrument precise meaning, developing and validating such an instrument remains essentially an empty exercise.

Focus on assessment and advancement of knowledge: The number of submissions to EJPA has consistently been on the rise, mirroring the increase in the number of articles published in the field of psychological assessment (Figure 1). This increase in submissions, in turn, means that selection criteria need to be applied, and the amount of new knowledge an article generates from an assessment perspective is an important criterion. That is, submissions that mainly target substantive research questions with only a secondary focus on assessment will find a better fit with content-focused journals, of which there are many excellent ones. Obviously, evaluating the amount of new knowledge created by an article is to some extent subjective, but authors need only peruse the journal to find a plethora of good examples of articles that generate considerable new knowledge. That said, articles that target specific research questions can be relevant to EJPA as well: brief reports were introduced a couple of years ago precisely to provide a dedicated format for such questions. In all, EJPA offers a variety of distinct formats (i.e., original articles, brief reports, and multistudy reports). As communicated in previous editorials, some topics are usually of little interest to EJPA, such as papers that are primarily methodological in nature or papers that offer only translations of existing instruments (Ziegler & Bensch, 2013).

Quality and transparency: EJPA is dedicated to upholding scientific standards of the highest possible quality and to a process of rigorous peer review. We ensure these aspects by attracting highly committed associate editors, an experienced board of consulting editors, and hand-picked external reviewers. At the same time, we acknowledge that transparency is becoming increasingly important – perhaps particularly in psychological science – and that the final manuscript as the “end product” is sometimes not sufficient for communicating a complete understanding of what was done. In psychological assessment, it is probably the actual empirical analyses and how they were implemented that are key to the findings. With this in mind, authors will now have the option to submit their code and results (i.e., the inputs and outputs from statistical software packages such as Mplus, R, SAS, SPSS, and so forth) along with their manuscripts. This information will then be passed on to the external reviewers for further inspection. As an additional measure, when an article is accepted for publication in EJPA, authors will henceforth be required to submit both their inputs and outputs along with a brief description as Electronic Supplementary Material (ESM), and this information will be published along with the final article.

With this first editorial, my goal was to cover some general developments in the field of psychological assessment as well as aspects that are fundamental for publishing in EJPA. There are probably few surprises with regard to what is deemed important, but sometimes it can be helpful to explicate the implicit. In fact, thinking back to my first submission, the article underwent several rounds of revision before it was finally accepted. Had I been aware of all the points mentioned in this editorial, it might have spared me (and the reviewers, for that matter) at least one round of revision.

It is with this hope and spirit that this editorial was written: to provide some general information that is broad and yet relevant and that might help guide authors’ decisions about whether EJPA is a good outlet for their research – or, stated differently, to help them determine how they can maximize their chances of success at EJPA. As in the past, future editorials will continue to address diverse topics such as assessment-relevant methodological aspects, journal-related policy information, and thoughts and opinions with respect to the field of psychological assessment.

References

  • Bakkalbasi, N., Bauer, K., Glover, J., & Wang, L. (2006). Three options for citation tracking: Google Scholar, Scopus, and Web of Science. Biomedical Digital Libraries, 3, 7. doi: 10.1186/1742-5581-3-7

  • Michie, S., Johnston, M., Francis, J., Hardeman, W., & Eccles, M. (2008). From theory to intervention: Mapping theoretically derived behavioural determinants to behaviour change techniques. Applied Psychology, 57, 660–680. doi: 10.1111/j.1464-0597.2008.00341.x

  • OECD. (2016). PISA 2015 assessment and analytical framework: Science, reading, mathematic and financial literacy. Paris, France: OECD Publishing. doi: 10.1787/9789264255425-en

  • Schult, J., Fischer, F. T., & Hell, B. (2016). Tests of scholastic aptitude cover reasoning facets sufficiently. European Journal of Psychological Assessment, 32, 215–219. doi: 10.1027/1015-5759/a000247

  • Shute, V. J., & Ventura, M. (2013). Measuring and supporting learning in games: Stealth assessment. Cambridge, MA: The MIT Press.

  • Smits, I. A. M., Timmerman, M. E., Barelds, D. P. H., & Meijer, R. R. (2015). The Dutch Symptom Checklist-90-Revised: Is the use of the subscales justified? European Journal of Psychological Assessment, 31, 263–271. doi: 10.1027/1015-5759/a000233

  • Ziegler, M., & Bensch, D. (2013). Lost in translation: Thoughts regarding the translation of existing psychological measures into other languages. European Journal of Psychological Assessment, 29, 81–83. doi: 10.1027/1015-5759/a000167

Samuel Greiff, Cognitive Science & Assessment, University of Luxembourg, 11, Porte des Sciences, 4366 Esch-sur-Alzette, Luxembourg