
Skill Testing Validity

How do you ensure your test method is valid?

  • Any test that directly mimics what a person will do on the job can be considered “validated.”
  • Tests of personality and soft skills are a riskier prospect even when they are “validated,” because the validation they carry is often not the kind required for EEOC compliance.
  • Positive candidate experience and perceived fairness are two of the primary reasons why skill testing is an effective and expedient hiring practice.

Introduction


The value of testing candidates prior to hiring them is that it provides an expedient way of assessing their fitness for the job. The decisions made based on a test, however, can be no better than the information the test provides, which makes it extremely important that the test accurately measure the constructs it claims to assess. This is difficult because the constructs being assessed, such as knowledge, skills, attitudes, and cognitive processes, cannot be observed directly and must be inferred. To that end, there are many statistical measures used to assess the degree to which tests are reliable (consistent) and valid (accurate).

There are many types of psychometric validity, and it is a rare test (if one even exists) that satisfies every type. Looking specifically at tests for job fit, a few types of validity are particularly relevant, not just to ensure that the hire is a good one, but to ensure compliance with EEOC regulations.

Types of Validity


Face Validity: Does it look like the test is assessing what it claims to measure?


Face validity is the most basic form of validity; it requires a general consensus that, in order to do well on the test, someone would need to exhibit the constructs the test claims to assess. For instance, a set of math problems used to assess arithmetic ability has more face validity than a set of word problems, because the latter assess a combination of arithmetic skill and reading comprehension (meaning the test measures arithmetic in a specific context, but not in general). For some types of skill tests, this is the main type of validity available, and sometimes the only one that can be obtained when a test is first created.

Generally, these sorts of skill tests require job candidates to engage in tasks that could theoretically be completed by anyone who can do the job. As long as the tasks mirror activities that the individual will need to complete in order to do the job properly, the questions have face validity and are EEOC compliant. That said, a company should always take care not to have a “disproportionate impact” on a particular demographic group, as this can be an indicator that something is wrong with the test. For instance, if only 75% of members of Demographic A pass the test while 93% of other demographics do, the test may still be EEOC compliant, but an investigation is a must.
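
One common way to quantify “disproportionate impact” is the four-fifths (80%) rule from the EEOC’s Uniform Guidelines: each group’s pass rate should be at least 80% of the highest group’s pass rate. The sketch below applies that rule to the hypothetical pass rates above; the numbers are illustrative, and a real review would go well beyond this single check.

def impact_ratio(group_pass_rate, reference_pass_rate):
    """Ratio of a group's pass rate to the highest group's pass rate."""
    return group_pass_rate / reference_pass_rate

# Hypothetical pass rates from the example above.
demographic_a = 0.75
other_groups = 0.93

ratio = impact_ratio(demographic_a, other_groups)
print(f"Impact ratio: {ratio:.2f}")  # about 0.81

# Under the four-fifths guideline, a ratio below 0.80 is typically treated as
# evidence of adverse impact. Here the ratio clears the line, but only barely,
# which is why an investigation is still warranted.
if ratio < 0.80:
    print("Potential adverse impact: investigate and revise the test.")
else:
    print("Above the four-fifths threshold, but a gap this large still merits review.")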

Content Validity: Does the test cover the full range of the construct that it is supposed to measure?


Content validity refers to whether a given test covers a representative sample of the full range of the construct it is measuring. For job tests, this is the type of validity that deserves the most attention. In addition to ensuring that the questions appear to be about topics related to the position (face validity), the set of questions, taken as a group, needs to cover a broad enough range of tasks that the answers paint a clear picture of whether the individual can handle the collection of tasks the job requires.

As long as the set of questions provides a representative sample of the tasks the candidate will need to do on the job, and someone who can do the job properly can definitely provide solid answers to the questions, the questionnaire has content validity and is EEOC compliant. (Again, it is important to be aware of the possibility of disproportionate impact, and to be prepared to alter the task[s] if you are seeing a particular demographic scoring lower than another.)

Construct Validity: Does the test actually measure the theory-based construct that it claims to measure?


In hiring, construct validity has the greatest bearing on tests that do not directly assess knowledge and skills that will be used on the job. Rather, it is about determining whether, say, a test of attention to detail (an example of a “soft skill”) or extraversion (personality trait) actually demonstrates that the person has the indicated characteristic. When HR people ask if a test is “validated” or has “psychometric validity,” this is usually what they are talking about. But, when it comes to tests of skills to be used directly on the job, construct validity is far less relevant compared to face and content validity.

Tests of general cognitive ability and/or personality, however, absolutely need to have construct validity to be EEOC compliant, because they are only indirectly related to successfully completing the actual tasks of the job. For a personality or “soft skill” test to be valid, it must assess a theoretical construct whose existence can be defended through a review of the scientific literature, and statistics must show that the test questions do indeed work together to describe a coherent construct.
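
To make that last point concrete: one statistic commonly used to show that a set of questions “works together” is an internal-consistency measure such as Cronbach’s alpha (one piece of a construct-validity case, not the whole of it). The following is a minimal sketch with invented item scores.

import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: rows = respondents, columns = test items."""
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 6 candidates to 4 items on a 1-5 scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
# Values around 0.7 or higher are conventionally read as adequate internal
# consistency, though alpha alone does not establish construct validity.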

Predictive/External Validity: Does the information we learn from test performance predict/apply to performance in other situations?


Predictive/External validity is about whether the test predicts performance in areas that have not been directly tested. Even if a test has the above types of validity, there remains a difference between performing tasks in the context-free setting of a test and performing them in the context of the job. If a company is going to use a test, it is always helpful to prove that the test actually predicts high performance on the job and/or good fit with the company or team. For a test of skills that will directly be used on the job, predictive validity is very helpful, but technically not required, for EEOC compliance. For any other type of test, however, this evidence is crucial, and the company has to run the numbers within its own organization.

One of the most important points to recognize is that predictive validity does not transfer between companies. If a study proves that Test A predicts high performance at a Big 10 consulting firm, that does not help a Big 4 accounting firm at all (especially if they are having to defend themselves against an EEOC-related suit). What good is a personality test (for example) if you cannot prove that it distinguishes between high and low performers in your company? 

Unfortunately, most tests cannot do such a thing, in part because most people who run studies of predictive validity do so only to prove that a particular combination of scores works; what they fail to do is prove that it is the best or only combination that works (this can be done, but it is very difficult and labor-intensive). In turn, this means they haven’t actually proven that other score profiles should cause people to be rejected, which is risky and can cause companies to miss out on fantastic talent.

Moreover, a given company can prove that a test has predictive/external validity only after using it for a while and measuring on-the-job performance, which means that the company cannot initially assume that these indirect measures of job success are valid.
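
As a rough illustration of what “running the numbers” within your own company can look like once on-the-job data exists, the sketch below correlates hiring-test scores with later performance ratings. All figures are invented, and a real predictive-validity study would involve far larger samples and proper statistical testing.

import numpy as np

# Hypothetical hiring-test scores and later manager performance ratings
# for the same eight employees.
test_scores = np.array([62, 71, 80, 85, 90, 55, 78, 88])
performance = np.array([3.1, 3.4, 3.9, 4.2, 4.4, 2.8, 3.6, 4.5])

r = np.corrcoef(test_scores, performance)[0, 1]
print(f"Correlation between test score and later performance: r = {r:.2f}")

# A meaningful positive correlation, computed on your own employees, is the
# kind of evidence needed to defend an indirect test; a study run at another
# company does not transfer.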

Conclusion


One of the primary values of using skill-based testing in a hiring process is that there is a clear correspondence between the items on the tests and the tasks that will be performed on the job. As such, skill tests have both face validity and content validity, which makes them EEOC compliant. In addition, these tests expediently show which candidates are nominally capable of doing the job before employers spend time assessing a candidate’s application materials. By contrast, indirect tests of future job performance, like personality tests and assessments of general cognitive abilities and/or “soft skills,” require extensive assessment to confirm construct and predictive/external validity, which makes them a riskier and less expedient move.

In all cases, however, it is imperative that employers keep an eye out for demographic disparities in performance and launch an immediate investigation if a significant one is found. At Vervoe, we are constantly monitoring our tests to ensure that candidates are taking a fair test that assesses the skills they will use to get the job done well. As an added benefit, many companies find that using fair tests also yields a more diverse slate of finalists, one that is more representative of the wide range of talent that exists in the world.

Though personality and cognitive assessments are very hard to validate for the hiring process, they can be helpful in onboarding and coaching new hires in a way that is tailored to their uniqueness, which can give them a smooth and powered-up start. But, what matters most of all is the candidate experience. An enjoyable candidate experience, combined with a perception that the process is fair (along with the test actually being fair!), is the best way to ensure that candidates will not only show their capabilities to the company, but have a positive view of the company regardless of the outcome of the hiring process.