Some organizations are just lucky. They have a huge, Niagara-like flow of applicants knocking at their front door.
They hire teams of people to scan paper resumes. But any experienced recruiter knows resumes contain a considerable amount of fiction.
Bad applicants often have good resumes, and good applicants often have bad ones. I’ve even heard of clever applicants seeding their electronic resumes with key words printed in background font colors to hide them.
Is there any solution to effective applicant screening?
Yes, several in fact. For now, though, we’ll concentrate on a few characteristics of an effective screen. They include a realistic job preview, job-related items, and items that predict performance (remember, at this stage the main task is to screen out blatantly unqualified applicants).
Providing a realistic job preview means exposing the applicant to what a day-on-the-job would look like. It describes things like work ethic, company values, interaction with other employees, working hours, typical problems encountered, and so forth.
A realistic job preview can cut applicant flow by about 10 percent. Why? Applicants generally don’t know much about a specific job, so it’s often helpful to let them know the hard facts ahead of time.
A job preview, however, should not be confused with promoting the job. If the applicant has to share a desk with a troll in the boiler room, then be honest. If he or she has a slim-to-none chance of being promoted, say so. Otherwise, exactly when do you want new employees to learn their co-workers are failed experiments from a secret government genetics lab?
Job-related items means the questions should look and sound legitimate. Don’t ask applicants to describe what their favorite animal smells like (correct answer = “green”), why they want the job (to earn money, duh!), or some other non-job-related question (correct answer = “What the !@#$@?”).
Smart applicants can fake you out and dumb ones might just get lucky. Anything less than job-related items makes the HR department look arrogant, foolish, and incompetent. Besides, almost everyone knows many HR departments seldom take the time or effort to determine whether a question predicts performance or not.
Stay serious. Look competent. Be professional.
This is where junk science crawls out of the woodwork.
Suppose I survey all my employees, compile a list of characteristics, and use that list to develop screening questions. Good idea, or bad? Unfortunately … a bad one.
For one thing, every employee in that group has already been pre-screened and is good enough to stay employed; surveying them is like collecting data about physical coordination only from professional athletes. Employees have a “restricted range” of skills.
Second, because the differences between a good and a bad employee are very small, it’s hard to determine which skill(s) make a difference and which do not. Third, when you compile a group average, you lose all detail about individuals within the group (after all, you are hiring individuals, not groups).
Fourth, un-proctored employee responses are usually part opinion and part fact. Finally, although you might see statistical differences between employee groups, these differences probably represent chance findings that don’t have any connection with performance. You know, like correlating ice-cream sales at the beach with shark attacks.
Using this kind of data to make hiring decisions is simply a non-starter because although it looks scientific, it is actually based on junk criteria.
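The restricted-range problem above can be made concrete with a small simulation. This is an illustrative sketch, not data from any real organization: we invent an applicant pool where a skill genuinely drives performance, then measure the skill–performance correlation only among the “hired” top slice, the way an employee survey would.

```python
import numpy as np

# Hypothetical illustration of range restriction (all numbers are made up).
rng = np.random.default_rng(42)

n = 10_000
skill = rng.normal(0, 1, n)                         # underlying skill
performance = 0.6 * skill + rng.normal(0, 0.8, n)   # skill drives performance

# Correlation in the full applicant pool
r_full = np.corrcoef(skill, performance)[0, 1]

# Restrict to the top 20% on skill -- people who survived screening
hired = skill > np.quantile(skill, 0.8)
r_restricted = np.corrcoef(skill[hired], performance[hired])[0, 1]

print(f"full pool r = {r_full:.2f}")         # strong relationship
print(f"employees-only r = {r_restricted:.2f}")  # noticeably weaker
```

The same skill that clearly separates applicants looks almost irrelevant when you study only incumbents, which is exactly why employee-survey-based screening questions mislead.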
Starting at the beginning
I covered this in another article, but the KEY to great hiring is understanding you are not hiring results — you are hiring employees with skills that bring results.
You don’t need me to tell you that results and skills are often two different things. The applicant either has the skills or not.
Why is this not done more often? Measuring skills takes special training and considerable practice. Any recruiting process that cannot separate skills from results will never deliver great results. So how does this work in practice?
Cluster jobs with similar competencies together into families. If this sounds complicated, it’s because you first need to master the art of identifying, then separating, measurable skills from results.
Skills are what employees bring to work — results are what they leave behind. Even the largest organization has only about 12-15 mainstream job families.
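One simple way to sketch the clustering step is to compare each job’s competency list using overlap (Jaccard) similarity and greedily merge similar jobs into families. The job titles, competencies, and threshold below are my own illustrative assumptions, not anything from the article.

```python
# Hedged sketch: group jobs into families by shared competencies.
# All job names and competency lists are hypothetical examples.
jobs = {
    "recruiter":     {"interviewing", "negotiation", "communication"},
    "hr_generalist": {"interviewing", "communication", "policy"},
    "sales_rep":     {"negotiation", "communication", "prospecting"},
    "payroll_clerk": {"accounting", "data_entry", "policy"},
}

def jaccard(a, b):
    """Overlap between two competency sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

THRESHOLD = 0.4  # assumed cutoff for "similar enough"
families = []    # each entry: [member_names, combined_competencies]
for name, skills in jobs.items():
    for family in families:
        if jaccard(skills, family[1]) >= THRESHOLD:
            family[0].append(name)   # join an existing family
            family[1] |= skills      # pool the competencies
            break
    else:
        families.append([[name], set(skills)])

for members, skills in families:
    print(members, sorted(skills))
```

With these toy inputs the people-facing jobs collapse into one family and the clerical job stands alone, mirroring the claim that even large organizations end up with only a dozen or so mainstream families.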
Smart application blanks
Once you know which employee skills make the difference between success and failure, you can build an application form with a series of items and questions. Some of these could be behavioral (i.e., similar to behavioral event interviews); some could be biographical; and others could be skills-based.
But that’s just the beginning. You must first do some homework to verify answers really do predict job performance. In other words, the items must be validated.
Validation can take several forms, but the objective is always to correlate items with on-the-job performance. Some people think this can be done using regular statistics, but that is not always possible. Regular statistics require the data points to be normally distributed (i.e., to follow a bell curve).
And, sometimes, even after you do fancy statistical conversions, the data could still function like an on-off switch instead of a light dimmer. It’s not a pretty sight. So, you have to graduate to using artificial intelligence algorithms (and you thought hiring was a slam-dunk).
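Before reaching for AI, one distribution-free option worth knowing is a rank correlation. The sketch below validates a single application item against later performance ratings using Spearman’s rho, which makes no bell-curve assumption; all scores shown are invented illustration data, not validation results from any real study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: one item's answer (1-5) per past hire, plus a later
# on-the-job performance rating. Both columns are made up for illustration.
item_score  = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 5])
performance = np.array([2.1, 2.0, 3.2, 2.8, 3.5, 3.9, 4.1, 3.8, 4.5, 4.7])

# Spearman's rank correlation compares orderings, not raw values, so it
# tolerates skewed or switch-like (non-normal) score distributions.
rho, p_value = spearmanr(item_score, performance)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
```

A strong, statistically significant rho would justify keeping the item on the blank; a weak one means the question belongs in the junk pile no matter how sensible it sounds.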
Artificial intelligence algorithms find patterns in data. They do not care how the data are shaped, or whether the data fall into pretty bell curves. Large organizations use AI to mine data for information, plan logistics, make medical diagnoses, trade stocks, and even drive game programs. But, as I wrote in an earlier article, even AI is subject to garbage-in-garbage-out problems. That’s why every smart application blank needs so much front-end work.
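To see why pattern-finding algorithms shrug off the bell-curve requirement, here is a minimal sketch using a decision tree (one of the simplest such algorithms, and my choice for illustration, not the author’s system). The screening data are fabricated so that one item behaves like an on-off switch, the very shape that trips up regular statistics.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Made-up screening data: [years_experience, item_A_score, item_B_score]
X = rng.integers(0, 6, size=(200, 3)).astype(float)

# Fake "successful hire" rule: item_A acts like an on-off switch at 3+.
# This threshold pattern is non-normal and non-linear by construction.
y = (X[:, 1] >= 3).astype(int)

# A shallow tree recovers the switch without any distributional assumptions.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

The tree simply learns “item_A ≥ 3,” something a linear correlation on raw scores would only partially capture, which is the practical advantage the article alludes to.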
The end game
Nothing will predict your next superstar with certainty, but smart application blanks will reduce front-end screening time by calculating a probability of success (e.g., by comparing applicant answers to known success patterns).
Using better-quality data reduces recruiter error, allowing recruiters to work first with applicants who have the highest probability of success. In this way, an organization can spend its time more efficiently, minimize front-end screening expenses, move quickly to engage the best candidates, and significantly reduce on-boarding time.
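The end game described above can be sketched end to end: fit a model on historical validated-item data, then rank new applicants by predicted probability of success so recruiters start at the top of the list. Everything here (the logistic-regression choice, the weights, the data) is an assumed illustration, not the article’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical history: 500 past hires, 4 validated item scores each,
# and whether the hire ultimately succeeded (all simulated).
X_hist = rng.normal(size=(500, 4))
true_weights = np.array([1.2, 0.8, 0.0, 0.4])  # assumed, for simulation only
y_hist = (X_hist @ true_weights + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X_hist, y_hist)

# Five new applicants: score each and rank best-first.
X_new = rng.normal(size=(5, 4))
p_success = model.predict_proba(X_new)[:, 1]
ranking = np.argsort(p_success)[::-1]  # highest probability first
for i in ranking:
    print(f"applicant {i}: P(success) = {p_success[i]:.2f}")
```

The output is exactly the artifact the article promises: a worklist ordered by probability of success, so recruiter time goes to the most promising applicants first.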