Hiring Problems AI Could Solve But Probably Won't

If you make a living identifying human potential or recruiting talent, or are simply interested in hiring the right people for the right roles, there are good reasons to be enthusiastic about the rise of artificial intelligence (AI) as a recruitment tool.

Anywhere in the world - and at any given point in time - labor markets are inefficient, with disengaged and underperforming employees in jobs that are a poor fit for their abilities, interests, and personalities; critical roles that remain vacant for a long time despite considerable investment in attracting and finding suitable candidates; and people with real talent and potential who struggle to find work.

Although such inefficiencies are partly structural, they are also the product of organizations' limited understanding of human potential, or at least their inability to translate their understanding into effective hiring practices.

Here's where AI could help: by looking at a wider range of signals - including deeper signals, which escape even trained human observers and traditional talent tools - it may reveal the hidden connections between a person's background and their career potential, identify the fundamental "grammar" of talent, and ultimately upgrade the quality of our hiring decisions, making the job market less inefficient. However, a prerequisite is that we first address five big hiring problems that we shouldn't really expect AI to solve:

  1. Predicting performance: You can predict only what you measure. Since most organizations have limited data on employees' actual job performance, there is not much AI can do to improve the accuracy of their predictions. This is a real shame, since, as Ajay Agrawal and colleagues argued in a brilliant recent book, the essence of AI is cheaper - more scalable and efficient - prediction. As a result, organizations' application of AI is limited to predicting human judgments: "what would a human do in this situation?" While the data required to answer this question is easy to get, this is how AI can end up emulating or even amplifying human biases. For example, it is far easier to predict the degree to which candidates will be liked during a job interview than how well they will actually perform on the job: the former simply requires interviewer ratings; the latter requires connecting candidates' behaviors to their future performance in the role (a simple illustration of this labeling problem appears after this list). By the same token, training AI to predict whether an employee will be rated positively by their boss once on the job, or whether they will get promoted, is rather different from predicting a candidate's actual performance or contribution to a team, unit, or organization. To be sure, this problem is not new to AI, but unless we resolve it, AI will inherit it. In fact, AI's ability to enhance the efficiency and scale of prediction may exacerbate the old hiring problem of lacking objective indicators of job performance to validate our selection methods. Under such circumstances, our hiring protocols may convey the illusion of accuracy even in the presence of bias. For instance, a person may be hired based on job-irrelevant attributes (e.g., gender, age, attractiveness, and ethnicity), but such discrimination will be masked if the interviewer and subsequent manager share the same biases, or are the same person. To make matters worse, these unfair selection criteria may actually influence clients' perceptions of the candidates: attractiveness increases not only candidates' ratings in job interviews, but also managers' ratings of their job performance and clients' perceptions of their competence and trustworthiness. All this means that AI can accurately predict candidates' success while also perpetuating bias and unfairness: neglecting the variables that should matter but don't, and focusing on the variables that do matter but shouldn't.
  2. Assessing potential: Even if AI improved our ability to predict performance, this would generally be limited to situations where the future is fairly consistent with the past. The old precept in industrial-organizational psychology is that "past behavior is a good predictor of future behavior", so long as the context doesn't change. That is, people are fairly consistent and predictable, but when it comes to predicting something a person has never done before, past behavior alone is of limited value. As Ajay Agrawal and colleagues note in Prediction Machines: "AI cannot predict what a human would do if that human has never faced a similar situation." For instance, organizations interested in promoting people from individual contributor to manager roles - or from manager to leader roles - will inevitably focus on candidates' past performance to decide on their promotability. However, there is a big difference between being a good performer when you are an individual contributor - when your task is to follow orders, solve relatively well-defined problems, and manage mostly yourself - and being a manager or leader of others. This is why so many employees don't perform well when promoted to the next level (into roles they are neither able nor willing to do). Training AI models on past performance data - even if, contrary to what I indicated in point 1, that data were actually available - will not solve this issue. What will, then? Accepting that some of the best potential managers or leaders in the organization may have been average, or even poor, performers as individual contributors. That is, not making past performance a prerequisite for being selected into a different type of role. Instead, organizations should focus on the known ingredients of managerial or leadership potential, such as expertise, people skills, integrity, and self-awareness (with or without AI).
  3. Understanding potential: Predicting performance is critical for assessing potential, because potential is essentially a person's probability of performing well in the future. And if you can't predict something, you shouldn't really attempt to explain it - unless you are a sports pundit or political analyst, which apparently gives you the authority to provide a perfectly rational explanation for something you just failed to predict. However, just because we can assess potential doesn't mean we understand it. Without a verifiable and refutable theory, data alone has fairly limited value. A black-box AI model may effectively predict future behaviors without necessarily providing much insight into the "why" of the relationship. For instance, there is a difference between linking certain physical properties of candidates' speech or nonverbal communication during an interview to their future job performance, and actually having a plausible and defensible explanation for such linkages (e.g., they are signals of EQ, self-presentation, or curiosity). The digital age has enabled us to collect a much wider range of signals, and AI has advanced our ability to translate that data into predictions, but ideally we also want to explain the nature of any prediction underlying a hiring decision. This is where science is critical, for science is data + theory. It is only when we truly understand the causes of future performance that we will be able to improve our hiring practices - prediction alone is not enough.
  4. Breaking our love affair with intuition: Even if the three previous problems were solved, that doesn't mean AI will fix our hiring mistakes. Why? Because there will always be other data points and decision-making criteria that are unaffected by, and independent from, AI. This is particularly relevant if AI is used in conjunction with human judgment, rather than as a substitute for it. The biggest proof of this is that we have seen 100 years of solid science overruled by intuition in everyday hiring practices and decisions. There is a gap between the methods that work and the ones hiring managers love to rely on. The problem is not a lack of evidence on what works and what doesn't, or a shortage of predictive tools or methods, but that people prefer to play it by ear, assuming they are great judges of character when in fact they are not. "I know talent when I see it," "this person is a great culture fit," or "what a charismatic guy" are all everyday illustrations of instinctive judgments that will likely eclipse any data and hard facts. In a data-driven world, the MBTI would not be the number one assessment tool, the unstructured interview would not be the preferred selection method, and the main criterion for deciding whether something worked would not be face validity or gut feeling, but hard facts, including data that can expose the errors of our intuitive decisions and hold us (and our methods) accountable. Paradoxically, the biggest utility of AI will come from making predictions that conflict with human predictions, calling our intuition (and biases) into question. But it is in those exact situations that we will probably ignore AI and go with our instincts instead. By the same token, when AI and our instincts align, we will probably use AI to justify a decision we were going to make anyway.
  5. Killing the politics of selection: As if our love affair with intuition weren't enough, our subjective and data-free decisions are not entirely random - they are influenced, if not co-opted, by our personal agendas and the wider politics that contaminate the vast majority of hiring decisions. Just imagine a world in which AI has substantially enhanced our ability to predict performance and to assess and explain potential, to the point that we are willing to ignore our intuition and trust the machines. That would still leave us with one big hurdle: the political implications of making a choice that is somehow disadvantageous to our own career (or the temptation to make a choice that is better for us than for the organization). For instance, what if hiring a superstar exposes our own limitations? What if hiring candidate X puts our own job at risk (because they are clearly willing and able to take it in a few years)? What if hiring candidate Y will annoy our manager? At times the politics may have less to do with our own individual interests than with the wider political context of the organization: for instance, you may follow a decidedly strategic and data-driven approach to hiring a disruptive leader to bring much-needed change to the organization, and - for the same reasons that make that candidate perfect in theory - the organization will react adversely to the appointment and hinder their chances of success. However, the alternative - hiring someone who is a good fit and perpetuates the status quo - would surely not be the solution. The bigger point here is that even the best tools can be deployed detrimentally in the absence of strong ethics or in the presence of harmful interests.
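
To make the labeling issue in point 1 more concrete, here is a minimal, purely illustrative sketch in Python. The data, variable names, and effect sizes are all made up for illustration; the point is simply that a model trained to predict whom interviewers liked learns the interviewers' preferences (biases included), whereas a model trained on actual job performance does not.

```python
# Illustrative sketch (hypothetical data): the same candidate signals can be used to
# predict two different targets, and the choice of target determines what the model learns.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical candidate signals: one job-relevant (skill), one job-irrelevant (attractiveness).
skill = rng.normal(size=n)
attractiveness = rng.normal(size=n)
X = np.column_stack([skill, attractiveness])

# Target A: whether the interviewer liked the candidate (driven partly by attractiveness).
liked = (0.3 * skill + 0.7 * attractiveness + rng.normal(scale=0.5, size=n)) > 0

# Target B: actual on-the-job performance (driven by skill alone in this toy example).
performed = (1.0 * skill + rng.normal(scale=0.5, size=n)) > 0

model_liked = LogisticRegression().fit(X, liked)
model_perf = LogisticRegression().fit(X, performed)

# The model trained on "liked" puts much of its weight on the job-irrelevant signal,
# while the model trained on real performance largely ignores it.
print("weights when predicting interviewer ratings:", model_liked.coef_.round(2))
print("weights when predicting job performance:   ", model_perf.coef_.round(2))
```

In this toy setup, the interviewer-rating model ends up leaning heavily on the job-irrelevant signal, while the performance model largely ignores it - exactly the kind of discrepancy that only objective performance data can expose.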

To conclude, there's no doubt that AI could vastly elevate our ability to fix our hiring problems, so long as we first acknowledge and address some of the main historical limitations of our staffing processes, which are still very much alive today. Failure to do so will not only limit the potential contribution of AI, but also exacerbate existing problems.
