Apr 13, 2023

You have probably noticed this yourself, and many would agree: it is getting increasingly difficult to form a clear opinion about artificial intelligence (AI).

AI can be highly useful (ChatGPT), or it can be overly hyped (also ChatGPT).

It can improve ethical decision-making or cause widespread systemic discrimination.

It may be legal or illegal at any given time, depending on the jurisdiction.

Most confusingly of all, however, the phrase “artificial intelligence” connotes sentience, but it really just refers to a broad class of statistical techniques used to understand data.

Yes, it’s a lot more mundane than Terminators and HAL 9000s, but don’t be fooled – AI can also be a lot more insidious.

In some areas, the jury is still out

Don’t get me wrong. As AI has developed and become operational in the hiring space, it has become a legitimate value-add in talent acquisition and the larger HR tech stack.

If you consider AI’s location on the Gartner Hype Cycle (a methodology that maps how a technology or application will evolve over time), many AI techniques have matured past the early hype stage and are getting closer to the slope of enlightenment. This is where they will quietly become integrated into standard processes.

However, some types of AI are much earlier in their life and hype cycle (for instance, the metaverse, generated avatars, causal AI, large language models), and the jury is very much still out on where they will land.

Legal developments

In the absence of US federal legislation on the topic (note: the non-binding White House Blueprint for an AI Bill of Rights doesn’t count, but the European Union AI Act may ultimately be a model for the US and other regions), states and cities around the country have pushed forward their own attempts to regulate AI.

The most imminent new legislation is in New York City, whose Automated Employment Decision Tools law is scheduled to be enforced in just a few months’ time – July 5.

Here, it will require impacted companies to hire independent auditors to conduct so-called bias audits, post the resulting disparate impact calculations on their websites, and provide candidates with advance notice of AI tool usage.
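For context, the disparate impact figures at the heart of these audits are typically impact ratios: each demographic category’s selection rate divided by the selection rate of the most-selected category. The following minimal sketch illustrates that arithmetic in Python; the category labels and counts are hypothetical, and the NYC rules, not this sketch, define the legally required methodology.

    # Illustrative impact-ratio calculation in the spirit of a bias audit.
    # Category labels and counts are hypothetical; the NYC rules define
    # the actual required methodology.
    selections = {
        # category: (number selected, number of applicants)
        "group_a": (48, 100),
        "group_b": (30, 80),
        "group_c": (12, 40),
    }

    # Selection rate for each category
    rates = {g: sel / total for g, (sel, total) in selections.items()}

    # Impact ratio: each category's rate divided by the highest rate
    top_rate = max(rates.values())
    for g, rate in rates.items():
        print(f"{g}: selection rate {rate:.2f}, impact ratio {rate / top_rate:.2f}")

Under the long-standing four-fifths guideline in US employment practice, impact ratios below 0.8 are often flagged for scrutiny, though a flagged ratio is not by itself proof of illegal discrimination.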

However, this legislation continues to be debated and, as it currently stands, is fairly narrow in the scope of the tools that qualify as Automated Employment Decision Tools (AEDTs).

Standardization is needed

While these new AI laws are helpful, they are ultimately transitional.

What the industry really needs is standard legislation at the federal level, not a raft of slightly different regulations in each large city and state.

Furthermore – and this may be controversial – all of these new AI laws attempt to regulate whichever parts of the hiring process involve AI rather than the outcomes of the entire process. They assume that AI is the main problem, when the main problem has always been that humans make biased decisions.

We need to understand that AI can decrease bias – but only if it is developed properly.

NYC’s AEDT law requires companies to post disparate impact calculations that candidates can review, which may cause some to conclude that 1) small, insignificant differences are evidence of discrimination, and 2) these differences are illegal or discriminatory in nature.

Both could very well be the case, but then again, they might not be. The main point is that candidates won’t be given enough information to know.
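To make that concrete, here is a hedged sketch of why a published gap can look alarming while being statistically indistinguishable from chance. The numbers are hypothetical and not drawn from any real audit.

    # Illustrative check: a selection-rate gap that looks large may not be
    # statistically significant at small sample sizes. Hypothetical numbers.
    from scipy.stats import fisher_exact

    # 2x2 contingency table: rows = groups, columns = (selected, not selected)
    group_a = [9, 11]   # 20 applicants, 45% selected
    group_b = [6, 14]   # 20 applicants, 30% selected

    _, p_value = fisher_exact([group_a, group_b])

    impact_ratio = (6 / 20) / (9 / 20)   # 0.67, below the four-fifths guideline
    print(f"impact ratio: {impact_ratio:.2f}, p-value: {p_value:.2f}")
    # With only 20 applicants per group, the p-value lands far above 0.05:
    # this gap could easily be statistical noise.

A candidate reading only the posted impact ratio would see 0.67 and perhaps assume discrimination; a candidate reading only the p-value might dismiss the gap entirely. Neither figure alone settles the question.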

At the end of the day, AI tool outputs will be combined with human decisions that ample research shows will have varying degrees of built-in bias.

So, what really matters is the outcome of the overall hiring decision, not the outcomes of each part of the process.

Organizations should be transparent about the demographic makeup of their hiring pools, and about whether that makeup is equitable compared to local applicant demographics.

Legislating every step of the hiring process, whether it includes AI or not, misses the point that it is the overall decision that matters the most.

Procedural integration

Our survey research shows that many HR executives do not even know whether their company uses AI in the hiring process.

This is not overly surprising, given the vast variety of AI that exists, plus the fact that it is often baked into other functionality.

But as stated above, what really matters is how the stack works to hire better quality candidates, fairly and quickly.

Where AI was once viewed as a standout benefit of a tool, it can now be viewed as just another technique contributing to that tool.

This does not mean AI applications never warrant special attention – they are, after all, often extremely complicated and opaque in operation.

But while we may be tempted to gloss over the presence of AI in complex systems, it is vital that we remain vigilant to ensure such systems have net positive effects.

Speaking of vigilance…

While the AI used in hiring is becoming more civilized and regulated, the broader world of AI techniques continues to advance at a daunting pace.

AI has helped advance nuclear fusion research, and of course, it is front and center in OpenAI’s ChatGPT and Google’s newly announced competitor, Bard.

No chatbot is yet sentient, but you could be forgiven for thinking otherwise. Ask these bots a few complicated questions and you will be amazed at the coherence, structure, and general correctness of their responses.

Given all this, it is vitally important that we continue to push for transparency and controls on AI tools.

Over time, they will eat away at human responsibilities and autonomy, and they have already proven they can easily serve as bias vectors in the absence of careful checks and balances.

Despite the complexity of the AI landscape, the reality is that many AI techniques and uses will become more civilized, in terms of both legality and process integration.

Do not be lulled to sleep by the apparent assimilation of AI into our world, for the reality is just the opposite.