It’s a case of ‘back to the drawing board’ for backroom techies at Amazon: the online retail behemoth has reportedly scrapped an artificial intelligence tool it developed to filter job applications. According to a 10 October Reuters piece, [1] the algorithmic CV-scanning software was junked after it proved to be inadvertently sexist.

The technology had been under development since 2014, the report notes, but as early as 2015 senior Amazon staff discovered that it “was not rating candidates for software developer jobs and other technical posts in a gender-neutral way”.

The report explains: “That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men – a reflection of male dominance across the tech industry. In effect, Amazon’s system taught itself that male candidates were preferable.”

Among the algorithm’s mistakes was penalising resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’ It also downgraded graduates of two all-female colleges. Reuters notes: “Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.”
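That last caveat is worth unpacking. The sketch below is purely illustrative and is not Amazon’s system: it uses fabricated toy resumes, assumes the scikit-learn library, and uses ‘college_b’ as a hypothetical stand-in for an all-female college. It shows how a model trained on skewed historical outcomes can keep discriminating after an explicit term is scrubbed, by shifting weight onto a correlated proxy feature.

```python
# Illustrative sketch only -- fabricated data, assumes scikit-learn.
# Deleting one flagged word is no fix: the model simply moves the same
# predictive signal onto features that correlate with it.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical resumes with hiring outcomes (1 = hired) that skew
# against candidates associated with 'college_b'.
resumes = [
    "chess club captain college_a python",
    "software engineer college_a java",
    "women's chess club captain college_b python",
    "software engineer college_b java",
] * 25
hired = [1, 1, 0, 0] * 25  # the biased historical outcomes

# The "neutralising" edit: strip the explicitly gendered term before training.
scrubbed = [r.replace("women's ", "") for r in resumes]

vec = CountVectorizer()
X = vec.fit_transform(scrubbed)
model = LogisticRegression().fit(X, hired)

# The word "women's" never reaches the model, yet 'college_b' now carries
# the same negative weight it would have carried.
for word, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{word:12s} {weight:+.2f}")
```

The explicit term is gone, yet the proxy inherits its predictive role, which is exactly why term-by-term edits offered “no guarantee” of a fair system.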

Individuals familiar with the matter told Reuters anonymously that Amazon disbanded the development team in early 2017, after losing faith in the project. One of those people told the news agency: “Everyone wanted this holy grail. They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

While the retailer has not commented at length on the failed initiative, it confirmed that the system “was never used by Amazon recruiters to evaluate candidates”.

In its 2017 report Artificial Intelligence in HR: a No-brainer, [2] PwC gave an upbeat assessment of AI tools’ growing presence within HR departments – but conceded that many HR professionals “are reluctant to embrace” such breakthrough technology. It added: “Some feel algorithms can never replace human empathy and intuition.”

Given Amazon’s travails in this field, are those doubters on to something?

The Institute of Leadership & Management’s head of research, policy and standards, Kate Cooper, says: “One of the most eloquent critics of AI is the technology writer Audrey Watters. She very perceptively wrote: ‘AI is not developed in a vacuum. AI isn’t simply technological: it’s ideological. So when we talk about the future of AI and how AI might threaten our ability to address social inequalities and our ability to organise to oppose power structures, we must remember that AI reflects beliefs and practices that are already in place.’ [3] In other words, the performance of AI tools absolutely depends upon what humans have taught them.

“AI doesn’t come from another planet, or from a society of machines that is free of bias. It is shaped and underpinned by the same beings who create every other code by which we live – and that’s us. So it’s strange to me that we are surprised when certain use cases reveal that AI has issues with bias. We should be far more surprised when it doesn’t.”

Cooper points out: “In the vast majority of cases, AI tools’ decisions are predicated upon their understanding of historical data. That’s how machine learning works: the system must first digest a corpus of information that has been created and collated by human beings. The AI tool then proceeds with the task it has been given on the basis of patterns it has detected in that volume of data. Those patterns effectively provide the software with assumptions. So if you put in, say, the profiles of 100 successful leaders, unless you went to huge lengths to ensure that those profiles were representative, diverse and inclusive, the system will tack towards the most obvious generalities that stand out from the material you have fed it.”
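To make that dynamic concrete, here is a minimal sketch using fabricated data and only the Python standard library; the 100 ‘leader profiles’ and their traits are entirely hypothetical. A system that scores candidates by similarity to its historical exemplars simply reproduces whatever the majority of those exemplars happen to share.

```python
# Fabricated example: 100 historical "successful leader" profiles,
# 90 of which share background trait A and 10 of which share trait B.
history = [{"trait_a": 1, "trait_b": 0}] * 90 + [{"trait_a": 0, "trait_b": 1}] * 10

# The "pattern" the system extracts is just the average historical profile.
centroid = {k: sum(p[k] for p in history) / len(history) for k in history[0]}
print(centroid)  # {'trait_a': 0.9, 'trait_b': 0.1}

def score(candidate: dict) -> float:
    """Rank a candidate by similarity to the learned average profile."""
    return sum(centroid[k] * candidate.get(k, 0) for k in centroid)

# Two equally capable candidates, differing only in background:
print(score({"trait_a": 1, "trait_b": 0}))  # 0.9 -- resembles the majority
print(score({"trait_a": 0, "trait_b": 1}))  # 0.1 -- penalised by the sample
```

Unless the exemplars are deliberately balanced, as Cooper argues, the most obvious generalities in the training sample dominate the score.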

Cooper notes: “Algorithms are made by people who have been brought up around the same social codes with which we all, from time to time, grapple. And those innovators will typically be approaching AI challenges from the angle of computer science, rather than social science. It is unlikely that those individuals would have been urged to reflect upon their own unconscious biases.”

She adds: “My understanding of the current shape of AI is informed by a group of subject-matter experts that I met at last year’s Future of Learning Conference in Iceland. They said that AI is far away from understanding human emotions from cues such as changes in vocal registers or shifts of body language. So we can speculate that, in an alternative version of Amazon’s experiment, once the company had overcome the first hurdle of collating and feeding in a more representative sample of resumes, an element of human input would be required to fine-tune the results. And in my view, references would certainly make a valuable addition to the equation.”

For further thoughts on HR matters, check out these learning resources from the Institute

Source refs: [1] [2] [3]
 
