A predominance of whiteness among real and fictional examples of artificial intelligence (AI) risks spawning a “racially homogenous” workforce of technologists who could end up baking bias into machine algorithms – that’s the message from scholars at the University of Cambridge.
In their new research paper The Whiteness of AI, Dr Kanta Dihal and Dr Stephen Cave of Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI) highlight “the prevalent Whiteness of real and imagined intelligent machines” across four categories:
- humanoid robots;
- chatbots and virtual assistants;
- stock images of AI; and
- portrayals of AI in film and television.
Dihal and Cave argue that race and technology are “two of the most powerful and important categories for understanding the world as it has developed since at least the early modern period” – yet the entanglement of those categories “remains understudied”.
In the authors’ assessment, reasons for that shortfall of scholarly interest include “the lack of first- or secondhand accounts of the role of people of colour in the development and use of technology; persistent stereotypes about technology as the province and product of one particular racial group – White people; and the persistent tendency of members of that group, who dominate the academy in the US and Europe, to refuse to see themselves as racialised, or race as a matter of concern”. (Dihal & Cave, Philosophy & Technology, 6 August 2020)
In a statement, Dihal – head of the CFI’s ‘Decolonising AI’ initiative – notes: “One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard White middle-class English. Ideas of adding Black dialects have been dismissed as too controversial or outside the target market.”
The authors point out that one of the most visible current examples of real AI is Hanson Robotics’ creation Sophia, whom the United Nations named as its first-ever innovation champion. Sophia was designed with a Caucasian skin tone and speaking voice – as were NatWest’s prototype ‘cyber-teller’ Cora, which we covered on News & Views in March 2018, and the robot interviewer Tengai, which we covered one year later.
Dihal notes: “Portrayals of AI as White situate machines in a power hierarchy above currently marginalised groups, and relegate people of colour to positions below that of machines. As machines become increasingly central to automated decision-making in areas such as employment and criminal justice, this could be highly consequential.”
She adds: “The perceived Whiteness of AI will make it more difficult for people of colour to advance in the field. If the developer demographic does not diversify, AI stands to exacerbate racial inequality.” (University of Cambridge Press Office, 6 August 2020)
What must leaders do to counteract White bias in AI and hardwire a diversity-based outlook into emerging tools?
The Institute of Leadership & Management’s head of research, policy and standards Kate Cooper says: “Figures that have surfaced from gender pay gap reporting and the Black Lives Matter movement show that these biases certainly exist. If leaders are to eliminate them, they must first of all take them seriously and make a conscious decision to counteract them. But if you look at the demographic of key decision makers in high-tech industries, taking that first step is going to be really difficult.”
She notes: “Again, if we reflect upon the data from gender pay gap reporting, one of the biggest lessons is that securing greater diversity simply takes more effort than carrying on with the same-old-same-old. So leaders must actively recognise that there is a problem in the first place. However, as someone who has attended lots of technology conferences over the past few years, and has spoken to industry decision makers in person as well as seeing them onstage, I can say that they don’t view diversity in their field as anything like as important as the progress of the technology itself.
“Indeed, diversity is regarded as something that can be sorted out later, once the industry has reached certain innovation benchmarks. But as the Cambridge research shows, diversity must in fact be integrated with the broader development process.”
Cooper adds: “At a time of widespread acknowledgement that discrimination and bias aren’t doing organisations any good in terms of performance, technology leaders must say, ‘Right, let’s pull together some working groups – have we got proper representation on these issues? Let’s make a huge effort to adjust the way we recruit into our industry.’ It’s a huge task – but as with any push to overhaul longstanding practices and procedures, once the conversation has started, the process has begun.”
Image of Hanson Robotics’ AI humanoid Sophia courtesy of FeelGoodLuck, via Shutterstock