The drawbacks of algorithmic management have come under media scrutiny, with three recent articles pointing out that the opaque workings of the underlying software, and its status as valuable intellectual property, are routinely leaving workers disillusioned.
In the most analytical of the pieces, the Harvard Business Review explores a study of US Uber drivers that aimed to find out what freelancers hate about algorithmic management. It says: “Uber drivers, as well as other gig economy workers such as courier and delivery workers at Postmates and Deliveroo, are demanding more transparency about the allocation of jobs, the compilation of their ratings, and their payment structure.
“However, companies such as Uber argue they can’t reveal the secret recipe of their algorithms to competitors. Furthermore, recent advances in AI and machine learning mean that algorithms can now learn and dynamically adjust to any given environment, allowing for the automation of more sophisticated tasks (such as managing the workforce). But the more sophisticated these algorithms get, the more opaque they are, even to their creators.”
Penned by two of the study’s authors, the HBR piece notes that the drivers’ three biggest sticking points with algorithmic management are:
- the constant surveillance required to feed Uber’s algorithm;
- the lack of transparency about how the algorithm works, compared with how much it knows about them; and
- feelings of loneliness and isolation stemming from management-by-software, contributing to a general sense of dehumanisation.
In a second piece, published by The Guardian, UK Uber driver James Farrar – head of the private hire drivers branch of the Independent Workers of Great Britain (IWGB) union – picks up on the transparency issues spawned by information asymmetries, saying: “[Algorithms] do collect an awful lot of information. My concern with it is … we should have access to the data, and understand how it’s being used.” 
In the third piece, at Personnel Today, employment lawyer Jonathan Rennie explores a recent interim report from the Centre for Data Ethics and Innovation on bias in algorithmic decision making, noting: “The report highlights that decision-making processes that are driven by algorithms can share some of the same vulnerabilities as a human decision-making process. One issue is that the data or evidence on which decisions are made may be biased, because the people writing the algorithms allow their own prejudices to creep into the system.” 
Is algorithmic management too inherently flawed to create real and lasting efficiencies?
The Institute of Leadership & Management’s head of research, policy and standards, Kate Cooper, says: “The idea that management-by-algorithm hands over the reins to some sort of higher intelligence, and that all the inefficiencies and biases that emerge when humans organise allocations and rotas suddenly vanish, is simply not credible. It is well known that historical data provided by humans, based on decisions that humans have made, informs and shapes the foundations upon which AI tools come up with their own decisions.”
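Cooper’s point about historical data can be made concrete with a toy sketch. This is an entirely hypothetical example, not a description of Uber’s or anyone else’s actual system: a trivially simple “model” fitted to skewed historical ratings will simply reproduce that skew in its predictions.

```python
# Toy illustration (hypothetical data, hypothetical rule): a "model"
# trained on biased historical decisions reproduces the bias.
from collections import Counter

# Imagined historical records: (group, past_rating). Group A was
# historically rated "high" far more often than group B.
history = [("A", "high")] * 80 + [("A", "low")] * 20 \
        + [("B", "high")] * 30 + [("B", "low")] * 70

def majority_rating(group):
    """Predict the most common historical rating for a group --
    the simplest possible 'learned' rule."""
    ratings = Counter(r for g, r in history if g == group)
    return ratings.most_common(1)[0][0]

# The learned rule inherits the skew in the training data:
print(majority_rating("A"))  # high
print(majority_rating("B"))  # low
```

Real systems use far richer features and models, but the mechanism is the same: whatever patterns, fair or unfair, sit in the historical decisions become the foundation of the automated ones.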
Cooper argues: “It makes sense for Uber drivers to know how they are being rated and ranked – and I suspect that if the algorithm’s reasoning came to light, we would find biases and curiosities within the system. But we must remember that this stuff is all really new. Just think back to the very earliest computer models that schools used for timetabling. Often you would find that people were timetabled to be in a room that was either in a different building or a very long walk from the room they had been in before, with no time factored in for getting from A to B. Yes, that’s a crude example – but it highlights the rigidity that often blights digital decision making.”
She notes: “Of course, adjustments are made over time, so AI technologies with HR and performance impacts will get better and become more efficient – and it’s in our interests to ensure that happens. I recently listened to a radio interview with a healthcare manager who was about to head into work and manually organise the rota for more than 100 midwives and support staff. I thought immediately that it sounded like a terrifically complex task for a human to address, and one that would be so much more straightforward for a computer to handle. So, let’s embrace these tools by all means – but let’s also ensure that we learn how to make them work for the benefit of everyone they affect.”
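The rota problem Cooper describes is exactly the kind of allocation task computers handle well. As a minimal sketch (a hypothetical toy scheduler, not any real HR product), even a few lines of code can assign shifts round-robin while respecting a cap on shifts per person:

```python
# Toy rota builder (hypothetical): assign staff to shifts in rotation,
# skipping anyone who has already reached their shift cap.
from itertools import cycle

def build_rota(staff, shifts, max_per_person):
    counts = {person: 0 for person in staff}
    rota = {}
    pool = cycle(staff)  # endless round-robin over the staff list
    for shift in shifts:
        for person in pool:
            if counts[person] < max_per_person:
                rota[shift] = person
                counts[person] += 1
                break
    return rota

# 7 days, two shifts a day, four staff, at most four shifts each.
shifts = [f"day{d}-{part}" for d in range(1, 8) for part in ("early", "late")]
rota = build_rota(["Asha", "Ben", "Carol", "Dee"], shifts, max_per_person=4)
print(len(rota))  # 14 shifts covered
```

A production scheduler for 100 midwives would add real constraints (rest periods, skills mix, leave), typically via a constraint solver, but the fairness and auditability questions Cooper raises apply regardless of how sophisticated the solver is.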
She adds: “The key factor for ensuring that we will learn how to use these tools in the best and fairest way is transparency. How are algorithms arriving at their ratings and rankings? What are the protocols that are governing their decisions? And how could they be refined?”