Keen observers of the tech industry have had a feast of decision-making contrasts to explore over the past fortnight, following Facebook boss Mark Zuckerberg’s controversial 17 October speech to students at Washington DC’s Georgetown University. [1]

In his address, Zuckerberg mounted a staunch defence of free expression on his network, calling misinformation a “nuanced” category of discourse.

His most contentious remarks focused on deliberately misleading political advertising: “I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true,” he said. “We don’t fact-check political ads … And if content is newsworthy, we also won’t take it down, even if it would otherwise conflict with many of our standards.” Zuckerberg added: “As a principle, in a democracy, I believe people should decide what is credible, not tech companies.”

Zuckerberg’s stance was widely seen as condoning the proliferation of mendacious content, at a time when the US is poised for a critical election year.

In an open letter in the New York Times, [2] screenwriter Aaron Sorkin – who charted the tech mogul’s rise in biopic The Social Network – pointed out the “irony” of the speech, given that the film’s script was heavily vetted by studio lawyers to avoid the prospect of a Zuckerberg lawsuit.

Sorkin told Zuckerberg: “I admire your deep belief in free speech … But this can’t possibly be the outcome you and I want, to have crazy lies pumped into the water supply that corrupt the most important decisions we make together … You and I want speech protections to make sure no one gets imprisoned or killed for saying or writing something unpopular, not to ensure that lies have unfettered access to the American electorate.”

However, the debate became even more interesting on 30 October when Twitter CEO Jack Dorsey published a series of tweets [3] announcing that his platform had banned political ads outright. He explained: “A political message earns reach when people decide to follow an account or retweet. Paying for reach removes that decision, forcing highly optimised and targeted political messages on people. We believe this decision should not be compromised by money.”

For example, he added, “it’s not credible for us to say: ‘We’re working hard to stop people from gaming our systems to spread misleading info, buuut [sic] if someone pays us to target and force people to see their political ad… well… they can say whatever they want!’”

Speaking on BBC Radio 4’s Today programme this morning, Elizabeth Braw – senior research fellow at leading security and defence think tank the Royal United Services Institute (RUSI) – said: “Social media networks clearly have a responsibility to keep their networks clean from disinformation – but even so, a lot of disinformation reaches ordinary citizens. And I think actually we all have a responsibility to think twice before we give credence to what we see or read, or even share that content.” She added: “If we look at the countries that are the most resilient against disinformation, the [top] country is Finland, where … information literacy is taught in schools. And then the UK is number ten. So, not too bad – but we can still clearly do a lot better.” [4]

What lessons should leaders take from the sharp contrast between Facebook’s and Twitter’s policies?

The Institute of Leadership & Management’s head of research, policy and standards Kate Cooper says: “For any leader, this is all about the extent to which you trust the information with which you have been presented. And it highlights a series of questions that leaders should ask themselves every day, such as: where has this information come from? Is it the best evidence available? And is it helping to inform my decisions in a constructive way? That regular questioning of information is something we should all be prepared to undertake. It’s critical thinking, which is a cornerstone of many degree programmes and, as Braw suggests, should certainly also be on the school curriculum.”

Cooper notes: “Leaders shouldn’t regard the process of sourcing and referencing as some sort of academic box-ticking exercise, or an inconvenient addition to their paperwork. Getting into the habit of questioning the information we receive is a critical step towards sharpening the decisions we make. That doesn’t just apply to what Facebook and Twitter users must do in their consumption of social media – it’s incumbent upon everyone in our organisations who makes decisions on the basis of third-party information. Who has provided this data? How can I trust it? And is it of a dependably high quality?”

For further insights on the themes raised in this blog, check out the Institute’s resources on decision making.

Source refs: [1] [2] [3] [4]


Image of Mark Zuckerberg testifying before the House Financial Services Committee last month courtesy of Aaron Schwartz, via Shutterstock