
For years now, the bosses of big tech firms have been masters of the universe. Top graduates wanted to work for them. Politicians wanted to be pictured with them. Everybody wanted to use
their services. But something has changed recently. After a string of scandals, big tech is increasingly viewed with suspicion, and politicians are pushing back accordingly. This week, the
UK government launched its Online Harms White Paper, bragging that it was unveiling ‘tough new measures to ensure the UK is the safest place in the world to be online.’ ‘The era of
self-regulation for online companies is over’, declared Digital Secretary Jeremy Wright. ‘We are forcing these firms to clean up their act once and for all’, promised Home Secretary Sajid
Javid. The proposals included a mandatory ‘duty of care’ for social media firms, a new independent regulator, and even making senior executives at social media firms personally liable for
harmful content on their platforms. All quite reasonable ideas on the surface. But the government’s macho language should have been the first warning sign that all was not well with the
proposals. A lot, perhaps too much, will come down to how ‘harmful content’ is defined. And who gets to define it? A state-appointed body doing so makes
me distinctly uncomfortable, even if it is ostensibly independent. We’ve had examples of this going wrong already – attempts to block content related to suicide ending up actually stopping
people accessing help, for instance. In the tricky digital sphere, attempts to do good can have unintended consequences. Going back to first principles, freedom of speech is a vital tenet of a liberal, democratic society, and one that we limit at our peril. It is hard to welcome anything that is, at its core, an increase in state-sanctioned censorship. It would not be too difficult
for something published online that was unpalatable to the government to be branded fake news or harmful. We also need to see far more about how the regulator itself will be held to account
for its decisions. Accountability works both ways. Despite these reservations, there can be little doubt that the current online free-for-all isn’t working. For many, the horrifying
live-streaming of the terrorist attack against mosques in Christchurch was the final straw. For others, it is how easy it is to access content relating to self-harm and suicide – brought to
the fore by the tragic suicide of Molly Russell in 2017, aged 14. Her parents attribute her death directly to content she had seen online. Then there is the growing presence of anti-vaxxers
online, leading to measles outbreaks, which Olivia Utley so rightly highlighted here. We know, too, that all sorts of radicalisation are taking place online, from the recruitment of jihadists to the incitement of white supremacists, as well as child abuse. That is even before we get into the realm of politically motivated fake news and the disruption it can cause. Despite all those
concerns, which are legitimate, freedom on the Internet is important and worth fighting for. It is fair to say, though, that the social networks themselves have done little to help
themselves in dealing with this. They’ve portrayed themselves as neutral online platforms, all the while profiting handsomely from content and communities that cause real-world harm. As
NSPCC CEO Peter Wanless put it: ‘For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content.’ The arrogant way
Facebook approached Parliamentary inquiries in the UK will have done little to endear them to politicians either. Facebook boss Mark Zuckerberg’s recent conversion to data privacy is far too
little, far too late. Tech firms needed to engage properly in the discussion, publicly and early on, instead of just making excuses about why they couldn’t deal with problems on their
services. They pretended it was all too hard, which simply isn’t true. These are firms with grand ambitions to launch satellites and bring the Internet to the world’s poorest people; they can certainly make sure their most vulnerable users are safe. They have not, and one can only assume that is because they didn’t want to. However bad their behaviour has been, though, it should not be met with a bad response. I do not for one moment believe that the British government is planning a wholesale assault on freedom of expression online. But they are going to have to
use the current 12-week consultation period to clarify the kind of content they are targeting, and allay many of the concerns their proposals have raised.