Social networks finally accept they are not immune to responsibility
It is easy to portray this as a debate between free speech absolutists and hand-wringers demanding that something must be done; between the two lie the real challenges of how regulation of ubiquitous transnational networks can actually work.
The removal of President Donald Trump’s social media privileges has revived the debate around social media regulation and what responsibility these multi-billion-dollar entities should take for the material they host.
The answer at the moment is a patchwork of regulation in the countries where users reside. To understand why the arguments are so polarised, it is important to start with how the networks grew up.

In the US, social networks are protected from liability for content by s230 of the Communications Decency Act, a 1996 federal law intended to promote emerging internet businesses, which predates social media as we now know it and states flatly that they “shall not be treated as the publisher”. This legal shield, in place throughout their lives, is why social networks find the argument so simple, and it informs their stance around the world: their home jurisdiction tells them they are platforms with no liability for content.
It is why the networks have pushed back so strongly against the approaches of other jurisdictions, whether the way they eventually accrue liability for defamatory content in England or the EU’s requirement that search results comply with its data protection laws, including the right to erasure.
Time for change?
The regulatory attitude is changing, though. In the US, 2020 saw renewed debate over amending the protections: the Department of Justice, the US Senate, an array of presidential candidates and even a mooted Presidential Executive Order all toyed with walking back the absolute protection of s230. The debate on online harms has grown elsewhere too, with the UK bringing forward proposals for legislation of some form at least.
It is easy to view the social networks’ move to remove Trump’s content as a classic example of an industry facing fresh regulation acting to show the status quo works. Easy, and not necessarily wrong. Perhaps just as likely, though, is the realisation that being seen to play a supporting role in an attack on democracy itself is a bad look from a corporate reputation perspective.
The problem with political speech
It looks almost inevitable that fresh regulation will follow in many jurisdictions, but that opens the hardest question: what should that regulation look like? The difficult truth is that political content is hard to regulate because politicians prepared to say things so outrageous that social networks feel they must be deleted are just as likely to be willing to rewrite the regulation in retaliation.
It is no coincidence that the strongest actions against Trump and his assault on democracy came only after President-elect Biden’s victory was confirmed by the joint session of Congress in the US Capitol; nor is the liberal treatment granted to entrenched regimes in commercially valuable territories encouraging. You need only look at access to social networks in China and Iran to see where displeasing leaders eventually leads.
With the introduction of content warnings in the last year (if not consistency in which obnoxious political speech is deleted), there are signs that social networks are accepting they cannot remain immune from responsibility forever; it is just a shame it has taken a violent assault on the US Capitol for them to realise it. And just because political speech is hard to regulate does not mean social networks should not be regulated at all.
Plenty of content, from fraud to hate speech to dire material inciting self-harm, remains online, and the networks are fully aware of it. Moves towards regulation should focus on specific types of harm that can be prevented. It is hard to shake the belief that if social networks had been incentivised to innovate as creatively in removing harmful content as they have been in finding ways to keep eyes on screens, many of these problems could have been avoided.