The UK government is taking a hard line when it comes to online safety, moving to establish what it says is the world’s first independent regulator to keep social media companies in check.
Companies that fail to meet the requirements will face substantial fines, and senior directors proven to have been negligent will be held personally liable. Offending companies could also have access to their sites blocked.
The new measures, designed to make the internet a safer place, were announced jointly by the Home Office and the Department for Digital, Culture, Media and Sport. The introduction of the regulator is the central recommendation of a highly anticipated government white paper, titled Online Harms, published Monday in the UK.
The regulator will be tasked with ensuring social media companies tackle a range of online problems, including:
- Incitement of violence and the spread of violent (including terrorist) content
- Encouragement of self-harm or suicide
- The spread of disinformation and fake news
- Cyberbullying
- Children’s access to inappropriate material
- Child exploitation and abuse content
As well as applying to the major social networks, such as Facebook, YouTube and Twitter, the requirements will also have to be met by file-hosting sites, online forums, messaging services and search engines.
“For too long these companies have not done enough to protect users, especially children and young people, from harmful content,” UK Prime Minister Theresa May said in a statement. “We have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe.”
Google and Facebook didn’t immediately respond to a request for comment.
The UK government is still deciding whether to hand the job to an existing regulator or to create a brand-new body purely for this purpose. The regulator will initially be funded by the tech industry, and the government is considering a levy on social media companies.
“The era of self-regulation for online companies is over,” Digital Secretary Jeremy Wright said in a statement. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”
The global move toward regulation
The measures announced by the UK on Monday are part of a larger global move toward greater regulation for big tech. The efforts originated in Europe, but have been gaining traction in the US, as well as with the leaders of tech companies, including Mark Zuckerberg and Tim Cook.
At a time of great political upheaval in the UK, the government has chosen to stand up to Silicon Valley tech companies, even while hoping they'll continue to create local jobs once the country leaves the EU. Some elements of the new regulatory process remain up for debate.
Damian Collins, chair of Parliament’s Digital, Culture, Media and Sport Committee, which recently published a report on fake news that branded social media companies as “digital gangsters,” said it’s important that the regulator has the power to launch investigations when necessary.
“The regulator cannot rely on self-reporting by the companies,” he said. “In a case like that of the Christchurch terrorist attack for example, a regulator should have the power to investigate how content of that atrocity was shared and why more was not done to stop it sooner.”
Vinous Ali, head of policy for the industry body techUK, welcomed the publication of the white paper but said in a statement that some elements of the government's approach remain "too vague" and that the government will need to be clear about exactly what it wants the regulator to achieve. The "duty of care" the government believes social media companies owe their users is not clearly defined and is open to broad interpretation, she added.
The Internet Association, which represents a long list of the world’s biggest tech companies, including Facebook, Google and Twitter, said it’s important that any proposals are practical for platforms to implement regardless of their size.
A spokeswoman for Twitter said in a statement that the company is committed to prioritizing the safety of users, pointing to more than 70 changes the platform made last year.
“We will continue to engage in the discussion between industry and the UK Government,” she said, “as well as work to strike an appropriate balance between keeping users safe and preserving the internet’s open, free nature.”
What about free speech?
Twitter isn't the only one to raise concerns about the internet's openness in relation to the UK government's plan.
Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, found two potential problems with the proposals.
“First, it raises the specter of government censorship,” he said in a statement. “Second, it shows how the failure of the social media companies to exercise more vigorous self-governance, especially when it comes to disinformation, has created the risk of government overreach.”
Digital rights groups reiterated the concern that overly harsh regulation of social media could lead to free speech and privacy violations.
“This is an unprecedented attack on freedom of speech that will see internet giants monitoring the communications of billions and censoring lawful speech,” Big Brother Watch said in a tweet.
Joy Hyvarinen, head of advocacy for Index on Censorship, said in a statement she’s concerned that “protecting freedom of expression is less important than the government wanting to be seen as ‘doing something’ in response to public pressure.”
“Internet regulation needs a calm, evidence-based approach that safeguards freedom of expression rather than undermining it,” she added.
Before the proposals for greater regulation go any further, they'll face a vote in Parliament, and elements of the plan released Monday could change along the way.