Amazon plans to be more proactive about removing websites and services from its cloud computing platform AWS, which is used by the likes of Netflix, Fox, and ITV.
A new team of experts will monitor and remove websites and services in violation of its terms of service, including those promoting violence, Reuters first reported.
This is a move that is likely to renew the debate about how much power large technology companies should have to restrict free speech.
Amazon made headlines last week for shutting down a website hosted on AWS that featured propaganda from Islamic State that celebrated the suicide bombing that killed an estimated 170 Afghans and 13 US troops in Kabul.
Rather than removing a single message in a social media app or a video on a website, Amazon would remove the entire website from the internet.
Speaking to MailOnline, Jake Moore, cyber security specialist at ESET, said: ‘They own a powerful market share in server space which essentially means this new rule could censor the internet.’
It isn’t clear how far the firm will go in removing content from major players like Netflix or Twitter, although Amazon made waves in January when it kicked ‘free speech’ social messaging app Parler off the platform after the US Capitol riot.
AWS’s offerings include cloud storage and virtual servers and it counts major companies like Netflix, Coca-Cola and Capital One as clients.
It also hosts content for a number of media companies, including Reach PLC’s websites, Twitter, Facebook and broadcasters such as the BBC and Fox.
‘Reuters’ reporting is wrong. AWS Trust & Safety has no plans to change its policies or processes, and the team has always existed,’ an AWS spokesperson said.
Amazon has a 40 per cent share of the cloud computing market, with a wide range of companies using its data centres – from social media firms to broadcasters.
The decision to begin removing any content that violates terms of service, including those inciting violence, would make Amazon one of the most powerful arbiters of what type of content is allowed on the internet, experts predict.
AWS is the system behind websites like Twitter, Netflix and even some major news services – it hosts the code that allows users to interact with each other or watch TV.
Activists and human rights groups are increasingly holding not just websites and apps accountable for harmful content, but also the underlying tech infrastructure that enables those sites to operate – services like Amazon AWS and Microsoft Azure.
AWS already prohibits its services from being used in a variety of ways, such as illegal or fraudulent activity, to incite or threaten violence or promote child sexual exploitation and abuse, according to its acceptable use policy.
Amazon first requests that customers remove content violating its policies, or put in place a system to moderate content uploaded by users.
If the firm cannot reach an acceptable agreement with the customer, it may take down the entire website – as it did in the case of Islamic State and Parler.
As part of this change, Amazon plans to develop a new approach towards content that is deemed misinformation – further fuelling the concerns over free speech.
‘Moderating and filtering online content always sounds proactive but the truth of it balances on a fine line between censoring the web and quashing free speech,’ said Jake Moore of ESET.
‘Whenever plans are put in place to remove content which violates the rules there can often be a backlash into what exactly is taken down.
‘Clearly there is a problem with harmful content on the internet which largely will be hosted by Amazon being one of the big players but when stringent new rules are created, teething problems are inevitable.’
Part of the new rules will set out at which point Amazon would step in to tell a customer, like Twitter or Twitch, to tackle the spread of ‘fake news’.
The new team within AWS does not plan to sift through the vast amounts of content that companies host on the cloud, but will aim to get ahead of future threats.
This will include being aware of emerging extremist groups whose content could make it onto the AWS cloud, a source close to AWS explained.
Amazon is currently hiring for a global head of policy on the AWS trust and safety team, which is responsible for ‘protecting AWS against a wide variety of abuse,’ according to a job posting on its website.
Better preparation against certain types of content could help Amazon avoid legal and public relations risk as lawmakers increasingly look to pin responsibility on the company hosting content, not just on the user uploading it.
‘If (Amazon) can get some of this stuff off proactively before it’s discovered and becomes a big news story, there’s value in avoiding that reputational damage,’ said Melissa Ryan, founder of CARD Strategies, a consulting firm that helps organizations understand extremism and online toxicity threats.
Cloud services such as AWS and other entities like domain registrars are considered the ‘backbone of the internet,’ but have traditionally been politically neutral services, according to a 2019 report.
But cloud services providers have removed content before, such as in the aftermath of the 2017 alt-right rally in Charlottesville, Virginia, helping to slow the organising ability of alt-right groups, the report revealed.
‘Most of these companies have understandably not wanted to get into content and not wanting to be the arbiter of thought,’ Ryan said.
‘But when you’re talking about hate and extremism, you have to take a stance.’
An AWS spokesperson said: ‘AWS Trust & Safety works to protect AWS customers, partners, and internet users from bad actors attempting to use our services for abusive or illegal purposes.
‘When AWS Trust & Safety is made aware of abusive or illegal behavior on AWS services, they act quickly to investigate and engage with customers to take appropriate actions.
‘AWS Trust & Safety does not pre-review content hosted by our customers. As AWS continues to expand, we expect this team to continue to grow.’