Social media companies were urged to accelerate efforts to tackle hate crime after facing tough questions from MPs over their handling of online abuse.
Senior executives from Twitter, Facebook and Google, which owns YouTube, were accused of profiting from violence and criticised for failing to remove offensive content as they appeared before the Commons’ Home Affairs Select Committee on Tuesday.
Yvette Cooper, the chairwoman of the committee, said algorithms used by the companies to suggest relevant content were helping to radicalise and groom users.
The session comes after Labour suggested social media companies should face “punitive” fines for failing to react quickly to offensive material that incites hatred and violence.
Press Gazette’s Duopoly campaign has raised concerns that Google and Facebook are undermining the news industry by taking most digital advertising. Some news publishers believe it is impossible to compete with the media giants because they obtain most of their content for free and take little legal responsibility for what appears.
Cooper said progress had been made in the companies’ approach to online hate crime, but told the witness panel: “The reason we are pressing you so hard about this is because it is so important.
“Because, in the end, this is about the kinds of extremism, whether that be Islamic extremism or far-right extremism, that can lead to very violent incidents.
“It is about the kinds of hate crime that destroy lives. It is about the kind of harassment and abuse that can undermine political debate and undermine democracy, and you are some of the richest companies in the world.
“And that is why we need you to accelerate and we need you to do more.”
The hearing took place the day after Twitter suspended a number of accounts, including that of Britain First’s deputy leader Jayda Fransen, who gained notoriety when three anti-Muslim videos she posted were retweeted by US president Donald Trump.
Other accounts which appear to have been suspended for violating the new rules are @BritainFirstHQ and that of leader Paul Golding, @Goldingbf.
Facebook’s director of public policy, Simon Milner, told the committee that Britain First’s page on its platform was under review, with both online and offline behaviour being considered by moderators.
He said: “There are clearly issues with their page on Facebook. There’s been a number of pieces of content taken down.
“We are obviously reviewing it but we are very, very cautious about political speech.”
Cooper said she found it hard to believe enough was being done to tackle hate crime, after it emerged anti-Semitic and abusive tweets flagged by MPs at an earlier committee hearing had not been removed.
The chairwoman added that a series of violent tweets, including threats against Theresa May and racist abuse towards shadow home secretary Diane Abbott, were reported by her office but also remained on the site.
Addressing Twitter’s Sinead McSweeney, vice president of public policy and communication for Europe, the Middle East and Africa (Emea), Cooper said: “I’m kind of wondering what we have to do.
“We sat in this committee in a public hearing and raised a clearly vile anti-Semitic tweet with your organisation.
“It was discussed and it is still there, and everybody accepted, you’ve accepted, your predecessor accepted, that it was unacceptable. But it is still there on the platform.
“What is it that we have got to do to get you to take it down?”
She added: “It’s very hard for us to believe that enough is being done when everybody else across the country raises concerns.”
Conservative MP Tim Loughton said the technology giants were inciting violence through inaction.
“This is not about taking away somebody’s rights to criticise somebody whose politics they don’t agree with,” he said.
“It’s about not providing a platform – whatever the ills of society you want to blame it on – for placing stuff that incites people to kill, harm, maim, incite violence against people, because of their political beliefs in this case.”
He added: “You are profiting, I’m afraid, from the fact that people are using your platforms to further the ills of society and you’re allowing them to do it and doing very little, proactively, to prevent them.”
Cooper said that algorithms used by Twitter, Facebook and YouTube were leading people to extremist material.
She said: “The police have said very clearly to us that they are extremely worried about online radicalisation and online grooming.
“Isn’t the real truth that your algorithms and the way in which you want to attract people to look at other linked and connected things is actually that your algorithms are doing that grooming and that radicalisation?”
Ms McSweeney said there were “balances and risks” in how content is flagged for recommendation to avoid censoring items, while Mr Milner said Facebook was actively working with communities to tackle radicalisation online.
He said: “I disagree that that’s what the technology is doing, but I do recognise we have a shared problem with the police, with yourselves, with civil society organisations on how do we address that person who may be going down a channel which can lead to them being radicalised, either on the left or on the right, and ultimately becoming extreme.”