UK fact-checking charity Full Fact has revealed Facebook told it to remove labels on an edited video of now-Labour leader Keir Starmer that was shared during the 2019 general election period.
Full Fact, which was paid £312,507 for its work on Facebook’s third-party fact-checking programme between July 2019 and November 2020, said in its second report on the collaboration that it was “surprised” by the platform’s decision.
The charity, led by chief executive Will Moy, said there was ambiguity about how to apply Facebook’s policy that “posts and ads from politicians are generally not subjected to fact-checking” during the 2019 general election, and that this was not resolved until September 2020, ahead of the US presidential race.
During the election campaign the Conservative Party edited footage of Starmer to make it look like he was unable to answer a question on Brexit.
Full Fact said the original video from the Conservatives was not in scope of its powers as it was posted by a political party, but that it did rate a number of reposted versions of the video shared by non-politicians and non-political pages.
However, Facebook asked Full Fact to remove these ratings. The charity said: “We were surprised by Facebook’s decision.
“The video was, by its very nature, not direct and unedited speech, as per the guidelines at the time. Although the content was originally edited together by the Conservative party, the video itself didn’t have any Conservative branding on it. The video was still misleading without the accompanying text in the Conservatives’ tweet and Facebook post.
“This is another example of where greater discussion is needed on how fact-checkers should interpret and enforce the political speech policy.”
Full Fact published a fact-check of the video on its own website but deleted the text at the bottom of the article giving the Facebook rating, which its reports on content identified through the programme usually carry.
Full Fact and other organisations are paid to rate the accuracy of content on the platform with labels which, after several changes, now comprise: “false”, “altered”, “partly false”, “missing context”, “satire”, and “true”.
Facebook reduces the distribution in its news feed of content to which certain of these labels are applied.
Full Fact said labels such as “unsubstantiated” and “more context needed” would have been useful earlier in the Covid-19 pandemic when, for example, there was no consensus on how long the virus could survive on surfaces as research was still ongoing.
The charity also revealed that false claims about ill effects from 5G made up a small part of its work before the pandemic but now account for a “significant amount” of what its staff see on social media.
It was “disappointed” that it took arson attacks on masts, prompted by people drawing false links between 5G and Covid-19, for the Government and public health bodies to respond, given that it had shared concerns about the risk of 5G conspiracy theories last year.
The charity also shared its “growing concern” that most major internet companies are trying to use artificial intelligence to scale up their fact-checking processes, but that none are doing so transparently or with independent assessment.
One of its main recommendations to Facebook is that it implements greater transparency around its use of AI in matching similar claims and content.
“While Facebook has provided some detail on its machine learning work this cannot be done in isolation,” Full Fact said.
“Substantial effort is needed by all internet companies to provide transparency on the data that powers algorithmic decision making and its effects, intended or otherwise.”
In its first report on the service in July 2019 Full Fact urged Facebook to expand it to Instagram, which it did soon afterwards.
However, Full Fact said third-party fact-checking can only be one part of an effective response to misinformation and disinformation, and that Facebook and other platforms need “scrutiny and oversight” of their other decisions in areas such as product design, advertising standards and rules for user behaviour.
“Our two main concerns continue to be transparency and scale,” the charity said.
It urged Facebook to invest in better technology to improve its claim matching, the “vital” process for mapping the spread of online misinformation which works by finding repetitions of problematic claims.
It also called for more data on the number of shares of content to be shown in the fact-checking queue, so that fact-checkers can understand what is going viral.
And it said publishers of content that is fact-checked should be given more information so they can appeal a decision more easily.
Keren Goldshlager, who works on Facebook’s fact-checking partnerships, said: “Fact-checking has a critical role to play in stopping the spread of misinformation on Facebook and across the broader internet.
“In the past year, we have grown our global network of fact checking partners to 80 organisations, working in 60 languages, fighting misinformation for critical events like elections and Covid-19. We know our efforts are working.
“From March to October of 2020, we labelled about 167m pieces of Covid-19 related Facebook posts, resulting in a 95% drop-off in click-through to the underlying false content.”