Facebook-owned company seeking to stem spread of misinformation by removing dubious posts from searches
Instagram is adding an option for users to report posts they think are false, the Facebook-owned photo-sharing site has announced, as it tries to stem misinformation and other abuses.
Posts rated as false are removed from places where users seek out new content, such as Instagram’s “explore” tab and hashtag search results.
Facebook has 54 fact-checking partners working in 42 languages, but the program on Instagram is being rolled out only in the US for now.
“This is an initial step as we work towards a more comprehensive approach to tackling misinformation,” said Stephanie Otway, a Facebook company spokeswoman.
Posting false information is not banned on any of Facebook’s suite of social media services, but the company is taking steps to limit the reach of inaccurate material and to warn users about disputed claims.
Facebook started using image detection on Instagram in May to find content debunked on its flagship app and also expanded its third-party fact-checking program to the app.
Instagram has largely been spared the scrutiny associated with its parent company, which is in the crosshairs of regulators over Russian attempts to spread misinformation around the 2016 US presidential election.
But an independent report commissioned by the Senate select committee on intelligence found that it was “perhaps the most effective platform” for Russian actors trying to spread false information since the election.
Russian operatives appeared to shift much of their activity to Instagram, where engagement outperformed Facebook, according to researchers at New Knowledge, which conducted the analysis. “Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” they said.
Instagram has also come under pressure to block health hoaxes, including posts trying to dissuade people from getting vaccinated.
In February Instagram said it would ban graphic self-harm images after the death of the British teenager Molly Russell. The move came in response to a tide of public anger over the suicide of the 14-year-old, whose Instagram account contained distressing material about depression and suicide.
In July the UK-based charity Full Fact, one of Facebook’s fact-checking partners, called on it to provide more data on how flagged content is shared over time, expressing concerns over the effectiveness of the program.