
Facebook Pushes Back Against Report That Claims Its AI Sucks at Detecting Hate Speech

Photo: Carl Court (Getty Images)

On Sunday, Facebook vice president of integrity Guy Rosen tooted the social media company's own horn for moderating toxic content, writing in a blog post that the prevalence of hate speech on the platform has fallen by nearly half since July 2020. The post appeared to be a response to a series of damning Wall Street Journal reports and testimony from whistleblower Frances Haugen outlining the ways the company is knowingly poisoning society.

“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress,” Rosen said. “This is not true.”

“We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it,” he continued. “What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.”

He argued that it is “wrong” to judge Facebook's success in tackling hate speech based solely on content removal, and that the declining visibility of this content is a more significant metric. For its internal metrics, Facebook tracks the prevalence of hate speech across its platform, which has dropped by nearly 50% over the past three quarters to 0.05% of content viewed, or about 5 views out of every 10,000, according to Rosen.

That's because when it comes to removing content, the company often errs on the side of caution, he explained. If Facebook suspects that a piece of content (whether a single post, a page, or an entire group) violates its rules but is “not confident enough” that it warrants removal, the content may still remain on the platform, but Facebook's internal systems will quietly limit the post's reach or drop it from recommendations for users.

“Prevalence tells us what violating content people see because we missed it,” Rosen said. “It's how we most objectively evaluate our progress, as it provides the most complete picture.”

Sunday also saw the release of the Journal's latest Facebook exposé. In it, Facebook employees told the outlet they were concerned the company isn't capable of reliably screening for offensive content. Two years ago, Facebook cut the amount of time its teams of human reviewers had to focus on hate-speech complaints from users and reduced the overall number of complaints, shifting instead to AI enforcement of the platform's rules, according to the Journal. This served to inflate the apparent success of Facebook's moderation technology in its public statistics, the employees claimed.

According to an earlier Journal report, an internal research team found in March that Facebook's automated systems were removing posts that generated between 3% and 5% of the views of hate speech on the platform. Those same systems flagged and removed an estimated 0.6% of all content that violated Facebook's policies against violence and incitement.

In her testimony before a Senate subcommittee earlier this month, Haugen echoed those statistics. She said Facebook's algorithmic systems can only catch “a very tiny minority” of offensive material, which is still concerning even if, as Rosen claims, only a fraction of users ever come across this content. Haugen previously worked as Facebook's lead product manager for civic misinformation and later joined the company's threat intelligence team. As part of her whistleblowing efforts, she has provided a trove of internal documents to the Journal revealing the inner workings of Facebook and how the company's own research proved how toxic its products are for users.

Facebook has vehemently disputed these reports, with the company's vice president of global affairs, Nick Clegg, calling them “deliberate mischaracterizations” that use cherry-picked quotes from leaked material to create “a deliberately lop-sided view of the wider facts.”

