Facebook Disputes Report That Its AI Regularly Fails To Detect Hate Speech Or Violence
In a new report, The Wall Street Journal said that Facebook's artificial intelligence is not consistently successful at removing objectionable content.
Facebook Vice President of Integrity Guy Rosen wrote in a blog post Sunday that the prevalence of hate speech on the platform had dropped by 50 percent in the past three years, and that "a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress" was false.
"We don't want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it," Rosen wrote. "What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions."
The post appeared to be in response to an article Sunday in the Wall Street Journal, which said that Facebook employees tasked with keeping offensive content off the platform do not believe the company can reliably detect it.
The WSJ report states that internal documents show that two years ago, Facebook reduced the time human reviewers focused on hate speech complaints and made other adjustments that reduced the number of complaints. That, in turn, helped create the appearance that Facebook's artificial intelligence had been more successful in enforcing company rules than it actually was, according to the WSJ.
A team of Facebook employees found in March that the company's automated systems were removing posts that generated only between 3 and 5 percent of views of hate speech on the platform, and less than 1 percent of all content that violated its rules against violence and incitement, the WSJ reported.
But Rosen argued that focusing on content removals alone was "the wrong way to look at how we fight hate speech." He said the technology to remove hate speech is just one method Facebook uses to fight it. "We need to be confident that something is hate speech before we remove it," Rosen said.
Instead, he said, the company believes that focusing on the prevalence of hate speech people actually see on the platform, and how the company reduces it using various tools, is the more important measure. He claimed that for every 10,000 views of content on Facebook, there were five views of hate speech. "Prevalence tells us what violating content people see because we missed it," Rosen wrote. "It's how we most objectively evaluate our progress, as it provides the most complete picture."
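For illustration, the prevalence measure Rosen describes is a simple ratio of views of violating content to total content views, distinct from the removal rates the WSJ cites. The sketch below is not Facebook's actual methodology; it only restates the 5-in-10,000 figure from his post as arithmetic.

```python
# Illustrative sketch of the "prevalence" metric Rosen describes:
# the share of all content views that turn out to be hate speech,
# as opposed to the share of violating posts that get removed.
# The input figures are taken from his stated example, not real data.

def prevalence(violating_views: int, total_views: int) -> float:
    """Return views of violating content as a fraction of all views."""
    return violating_views / total_views

# Rosen's figure: about 5 views of hate speech per 10,000 content views.
print(f"{prevalence(5, 10_000):.4%}")  # prints 0.0500%
```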
But internal documents obtained by the WSJ showed that significant categories of content evaded Facebook's detection, including videos of car accidents showing people with graphic injuries and violent threats against trans children.
The WSJ has produced a series of reports on Facebook based on internal documents provided by whistleblower Frances Haugen. She testified before Congress that the company was aware of the negative impact its Instagram platform could have on teenagers. Facebook has disputed reporting based on those internal documents.