
Facebook AI: Confuses shooting videos with paintball games

Facebook CEO Mark Zuckerberg made an optimistic statement three years ago about his company's progress on automated surveillance tools backed by artificial intelligence (AI). "By the end of 2019, we expect to have trained our systems to proactively detect the vast majority of problematic content," he said in November 2018.


See also: Facebook: Contradicts claims that AI does not fight misinformation

But internal Facebook documents revealed in March that the company had found its automated AI surveillance tools were not working with high success rates: the posts they removed accounted for only a small fraction of the hate speech and of the violence and incitement on the platform. Posts removed by the artificial intelligence (AI) tools accounted for only 3-5% of views of hate speech and 0.6% of views of violence and incitement.

Facebook takes a sample of posts, applies its artificial intelligence tools to them, and then asks moderators to evaluate the AI's accuracy. It then uses this fraction to estimate how much hate speech or violence and incitement is missed across the platform.
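The sampling approach described above can be sketched in a few lines. This is an illustrative toy model, not Facebook's actual pipeline; the function name, inputs, and all numbers are hypothetical assumptions.

```python
import random

def estimate_missed_fraction(posts, ai_flagged, human_label, sample_size=1000):
    """Estimate the share of violating posts the AI missed.

    posts: list of post IDs on the platform
    ai_flagged: set of post IDs the AI removed
    human_label: function post_id -> True if a moderator judges it violating
    """
    # Draw a random sample and let human moderators label it.
    sample = random.sample(posts, min(sample_size, len(posts)))
    violating = [p for p in sample if human_label(p)]
    if not violating:
        return 0.0
    # Of the violating posts in the sample, how many did the AI catch?
    caught = sum(1 for p in violating if p in ai_flagged)
    return 1 - caught / len(violating)
```

For example, if 10% of posts violate policy but the AI removes only a twentieth of them, the estimator will return a missed fraction near 0.95, matching the "small fraction removed" finding described in the documents.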

Different statistics

Facebook's internal view of its artificial intelligence (AI) moderation tools appears far more pessimistic than what it says in public; what the company discusses internally is communicated to the public in a completely different way. In public statements, Facebook cited the percentage of hate speech that AI discovered before users reported it, a very high number: 98 percent. The problem is that there are many instances of hate speech that users never report at all.
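The two statistics are not contradictory, because they have different denominators. The arithmetic below uses purely hypothetical numbers (not from the documents) to show how a 98% "proactive" rate can coexist with AI removals covering only about 5% of all hate speech:

```python
# Hypothetical illustration: the public figure divides by posts that were
# removed at all, while the internal figure divides by ALL hate-speech posts.
total_hate_posts = 10_000        # all hate-speech posts on the platform (assumed)
removed = 510                    # posts removed by any means (assumed)
removed_proactively = 500        # removed by AI before any user report (assumed)

proactive_rate = removed_proactively / removed          # the publicly cited metric
share_of_all = removed_proactively / total_hate_posts   # the internal-style metric

print(f"proactive rate: {proactive_rate:.0%}")          # 98%
print(f"share of all hate speech: {share_of_all:.0%}")  # 5%
```

Both numbers are computed from the same scenario; the headline figure only looks impressive because most hate speech is never removed and so never enters its denominator.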

See also: Facebook: Will recruit 10.000 in the EU for metaverse

Company spokesman Andy Stone told the WSJ that the data about removed posts did not include other actions taken by the platform, such as reducing the reach of suspicious content. In this context, he said, the frequency of policy-violating content is declining, and that is the metric the company judges itself by.

Facebook has said it has become better at finding hate speech on its platform, claiming it removed 15 times more content in 2020 than in 2017. However, that number hides some key details, as the statement is too general.

Hard to mention

Today, Facebook's artificial intelligence (AI) tools may catch more content before users report it partly because, two years ago, Facebook deliberately made it harder for users to file reports. One side effect was that the AI tools could now catch more posts before they were ever reported by users.

See also: Facebook: Photos of its original VR hardware

Artificial Intelligence (AI) Confusion

Facebook's internal documents reveal how far its artificial intelligence tools are from reliably identifying content that human moderators found easy to spot. Cockfighting, for example, was misidentified by the AI as a car accident. In another case, videos broadcast live by mass shooters were labeled by the artificial intelligence tools as paintball games or footage of a car wash.

Internal reports indicate that Facebook users would prefer the company to take a more aggressive approach to enforcing its policies on hate speech and on violence and incitement, even if this means removing more innocent posts. In a survey, users around the world said inaccurate content removal was among their least pressing concerns, and that hate speech and violence should be the company's top priority. In the USA, some users considered inaccurate content removal a point of debate, but hate speech and violence were still regarded as the top problem.

Source of information: arstechnica.com

Teo Eh, https://www.secnews.gr
