Facebook CEO Mark Zuckerberg raised chuckles in the Twittersphere last week when he said that it’s easier for AI to detect a nipple than to determine whether speech is hateful. He was referring to the company’s ongoing struggle to keep coronavirus hoaxes, scam medical products and election-related misinformation off its site.
How does Facebook AI block content?
Facebook has developed a variety of artificial intelligence software tools to spot banned content on the site. These include image recognition, text analysis and language detection, and a system designed to flag posts that violate its hate speech policy.
The company has also used AI to spot images that might violate its privacy rules and to block coronavirus product ads. It has also created a system that analyzes the contents of photos to help visually impaired users access their content more easily.
Facebook’s automated AI tools detect problematic content mainly in seven areas: nudity, graphic violence, terrorist content, hate speech, spam, fake accounts and suicide prevention. For posts that feature nudity, for example, the platform uses computer vision to identify objects in the image and then compares them to a database of known inappropriate content. If the software finds a match that is against the rules, the post is either taken down or placed behind a warning screen.
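The nudity-screening step described above is, at its core, a lookup against a database of known violating content. Here is a minimal sketch of that idea using exact cryptographic hashes; the function names and the seed database are hypothetical, and real systems (including Facebook’s open-sourced PDQ algorithm) use perceptual hashes so that resized or re-encoded copies still match:

```python
import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    # Exact-match fingerprint. Production systems use perceptual
    # hashes (e.g. Facebook's open-sourced PDQ) rather than SHA-256,
    # so near-identical copies of an image also match.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database seeded with fingerprints of previously
# removed images.
KNOWN_BANNED = {image_fingerprint(b"example-banned-image-bytes")}

def moderate_image(image_bytes: bytes) -> str:
    """Return 'remove' for known violating images, else 'allow'
    (in practice: pass on to classifiers or human review)."""
    if image_fingerprint(image_bytes) in KNOWN_BANNED:
        return "remove"
    return "allow"
```

The design choice is the trade-off the article describes: an exact hash never flags legitimate content by mistake but misses any altered copy, which is why platforms move to fuzzier matching at the cost of occasional false positives.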
The company says its automated systems are five times more effective at detecting content that violates its rules than they were just one year ago. However, it is important to note that this technological progress doesn’t address the deeper problems that plague the company.
Why is Facebook AI blocking content?
Since last year, Facebook’s use of AI to block content has been under scrutiny from both regulators and its own employees. Specifically, it’s been accused of being used as private censorship that could lead to over-censorship and harm users’ ability to express themselves freely.
The company says that most of the content it removes for violating its community standards is discovered by AI and not by humans. It also claims that it has improved its machine learning systems, enabling them to detect more types of violations. For example, it said its proactive detection systems removed 22.5 million pieces of hate speech on Facebook and Instagram in the second quarter, including posts that praised terrorist acts or pushed fake preventative measures for COVID-19.
However, a series of internal Facebook documents published in March by The Wall Street Journal suggest that the company’s artificial intelligence still has a long way to go to catch terrorist content. The documents show that its AI has failed to consistently identify first-person shooting videos, racist rants and even, in one case, the difference between cockfighting and car crashes. It’s a sign that, despite Zuckerberg’s public pledges and the many media exposés about its flaws, the company still doesn’t know how to reliably screen for offensive content.
Can users bypass Facebook AI blocks?
Facebook has begun to rely on artificial intelligence systems to help determine which content will be brought before its full-time human moderators for a decision on whether it should remain up or removed. The company says its AI systems are now five times better at catching content that violates its terms of service than they were last year.
But even with thousands of human moderators and powerful algorithms at its disposal, objectionable content still slips through the cracks. As a result, some users are turning to cloaking software to hide links that lead to pornography, fake miracle fitness supplements and get-rich-quick scams from Facebook’s moderation systems.
This bypassing technique involves adding text to images or blurring their explicit details, making the content harder for automated moderation systems to detect. It can also involve combining photos to create new images that don’t appear to violate the terms of service.
The systems also err in the other direction, blocking legitimate content. For small businesses that rely on Facebook to reach customers, these erroneous bans can cost tens of thousands of dollars in lost revenue. Many advertisers interviewed by Business Insider said they had experienced slow and opaque customer support from the social media giant when trying to get erroneous blocks on content or accounts reversed.
What types of content are blocked?
Facebook employees have sounded alarms for years about the limitations of the company’s tools to root out or block speech that violates the social network’s policies. The cache of internal documents seen by Reuters shows how the company has struggled with algorithms that don’t recognize different languages or comprehend dialogue, as well as other problems.
For example, when Meta’s science-focused AI language model Galactica was asked about scientific topics, it offered responses that contradicted each other or were so vague that they read as false. The model was trained on 48 million scientific papers and yet still could not answer reliably about the subject matter.
The company also struggles to find workers who have the language skills and knowledge of local events needed to identify objectionable content posted by users in many countries, according to the documents. The company has also struggled with software that identifies specific types of policy violations, like hate speech or spam.
The cache of internal documents showed how the company has been trying to improve these systems. It has invested in software that identifies broader categories of misinformation, such as COVID-19-related disinformation, and is working on “similarity matching,” which allows an automated system to recognize posts that are similar but not identical to a previously removed one.
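“Similarity matching” as described above amounts to near-duplicate detection: recognizing posts that resemble, but are not identical to, something already removed. A minimal sketch using word-shingle Jaccard similarity (the helper names and the 0.4 threshold are illustrative; production systems typically compare learned embeddings at scale):

```python
def shingles(text: str, n: int = 3) -> set[str]:
    # Overlapping n-word windows; two near-duplicate posts share
    # most of their shingles even when a few words change.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def matches_removed_post(candidate: str, removed_posts: list[str],
                         threshold: float = 0.4) -> bool:
    # Flag a post if it is sufficiently similar to any previously
    # removed one. The threshold trades recall against the risk of
    # blocking lookalike posts that are actually benign.
    return any(jaccard(candidate, old) >= threshold
               for old in removed_posts)
```

Lowering the threshold catches more reworded copies of a hoax, but also raises the odds of removing an innocent post that merely quotes or warns about the original.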
How effective is Facebook AI blocking?
Although Facebook has touted the effectiveness of its AI for content moderation, it remains vulnerable to mistakes. It’s a difficult needle to thread: Facebook wants to catch and stop every fake account, but it also needs to avoid blocking legitimate users in error.
The company has made big strides in catching reworded copies of violating posts by using similarity matching, but the technique also produces false positives: posts removed in error. For example, a post warning people to beware of fake medical products could be blocked by Facebook’s AI because it closely resembles a previously removed scam post, even though its language and imagery differ.
But opportunistic content creators are constantly finding new ways to trick or confuse the algorithm. In addition to reusing images and text, they’re using covert tactics such as changing the way they spell certain words or phrases.
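One common counter to these spelling tricks is to normalize text before matching it against banned terms. A rough sketch of the idea; the substitution map and the term list are illustrative only, not Facebook’s actual rules:

```python
import re

# Illustrative map of common character substitutions ("leetspeak").
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "$": "s", "@": "a"})

def normalize(text: str) -> str:
    # Lowercase, undo character swaps, then drop everything that is
    # not a letter, so "F r 3 e" collapses to "free".
    return re.sub(r"[^a-z]", "", text.lower().translate(LEET_MAP))

def contains_banned(text: str, banned_terms: list[str]) -> bool:
    squashed = normalize(text)
    return any(term in squashed for term in banned_terms)
```

Squashing whitespace makes the matcher aggressive: a banned term can now appear accidentally across word boundaries, which is one reason normalization like this improves recall while adding false positives of its own.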
As a result, the company says it’s now more accurate than ever, but it’s still a long way from being able to handle the volume of coronavirus hoaxes, scam medical products and election misinformation that will flood the site in coming months. And many advertisers tell Business Insider they’ve been caught in the company’s dragnet despite not violating any policies.
Who developed Facebook AI blocking?
Many of the same companies that rely on AI-based image recognition for security and marketing purposes also employ it to screen their websites, messages, and content. They face the same dilemma as Facebook: if they fail to detect policy violations, users will be hurt, but if their criteria are too broad, legitimate content may wind up being blocked without a clear reason.
Facebook has been under pressure to better address Covid-19 misinformation and election-related disinformation, and CEO Mark Zuckerberg promised lawmakers in 2018 that the company would rely more heavily on AI moderation. However, critics say the company has yet to prove its AI is up to the task of filtering out malicious material.
In the past, Facebook has relied on low-paid contract workers to review suspicious posts and accounts, but those employees are often subjected to stress and mental health issues as they sift through disturbing images, videos, and conversations. In addition, the technology is difficult to scale and has been susceptible to glitches. Many small and medium businesses claim to have been caught up in a Facebook dragnet of blocked content, losing tens of thousands of dollars in revenue as they fight to get erroneous blocks reversed.
What are the consequences of blocked content?
In some cases, Facebook will block content that violates the social network’s policies from being viewed at all. This can happen if the post is spam, inappropriate or illegal. When it does, users attempting to view the post see a message on Facebook that says “No content available” or something similar.
While Facebook has made strides in its AI systems, many experts say it is still not up to the task of detecting political propaganda and other harmful content. Last week, Facebook CEO Mark Zuckerberg faced questions from Congress about his company’s inability to prevent the spread of hate speech, terrorism propaganda and other false information on its site.
For example, Facebook’s AI system is unable to detect hate speech in some languages. According to documents obtained by Reuters, in 2020 the company did not have screening algorithms, known as classifiers, for Burmese, the main language of Myanmar, or for Ethiopian languages such as Oromo and Amharic. These gaps can allow abusive posts to go unnoticed and lead to real-world harm.
In addition, AI systems can also make mistakes in identifying what types of content are allowed or prohibited on the site. For instance, in one case, an ad for onions was blocked by Facebook because the AI system mistakenly believed it violated its policy against nudity.