Misinformation and questionable content capitalise on people’s insecurities and on indiscriminate access to social media. In India, this has led to incidents of lynching and communal violence, and it has also influenced elections.
Unlike with traditional media, the original source of questionable content on web-based platforms is often unknown, which makes it harder to trace perpetrators and take appropriate action against them.
What is Questionable Content AI
Questionable Content AI is a set of tools that helps businesses identify and remove potentially offensive material from their online content. It can save companies a significant amount of time and money by automating the process of screening and editing videos for inappropriate content. At TrackIt, we recently helped a media and entertainment client implement an AI/ML pipeline that automates this process and reduces the number of person-hours they spend editing their content for distribution.
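The details of that client pipeline are not public, but the general pattern is straightforward to sketch. The snippet below is a minimal, hypothetical illustration that scores a single extracted video frame with AWS Rekognition’s detect_moderation_labels API; the confidence threshold and the frame-by-frame approach are assumptions for the example, not the client’s actual implementation.

```python
# Hypothetical sketch: flag a video frame using AWS Rekognition's
# image moderation API. Threshold and workflow are illustrative only.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def flag_frame(image_bytes: bytes, min_confidence: float = 80.0) -> list:
    """Return the moderation labels Rekognition assigns to one frame."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response["ModerationLabels"]]

with open("frame_0001.jpg", "rb") as f:
    labels = flag_frame(f.read())
if labels:
    print("Needs review:", labels)  # e.g. ['Explicit Nudity', 'Violence']
```

Frames that return any label would typically be queued for a human editor rather than cut automatically, keeping the final judgement with a person.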
While these tools are useful, they should be used with caution because they can be subject to bias and errors. They can also raise issues around intellectual property, factual accuracy, and disinformation. Additionally, they can reinforce and expand echo chambers, which can lead to a skewed worldview.
It is also important to note that generative AI tends to create content that is biased towards white, English-speaking cis men. As a result, it is essential to consider the limitations of these tools and to incorporate diversity and inclusivity into what they create. This helps ensure that the content these tools produce is less biased and more accurate.
How does Questionable Content AI work
Questionable Content AI is a machine learning model that uses neural networks to identify inappropriate content. It can save companies a significant amount of time by automating the identification of questionable content in post-production editing workflows, and it can cut costs by reducing the need for human labor to manually tag and catalogue graphic or explicit examples. However, the system has its limitations. It can be prone to bias, and it must be trained on a large sample of data to minimize this risk. Furthermore, there are often “hidden” human costs associated with building an AI-training framework, including the salaries of the people who work long hours labelling and cataloguing questionable content examples.
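To make the classification step concrete, here is a minimal sketch that scores text with the open-source unitary/toxic-bert model from Hugging Face. The model choice is a stand-in for the example; a production system would use its own trained model and thresholds.

```python
# Illustrative sketch: score text for toxicity with an open-source model.
# "unitary/toxic-bert" is an assumption for the example, not a product detail.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def toxicity_score(text: str) -> float:
    """Return the model's confidence that a piece of text is toxic."""
    result = classifier(text)[0]  # e.g. {"label": "toxic", "score": 0.97}
    return result["score"]

print(toxicity_score("some user-submitted caption to screen"))
```

The bias and data-volume caveats above apply directly here: a model like this is only as balanced as the examples it was trained on.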
Despite these limitations, Questionable Content AI is an important tool for content creators to consider. Content creators must, however, be aware of the data their AI tools are fed and ensure that their content is authentic, diverse, and inclusive.
This is especially important when using AI to create content for a broader audience. Much of today’s online content is written by English-speaking cis white men, which leads to a skewed worldview. AI can help create new content that is more diverse and inclusive, which could improve user engagement and brand reputation.
Who created Questionable Content AI
Questionable Content is a slice-of-life webcomic created by Jeph Jacques. It centers on romance, indie rock, little robots, and the problems people have, and it is set in a future where intelligent AIs (many of whom appear human) are a regular part of society. It is a highly popular webcomic with an active fanbase, though the fandom is not formally organised and fanart is limited.
The use of AI tools to generate content can lead to a number of issues, including copyright concerns, data privacy risks, bias, and a lack of diversity. It is essential for businesses to understand how these tools are built and how they can be used responsibly. Yair Adato, co-founder and CEO of the generative content startup Bria, has said that feeding AI a healthy diet of knowledge and inputs from the beginning can help solve these issues.
What are the benefits of Questionable Content AI
Although AI tools are not foolproof, they can help reduce the time spent on content moderation. By identifying questionable content and flagging it for human review, businesses can save time and resources while still meeting distribution requirements.
Automating the process also shields employees from exposure to potentially offensive material such as hate speech or spam. However, it’s important to remember that AI is not immune to bias. To ensure that AI systems are ethical and don’t generate inappropriate or offensive content, it’s important to establish clear guidelines for the use of these technologies.
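One simple way to picture this human-in-the-loop flagging is as a triage rule over a moderation model’s confidence score. The function below is a sketch; both threshold values are invented for the illustration.

```python
# Illustrative triage rule for human-in-the-loop moderation.
# The thresholds are hypothetical, not recommended production values.
def route_content(score: float,
                  auto_remove: float = 0.95,
                  needs_review: float = 0.60) -> str:
    """Route an item based on a moderation model's confidence score."""
    if score >= auto_remove:
        return "remove"        # high confidence: block automatically
    if score >= needs_review:
        return "human_review"  # uncertain: queue for a moderator
    return "publish"           # low risk: let it through

print(route_content(0.72))  # -> "human_review"
```

Keeping a middle band that routes to people protects both the audience (from missed content) and employees (from having to review the most clear-cut material).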
Yair Adato, co-founder and CEO of Bria, a generative image and video startup, recommends feeding AI a “healthy diet” from the start to minimize copyright concerns, data privacy issues, and questions about bias and brand safety. This matters even more as the industry moves towards a model in which AI generates its own content, a shift likely to lead to a proliferation of similar content, the reinforcement of echo chambers, and the generation of racist and non-inclusive images. By carefully evaluating and editing the output of AI, it can be used to create valuable content that resonates with your audience.
Can Questionable Content AI detect fake news
Questionable content is defined as politically or ideologically motivated online disinformation, fake news, and hate speech. It has the potential to affect both individuals and society in several ways, including changing consumer attitudes, fostering distrust and scepticism towards the electoral process, and blocking educated political decision-making (Brown 2018).
People with limited education or limited awareness of social media are particularly vulnerable to questionable content, which capitalises on their sense of insecurity and their desire for validation of existing prejudices. The effect is exacerbated when entertainment or sports celebrities, media personnel, or politicians share the content. For example, COVID-19 misinformation that circulated in India included a video in which an elderly fruit seller from a minority faith was accused of sprinkling urine on bananas he was selling; it spread rapidly after popular celebrities and media personnel shared it.
However, regulating such content is a complex issue that raises concerns about the limits and extent of government control over information systems and freedom of speech. As a result, many countries have been reluctant to introduce legislation that would punish the creators of questionable content (The Law Library of Congress 2019). Moreover, repressive measures are likely to alienate the population and breed distrust in the authorities.
Is Questionable Content AI reliable
Questionable Content AI is a new type of artificial intelligence that helps companies identify questionable content and prevent it from being distributed. The technology can scan words, images and videos to detect any harmful or inappropriate material. It can also help brands edit their content before it goes live to ensure that it is safe for their audiences.
Questionable content can result from online disinformation, fake news, hate speech, foreign encroachment in domestic affairs, or misconstrued satire (Shin et al. 2018). These strategies can harm both individuals and society: they can change consumer attitudes, create general scepticism towards electoral processes, block educated political decision-making, trigger communal riots and violence, alter the political landscape, marginalise certain classes or communities, and damage the economy (Roozenbeek and van der Linden 2019).
However, it is important to remember that no matter how advanced an AI tool is, it will only be as good as the data fed into it. Hence, it is critical to use only responsibly sourced content when training an AI model. A business should also consider its legal liability and accountability when deploying such a tool.
What industries can use Questionable Content AI
Businesses that need to identify questionable content can use AI-enabled automation to screen images and video. For example, Microsoft’s PhotoDNA uses a robust hashing algorithm to create a unique digital fingerprint for each file, which can then be used to verify whether newly uploaded content matches the fingerprint of previously identified files.
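PhotoDNA itself is proprietary, but the fingerprint-and-match idea can be illustrated with the open-source imagehash library. The hash function and distance threshold below are stand-ins for the example, not PhotoDNA’s actual algorithm.

```python
# Illustrative perceptual-hash matching; NOT PhotoDNA's actual algorithm.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that survives minor edits such as resizing."""
    return imagehash.phash(Image.open(path))

def matches_known_content(upload_path: str,
                          known_hashes: set,
                          max_distance: int = 5) -> bool:
    """Flag an upload whose hash is near a known file's hash (Hamming distance)."""
    h = fingerprint(upload_path)
    return any(h - known <= max_distance for known in known_hashes)

known = {fingerprint("previously_flagged.jpg")}
print(matches_known_content("new_upload.jpg", known))
```

Because perceptual hashes tolerate small modifications, re-encoded or slightly cropped copies of known material can still be matched.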
Other ML-based image and video detection methods can help companies identify manipulated content such as deepfakes, for instance classifiers trained to spot the artifacts left behind by Generative Adversarial Networks. These videos can cause social dissonance and spread misinformation, and they may violate privacy and dignity rights.
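As a rough sketch of what such a detector can look like, the snippet below builds a frame-level real/fake classifier by fine-tuning a pretrained backbone. The architecture choice is an assumption for the example; production deepfake detectors are considerably more elaborate.

```python
# Minimal sketch of a frame-level deepfake classifier (assumed architecture).
import torch
import torch.nn as nn
from torchvision import models

def build_detector() -> nn.Module:
    """Binary classifier over video frames: class 0 = real, class 1 = fake."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-class head
    return model

detector = build_detector().eval()
frame = torch.randn(1, 3, 224, 224)     # one preprocessed frame (dummy data)
probs = detector(frame).softmax(dim=1)  # probabilities for [real, fake]
print(probs)
```

In practice such a model would be fine-tuned on labelled real and manipulated frames, which loops back to the labelling costs discussed earlier.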
Using ML-enabled automation for media compliance saves time and money by flagging questionable content for review. However, it’s important to work with genuine data experts who understand how to handle sensitive information. These experts can annotate training data with accurate labels, ensuring that the final models are fit for the task they are designed to perform.