
AI Rights and Questionable Content

The webcomic Questionable Content is a slice-of-life comic about romance, indie rock and little robots. It has a large fanbase and is often the subject of memes.

Tech giants' missteps in moderating online content have prompted politicians in Europe and the United States to push them to take more responsibility for illegal, sexually explicit, and false material.


1. Can AI infringe upon content rights

The emergence of generative AI tools able to create images and music in the style of existing creators has raised interesting questions about copyright law. Specifically, it’s unclear who is responsible for infringement when an AI tool generates something that infringes upon the copyright of others.

This is partly because generative AI software is trained on a huge amount of potentially copyrighted materials without the permission of the original content owners. However, it’s not just copyright that could be infringed upon by these AI tools. Other types of IP rights such as design patents, trademarks and database rights are also at risk.

Although generative AI tools can be used by all kinds of creators (not just artists and writers), their training data skews heavily toward content written by English-speaking cis white men. This narrows the worldview of the software and encourages the production of homogeneous content and echo chambers.

Some generative AI developers are taking a cautious approach. One such tool is Scenario, which offers game developers and artists bespoke image generators that can be custom-trained using their own data. As such, it’s unlikely that their output would infringe any third-party copyrights (although it might irritate Nick Cave). This type of approach may help avoid any significant infringement risks.

2. How are AI rights protected

AI rights aren’t just about copyright; they also cover data privacy, fair dealing and notice and explanation. EPIC works with lawmakers, industry, and others to ensure that these important civil and human rights are included in any legislation addressing AI.

In terms of copyright, an AI system that uses copyrighted works could infringe them unless those works are licensed, out-of-copyright or subject to a specific exception. This is particularly relevant for generative AI systems that use works to create new work, such as computer music or art.

It is also important to consider the impact of cultural and political biases on AI systems. These can stem from the assumptions and prejudices of the humans who program them or of the people they interact with. For example, Microsoft's chatbot Tay began posting racist, sexist, and antisemitic messages within 24 hours of learning from interactions with its human audience.

There is a growing movement toward developing an AI Bill of Rights that would protect people from systems that are unsafe or ineffective. These rights include freedom from algorithmic discrimination, privacy in the collection and processing of data, notice and explanation of how algorithmic decisions are made, and the ability to opt out in favor of a human alternative.

3. What is the impact on content creators’ rights


As AI content creation becomes more popular, it has the potential to impact the rights of original creators. For example, some AI tools have been used to create sexually explicit deepfakes of real people without their consent. This has fueled a rise in image-based abuse and the piracy of adult performers' work, a serious issue that needs to be addressed.

AI tools can also be used to generate content that is racist and non-inclusive. This is a problem because it can prevent people from creating diverse and authentic content. It is important for creators to understand the data that is used by AI content tools and ensure they are not perpetuating biases.

Finally, automated copyright-detection tools can violate content creators' rights by falsely flagging their original work as infringing. This can lead to content being censored or removed from platforms entirely. Creators should document their ownership and publicly assert their rights to their work.

4. Are legal frameworks adapting to AI rights

Governments and regulatory bodies are beginning to work on legal frameworks that will govern the use of AI. These efforts are focused on establishing rules that will govern how AI systems can be used and ensuring that companies are held accountable for any harms caused by their systems.

Although AI regulation is politically divisive, these efforts are generally well-intentioned and are often designed to avoid imposing new burdens on companies or consumers. However, they may not go far enough to prevent companies from using AI for biased or discriminatory purposes. Civil rights organizations have urged federal agencies to make civil rights and equity a central consideration of their AI policy.

While federal initiatives like the NIST AI Risk Management Framework (RMF) are not binding regulation, they provide a strong foundation for future rulemaking. Similarly, the Blueprint for an AI Bill of Rights offers a set of guidelines that could be used to build regulatory frameworks. However, it is important to note that the Blueprint addresses only government use of AI, not private-sector or commercial applications. It also excludes law enforcement and national security agencies from its precautionary guidance, a critical flaw in the framework.

5. Can AI-generated content violate copyrights

In general, copyrights protect original works of authorship created by human creators. This is the principle that governs the creation of paintings, books, and songs. However, the concept of AI-generated content is a new one that raises several questions about intellectual property laws and the role of humans in the creative process.

While some companies are using AI-generated images to promote their own products, others have criticized the practice as unfair competition that infringes on other creators' copyrights. These concerns may be justified, considering that AI models are often trained on images that are already protected by copyright. Moreover, the underlying models may have been developed by other businesses, such as OpenAI, the developer of the image generator DALL·E.

While it is difficult to know how the law will ultimately play out in this area, businesses can take steps to mitigate this risk by evaluating the terms of use and content policy of any AI platforms they utilize. In addition, they should also carefully examine any third-party terms that are referenced or linked to the AI platform, as these could contain restrictions on commercial use. This way, users can be sure that any AI-generated content is appropriate for their purposes and will not infringe upon the rights of other creators.

6. What are the limitations of AI rights

The White House Blueprint for an AI Bill of Rights has some impressive goals, but the scope of its principles is limited. It covers hiring, credit evaluation, commercial surveillance, property valuation, and education technology, but leaves out critical topics such as law enforcement AI, generative art, and the use of data to discriminate against workers.

While these areas need more attention, the principles do help to articulate some key points. The first is that companies that design, program, and use AI systems must ensure they comply with human rights standards. The second is that algorithms should be transparent, with clear explanations of their decisions, so that people can understand the reasoning behind those decisions and hold the systems' operators accountable.

The third principle is that AI should be designed to protect the most vulnerable. This includes ensuring that AI is not used for discriminatory purposes or to violate freedom of expression. Finally, the last principle encourages a system that allows people who have been harmed by an AI system to take legal action and seek compensation. This is a crucial step in ensuring that the developers of AI are held responsible for their products’ effects on people.

7. How can AI rights be enforced

While the Blueprint for an AI Bill of Rights is a positive step, it remains to be seen how it will be enforced. Many agencies are working to address the issues it raises, including the EEOC, which has published technical assistance on the impact of AI on employment decisions, and the DOJ, which is investigating algorithmic discrimination. Additionally, the Department of Education has committed to ensuring that AI is used responsibly in classrooms, and HHS is preparing a rule that would prohibit algorithmic discrimination in healthcare decision-making.


Finally, companies developing AI should be required to evaluate their products for human rights risks and impacts. These evaluations should include not only the underlying data, but also how AI programs leverage it to mitigate the risk of discriminatory outcomes. Furthermore, the federal government should consider ways to ensure that the AI industry is held accountable for its products and that the government has the power to impose penalties on companies that fail to comply with laws and regulations.

Questionable Content is a slice-of-life webcomic by Jeph Jacques about romance, indie rock, and little robots. It is set in a near-future universe where sentient AIs are commonplace and live alongside humans.