Newsletter

Jul 25, 2023

A False Start for AI Safety Enforcement

True AI Safety Requires Enforceable Legislation

This post was featured in the Reality Defender Newsletter. To receive news, updates, and more on deepfakes and Generative AI in your inbox, subscribe to the Reality Defender Newsletter today.

By now, you've likely heard that top AI executives visited the White House last week and subsequently drafted an agreement on voluntarily handling various pressing issues related to artificial intelligence. In the draft shared by the Biden administration and executives at major players in the AI space, the companies agreed to:

  • Implement technical means for identifying AI-generated content (with a strong emphasis on watermarking).
  • Disclose AI systems’ strengths, weaknesses, and suitable uses.
  • Conduct rigorous security tests on AI systems before release.
  • Collaborate and share AI risk management practices industry-wide.
  • Boost cybersecurity and safeguard against insider threats.
  • Encourage external discovery and reporting of AI system vulnerabilities.
  • Prioritize societal risk research in AI, with an emphasis on bias and discrimination.
  • Use advanced AI systems to address major societal challenges.

As proponents of the safe and ethical use and implementation of AI, the Reality Defender team welcomes the discussion of such topics in the mainstream, especially on such a truly grand scale. Unfortunately, several things hold the drafted agreement back from being truly impactful, particularly when it comes to preventing harmful deepfakes and AI-generated content.

First, the agreement is a promise, not a legally binding law. Its commitments are voluntary, with little to no enforcement, which potentially makes them non-starters. At a time when people are concerned about AI's encroachment on the job market, its presence in all forms of content, and its tenuous grasp on truth (or its usability in spreading disinformation), it is key to create actual legislation and turn these promises into enforceable laws.

Next, any mention of identifying AI-generated content in the agreement relies heavily on watermarking, which requires implementation in the tools created by these companies as well as their partners. Watermarking AI-generated content is great in theory, but its premise is heavily flawed: only a small percentage of models creating AI-generated content will actually embed watermarks, while the vast majority will forgo them. Models and creation tools without watermarking allow bad actors to create deepfakes and AI-generated content that easily skirt past detection systems relying solely on watermark checks, as the sketch below illustrates. (Reality Defender Co-Founder and CEO Ben Colman wrote extensively about deepfake watermarking earlier this year.)
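To make the flaw concrete, here is a minimal Python sketch of a detector that relies solely on watermark checks. Everything in it is hypothetical and purely illustrative (the tag, function names, and byte strings are invented, not any vendor's actual method): content from a model that never embeds the mark is waved through as if it were real.

    # Illustrative sketch only; all names and the "watermark" are hypothetical.
    WATERMARK_TAG = b"AI-GENERATED"  # stand-in for a real embedded signal

    def has_watermark(content: bytes) -> bool:
        # Toy check: look for the embedded tag in the raw bytes.
        return WATERMARK_TAG in content

    def flagged_as_ai(content: bytes) -> bool:
        # A watermark-only detector flags content only when the mark is present.
        return has_watermark(content)

    # A compliant model embeds the mark, so the detector catches its output.
    compliant_output = b"<image bytes>" + WATERMARK_TAG
    assert flagged_as_ai(compliant_output)

    # A non-compliant model simply omits the mark, and the very same kind of
    # AI-generated content sails past the detector undetected.
    noncompliant_output = b"<image bytes>"
    assert not flagged_as_ai(noncompliant_output)

The point of the sketch is not the trivial string check, but the decision logic: any detector whose verdict is gated entirely on finding a mark treats "no mark" as "not AI," which is exactly the gap non-participating tools exploit.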

At the same time, the tens of thousands of companies and tools not big enough to take part in this agreement have made no such promises. Nothing stops them from creating biased systems, forgoing safety checks, and so on. With no legislation or enforceable industry-wide measures in place, companies and tools that do the opposite of what the agreement promises will continue to appear at a regular clip.

Reality Defender exists to stop dangerous deepfakes and AI-generated content. We believe in, support, and champion any measures taken to write the protection of all people from the dangerous uses and effects of AI into legislation. We only hope to see such legislation passed and enforced long before we witness far more damaging effects from the unchecked use, development, and deployment of AI-powered tools and models.

More News

  • ChatGPT is apparently good enough to pass the first year at Harvard with a 3.34 GPA. (Slow Boring)
  • Some actors are actually embracing deepfakes. (BBC)
  • Deepfake porn is improving, with companies like Unstable Diffusion driving its progress despite facing ethical and social pushback. (TechCrunch)
  • Actor Brian Cox (Succession), when talking about the SAG-AFTRA strike, spoke candidly about the impacts of AI on actors. (Variety)
  • What impacts do AI-powered girlfriends have on the human psyche? (The Guardian)
  • Advertisers are cautiously turning to AI, despite the fact that it could upend their industry. (The New York Times)
  • Get ready for the onslaught of AI-written articles. (Note: This newsletter is entirely written by a human.) (Vox)
  • Following the above theme, Google is working with publishers by lending a tool that helps write articles. (The New York Times)
  • Pretty much anyone can now use Meta’s new LLaMA 2 model. (The Verge)
  • OpenAI allegedly held back GPT-4 image capabilities due to privacy concerns. (Ars Technica)
  • Apple is (supposedly) working on their own AI tool. (Bloomberg)
  • Thousands of authors are rallying against the use of their works to train LLMs without their consent. (Authors Guild)
  • The AI-generated influencers are here, armed with thirst traps. (Futurism)
  • James Cameron, father of Skynet, warned us back in 1984 about AI. (Variety)

Thank you for reading the Reality Defender Newsletter. If you have any questions about Reality Defender, or if you would like to see anything in future issues, please reach out to us here.
