In a bid to tackle growing concerns about artificial intelligence (AI), major developments are underway to scrutinize AI models for bias and copyright issues. In 2025, red teaming events will assess AI models for Islamophobia and online harassment against women, following the increasing public anxiety over AI's ethical implications. This initiative comes amidst a backdrop of legal battles and declining public confidence in AI technologies.
Red teaming, a strategic practice employed by major AI companies to identify flaws in their models, is gaining traction but remains largely unavailable for public use. In 2023, a significant red teaming exercise involving 2,200 participants received backing from the White House, highlighting the importance of addressing AI safety concerns. However, public skepticism persists. A Pew Research study shows that more than half of Americans express more concern than excitement about AI, a sentiment echoed globally according to the World Risk Poll.
The year 2024 marked a turning point as public awareness of AI's impact surged. Notably, a class action lawsuit was filed in California against Nvidia in March 2024, alleging the use of copyrighted work to train its AI platform NeMo. In another high-profile incident, actress Scarlett Johansson sent a legal notice to OpenAI over concerns that its ChatGPT voice closely resembled hers. Additionally, The New York Times initiated legal proceedings against OpenAI and Microsoft in December 2023 for copyright infringement.
Amidst these challenges, there is a burgeoning movement advocating for a "right to repair" for AI. This would empower users to conduct diagnostics, report anomalies, and ensure timely resolution by companies. This initiative aims to restore public trust and address the growing trend of individuals and organizations rejecting unsolicited AI interventions in their daily lives.
Organizations such as Humane Intelligence are spearheading efforts to develop red teaming exercises that scrutinize AI for discrimination and bias. By collaborating with nontechnical experts, governments, and civil society organizations, they aim to create more accountable AI systems. Furthermore, the law firm DLA Piper employs red teaming with legal professionals to ensure AI compliance with legal frameworks.
Leave a Reply