As thousands of hackers gather at Def Con 31 in Las Vegas, the White House is backing an event designed to expose flaws in artificial intelligence (AI) models. The event focuses on large language models, the technology behind chatbots such as OpenAI’s ChatGPT and Google’s Bard. Organized by Dr. Rumman Chowdhury, chief executive of Humane Intelligence, the competition aims to surface problems in AI systems and lay the groundwork for independent evaluations.

The participating companies, including Meta, Google, OpenAI, and Microsoft, acknowledge that their models have weaknesses, and the event gives hackers a sanctioned opportunity to find them. Over two and a half days, some 3,000 hackers will work individually to uncover vulnerabilities in eight AI models. Participants will not know which company’s model they are testing; successful challenges earn points, and the hacker with the highest overall total will win a powerful graphics processing unit.

The event will also probe issues such as model consistency and behavior across different languages. Dr. Seraphina Goldfarb-Tarrant, head of AI safety at Cohere, believes that raising awareness of model hallucinations and inconsistencies will be valuable. A language model may give different answers depending on the language in which a question is asked, which is worrying because safety mechanisms that work in one language may fail in another. However robust the models appear, robustness does not guarantee the absence of vulnerabilities, and the event is a chance to find and address them.

The White House supports the effort because it stands to provide critical information to researchers and the public while giving AI companies the chance to fix any issues that are uncovered. The pace of AI development has raised concerns about the spread of disinformation, particularly ahead of the upcoming US presidential election. Leading AI companies have adopted voluntary safeguards, but legal safeguards are still being negotiated. Dr. Chowdhury stresses the importance of addressing AI’s current harms and biases rather than focusing on existential threats, and by exposing flaws in the models the event seeks to prompt a response from the tech companies. Dr. Goldfarb-Tarrant, for her part, urges governments to regulate AI now to curb the spread of misinformation.

Ultimately, the event aims to reduce bias and discrimination in AI models, paving the way for more capable and more ethical systems. The companies involved will have access to the data gathered during the competition and can respond accordingly, while independent researchers may request access to it. The results will be published in February, shedding light on the current state of AI and the steps needed to improve its integrity.