As thousands of hackers gather at Def Con 31 in Las Vegas, the White House is supporting an event aimed at exposing potential flaws in artificial intelligence (AI) models. The event will focus on large language models, particularly chatbots like OpenAI’s ChatGPT and Google’s Bard. Organized by Dr. Rumman Chowdhury, chief executive of Humane Intelligence, the competition aims to identify problems in AI systems and create independent evaluations. While the participating companies, including Meta, Google, OpenAI, and Microsoft, acknowledge the potential for issues, the event serves as an opportunity to challenge their models against hackers’ attempts to find flaws. Over two-and-a-half days, 3,000 hackers will work individually to uncover vulnerabilities in eight AI models. The hackers will not know which company’s model they are testing, and successful challenges will earn points. The highest overall total will win a powerful graphics processing unit. The event will also explore issues such as model consistency and challenges across different languages. Dr. Seraphina Goldfarb-Tarrant, head of AI safety at Cohere, believes that raising awareness of model hallucinations and inconsistencies will be useful. Language models may provide different answers depending on the language asked, which poses concerns in terms of safety mechanisms. The models’ robustness does not guarantee the absence of vulnerabilities, so this event presents an opportunity to find and address them. The White House supports the effort, as it aims to provide critical information to researchers and the public while enabling AI companies to fix any discovered issues. The pace of AI development has raised concerns about the spread of disinformation, particularly in relation to the upcoming US presidential election. Although voluntary safeguards have been implemented by leading AI companies, legal safeguards are still being negotiated. Dr. Chowdhury emphasizes the importance of addressing current AI problems and potential harms and biases rather than focusing on existential threats. By identifying flaws in the models, the event seeks to prompt a response from tech companies. Dr. Goldfarb-Tarrant urges governments to regulate AI now to prevent the spread of misinformation. Ultimately, this event aims to ensure that AI models are free from bias and discrimination, leading to more advanced and ethical AI systems in the future. The companies involved will have access to the gathered data and can respond accordingly, while independent researchers can request access to the data. The results will be published in February, shedding light on the current state of AI and the steps needed to improve its integrity.
Related Posts
The Impact of Cloned Rhesus Monkey in Medical Research
Cloning has been a topic of controversy and ethical debate for many years. The recent announcement of the successful cloning…
Investigation Reveals Missing Bolts in Boeing Door Incident
A recent report by the US National Transportation Safety Board has revealed that a door on a Boeing 737 Max…
US Takes Action Against Russian Cybersecurity Firm, Kaspersky
The US government recently announced plans to ban the sale of antivirus software made by Russian cybersecurity firm Kaspersky, citing…