Apple has joined a group of major tech companies in committing to follow the Biden administration's guidelines for the development of artificial intelligence (AI). The move comes as the White House seeks to address the potential risks posed by AI technology.
Apple is now part of a coalition of 15 companies, including Amazon, Google, Meta, Microsoft, and OpenAI, that have pledged to give the government access to their AI models' test results so that biases and security risks can be evaluated.
Under these guidelines, signatories such as Apple must hold their AI models to strict standards and testing. This includes subjecting the models to rigorous "red-team" exercises that probe their safety measures against potential adversarial attacks. The primary goal of these stress tests is to reduce the risks AI systems may pose to critical infrastructure and cybersecurity.
Companies that have signed the pledge also commit to developing AI models with user privacy in mind and following guidelines established by the Department of Commerce to prevent AI-enabled fraud and deception.
President Biden’s executive order has also tasked federal agencies with developing AI-related standards and guidelines.
The White House reported that agencies like the Commerce Department, the Department of Energy, and the Department of Defense have introduced new guidelines to prevent AI misuse, expanded AI testbeds, and addressed vulnerabilities in government networks related to AI.