In an unprecedented move, 18 countries, including the United States, the United Kingdom, and Australia, have banded together to establish global guidelines aimed at strengthening cybersecurity at artificial intelligence (AI) firms. The guidelines center on a secure-by-design approach to AI models.
According to a report published by Cointelegraph on Sunday, the 18 countries unveiled a comprehensive 20-page document on November 26 laying out the cybersecurity practices AI firms are advised to follow. It underscores the need to make security a principal concern in the rapidly evolving AI industry, rather than an afterthought.
Among the guidelines are recommendations for maintaining stringent control over AI model infrastructure, monitoring for model tampering, and training employees on cybersecurity risks. Notably, the guidelines do not address contested issues such as controls on the use of image-generating models and deepfakes, or how data is collected and used to train models.
U.S. Secretary of Homeland Security Alejandro Mayorkas highlighted the significance of cybersecurity in AI systems, describing AI as “the most consequential technology of our time.”
The “secure by design” guidelines are the latest in a series of government measures related to AI, following initiatives like the AI Safety Summit in London, the EU’s AI Act, and U.S. President Joe Biden’s executive order on AI safety and security.
AI firms such as OpenAI, Microsoft (NASDAQ: MSFT), Google (NASDAQ: GOOGL), Anthropic, and Scale AI have expressed support for these guidelines and contributed to their development.
Why It Matters: With reliance on AI technology growing across sectors, the need for robust cybersecurity measures has never been more critical. These guidelines aim to set a global standard for AI firms, helping ensure the security of AI systems and the data they handle. The backing of leading AI firms further underscores the guidelines' importance and timeliness.