Eighteen countries, including Canada, the U.S. and the U.K., today agreed on recommended guidelines for developers in their nations on the secure design, development, deployment, and operation of artificial intelligence (AI) systems.
It’s the latest in a series of voluntary guardrails that nations are urging their public and private sectors to follow for overseeing AI in the absence of legislation. Earlier this year, Ottawa and Washington announced similar guidelines for each of their countries.
The guidelines arrive as businesses build and adopt AI systems that can affect people's lives, with no national legislation yet in place.
The latest document, Guidelines for Secure AI System Development, is aimed primarily at providers of AI systems, whether they are using models hosted by their own organization or relying on external application programming interfaces (APIs).
“We urge all stakeholders (including data scientists, developers, managers, decision-makers, and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems,” says the document’s introduction.
The guidelines follow a ‘secure by default’ approach, and are aligned closely with practices defined in the U.K. National Cyber Security Centre’s secure development and deployment guidance, the U.S. National Institute of Standards and Technology’s Secure Software Development Framework, and secure by design principles published by the U.S. Cybersecurity and Infrastructure Security Agency and other international cyber agencies.
They prioritize:
— taking ownership of security outcomes for customers;
— embracing radical transparency and accountability;
— and building organizational structure and leadership so secure by design is a top business priority.
Briefly:
— for secure design of AI projects, the guidelines say IT and corporate leaders should understand risks and threat modelling, as well as the trade-offs to consider in system and model design;
— for secure development, organizations are advised to understand AI in the context of supply chain security, documentation, and asset and technical debt management;
— for secure deployment, there are recommendations covering the protection of infrastructure and models from compromise, threat, or loss, developing incident management processes, and responsible release;
— for secure operation and maintenance of AI systems, there are recommendations covering logging and monitoring, update management, and information sharing.
Other countries endorsing these guidelines are Australia, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, South Korea and Singapore.