IEC and ISO committee on AI expands programme of work

New work items in the areas of trustworthiness and computational methods launched

By Antoinette Price

IEC and ISO develop international standards for AI. SC 42, the joint committee of IEC and ISO tasked with this work, recently approved new standards projects in the areas of trustworthiness and computational methods.

“International standards can help accelerate the adoption of AI by simultaneously addressing the technical requirements of emerging applications and providing a mechanism to ensure that the expectations we have of the technology, such as trustworthiness, are met,” said Wael William Diab, Chair of SC 42. “The two newly approved projects complement and build on the broad portfolio of the committee’s work, which looks at the entire AI ecosystem.”

International standards build trust in AI systems

Increasingly, many industries, such as healthcare, manufacturing and transport, use innovative artificial intelligence (AI) technologies in their services and products. As more people interact with systems that deploy different AI technologies, it is important to ensure that these systems are trustworthy.

SC 42 has already identified certain characteristics of trustworthiness, such as accountability, bias, controllability, explainability, privacy, robustness, resilience, safety and security.

New standard for assessing robustness in AI systems

AI systems must be able to maintain their level of performance under any conditions; in other words, they must remain robust.

Work has begun on a new standard – ISO/IEC 24029-2, Artificial intelligence (AI) – Assessment of the robustness of neural networks – Part 2: Formal methods methodology – which will provide a methodology for using formal methods to assess the robustness properties of neural networks. It will focus on how to select and apply formal methods to ensure robustness properties.
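
To give a flavour of what such a methodology involves, below is a minimal sketch of one well-known formal method, interval bound propagation, which soundly over-approximates a network’s outputs over an entire neighbourhood of inputs and can therefore prove a local robustness property. The network, function names and values are illustrative assumptions, not material from the standard.

```python
# A sketch of interval bound propagation (IBP) for a tiny fully connected
# ReLU classifier. IBP is sound but incomplete: if it answers True, the
# robustness property provably holds; if False, the result is inconclusive.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the input box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread of the box
    return new_center - new_radius, new_center + new_radius

def certify_local_robustness(weights, biases, x, eps, label):
    """True if every input within L-infinity distance eps of x is
    provably assigned the class `label` by the network."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(weights) - 1:
            # ReLU is monotone, so it maps the bounds directly.
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    # The property holds if the worst-case score of `label` still beats
    # the best-case score of every other class.
    rivals = [hi[j] for j in range(len(hi)) if j != label]
    return bool(lo[label] > max(rivals))

# Hypothetical two-layer network and input, for illustration only.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [np.zeros(8), np.zeros(3)]
x = rng.normal(size=4)
hidden = np.maximum(weights[0] @ x + biases[0], 0.0)
label = int(np.argmax(weights[1] @ hidden + biases[1]))
print(certify_local_robustness(weights, biases, x, eps=0.01, label=label))
```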

“This project builds on ISO/IEC TR 24029-1, the first part of the series, which provided an overview of the topic, and complements the portfolio of AI trustworthiness deliverables that SC 42 is working on,” said Diab.

Who will benefit?

Engineers 

  • Must consider which properties are desired in a system and how to translate them into terms that formal methods can assess.
  • Must consider how to express the robustness properties to be checked using formal methods.
  • Must decide which formal method to use, depending on the kind of properties required in a system.

Industry and commerce

The standard will help increase trust in commercialized AI systems, reduce the time needed to validate neural networks and improve the overall quality of neural network performance. Providers will be able to claim stronger safety or performance properties for their products, which will in turn reduce barriers to adoption.

Governments and consumers

Formal validation of AI systems ensures higher-quality systems and gives end customers and users a better understanding of, and stronger assurances about, those systems. Explicit robustness properties of neural networks will help achieve this by improving the risk management of AI systems.

Academic and research bodies

Evolving neural network technologies will encourage the development of new techniques for the formal validation of neural networks within the proposed guidelines, which could in turn increase the use of neural networks in research programmes.

The new project will be placed in SC 42 Working Group 2, which focuses on AI trustworthiness.

New project for assessing classification performance for machine learning models

Work has begun on a new Technical Specification which will specify methodologies for measuring classification performance of machine learning (ML) models, systems, and algorithms.

For example, ML allows modelling procedures to be transferred easily across data sets, without necessarily considering possible covariates hidden in, and unique to, those data sets, or the potential contextual differences that influence the choice of metric. The relative ease with which modern ML can be implemented means it may absorb unintended biases or, more worryingly, biases that we cannot detect.

Moreover, it is essential to be able to measure and report performance objectively and consistently using quantitative measures. This makes it possible to control the behaviour of deployed machine learning through a quantitative evaluation of its performance.
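
As a concrete illustration, the sketch below derives a few widely used classification metrics directly from confusion-matrix counts, so that reported numbers are fully reproducible. The metric set and the labels are illustrative assumptions, not requirements drawn from the forthcoming specification.

```python
def binary_report(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall and F1 from scratch,
    starting from the four confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical labels from a deployed model, for illustration only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(binary_report(y_true, y_pred))
# {'accuracy': 0.75, 'precision': 0.75, 'recall': 0.75, 'f1': 0.75}
```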

The project is tasked with addressing these concerns and building on the current AI and ML horizontal standards being developed by SC 42 in the areas of foundational concepts, frameworks, terminology, trustworthiness, computational approaches, governance implications, ethics and societal concerns, use cases and applications.

Who will benefit?

AI and its associated machine learning algorithms are being deployed across a wide variety of application domains and sectors, with an ever-increasing set of stakeholders. Because this project will develop quantitative methods for evaluating ML performance, its applicability is expected to be wide, benefiting consumers, regulators, governments, organizations ranging from startups to large industry, academic researchers and other standards development organizations (SDOs) in this area.

The new project will be placed in SC 42 Working Group 5, which focuses on AI computational methods and techniques.

Find out more about AI standards.