Disruptive technologies and positive change

Artificial intelligence, ethics, digital transformation and international standards

By Michael A. Mullane

This is an edited excerpt from a new OCEANIS (open community for ethics in autonomous and intelligent systems) think piece about the role of standards in developing the dependability and trustworthiness of AI-related technologies. IEC is a founder member of OCEANIS and contributed to the publication. Download Role of standards in facilitating innovation while addressing ethics and value in autonomous and intelligent systems here.

Artificial intelligence (Image: Gerd Altmann, Pixabay)

Autonomous and intelligent systems (A/IS) are key to enabling digital transformation and are already changing many aspects of daily life. Related technologies are being applied to boost efficiency, solve problems and create scalable, individualized experiences. Finding answers to the many ethical dilemmas they raise, taking into account issues such as privacy, security and integrity for the widest possible benefit, is vital to the development of innovative A/IS technologies.

Digital transformation is about more than re-imagining business in the digital age to deliver greater value to customers. Ethical considerations must also shape the design process, in order to maximize public good while limiting the risk of inadvertent harm or unintended consequences. International standards developed by multiple stakeholders should ensure the right balance is struck between the desire to deploy A/IS rapidly and the need to study their ethical implications.

Above all, digital transformation is most successfully implemented where trust is achieved through transparency and the process is driven by ethical principles. AI relies on data sets, including personal information, and how these data are collected, managed and used is itself an ethical issue. All stakeholders must have a clear understanding of what organizations hope to achieve and how they will use the data. Explicit permission must be obtained before personal information is used, with an adequate understanding of the likely consequences.

A key issue is the bias and fairness of algorithmic decision-making systems (ADMS). While it may be relatively easy to detect and mitigate bias, it is often difficult to establish how an ADMS reaches its decisions, because more often than not the algorithms operate within a 'black box'. This is one of the most important challenges we face, as algorithms are increasingly at the centre of our daily lives, from search engines and online shopping to facial recognition systems and booking flights. There are ethical concerns, for example, about the use of data collected by facial recognition applications, including bias introduced during development.
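To make the idea of a bias check concrete, the sketch below computes one common fairness measure, the demographic parity difference between two groups of decisions. It is a minimal illustration only: the data, the group split and the choice of metric are assumptions for the example, not part of any standard or of the OCEANIS paper.

```python
# Minimal, illustrative sketch of one common fairness check:
# the demographic parity difference between two groups of decisions.
# All data below is invented for illustration.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'approved') decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in selection rates between group A and group B.
    A value near 0 suggests parity on this one metric; it says
    nothing about other notions of fairness."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

if __name__ == "__main__":
    # Hypothetical model outputs (1 = positive decision) for two groups
    group_a = [1, 1, 0, 1, 0, 1, 1, 0]
    group_b = [0, 1, 0, 0, 1, 0, 0, 0]

    gap = demographic_parity_difference(group_a, group_b)
    print(f"Demographic parity difference: {gap:.2f}")
    # A large gap flags the system for closer review; it does not by
    # itself explain why the decisions differ (the 'black box' problem).
```

Detecting such a gap is the easy part; explaining why it arises inside an opaque system is where the real difficulty lies.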

Wider adoption of such technologies and systems will depend to a large extent on effective risk management, and the joint technical committee set up by IEC and ISO (ISO/IEC JTC 1/SC 42) is carrying out important standardization work in this area. A new ISO/IEC standard will provide guidelines on managing the risks organizations face when developing and applying AI techniques and systems. It will assist organizations in integrating risk management for AI into their significant activities and functions, and will describe processes for the effective implementation and integration of AI risk management.
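As a rough illustration of how such guidance might translate into day-to-day practice, an organization could keep a register of AI-related risks scored by likelihood and impact. The structure and scoring scheme below are hypothetical and are not taken from the standard.

```python
# Hypothetical sketch of a minimal AI risk register; the fields and
# 1-5 scoring scheme are illustrative assumptions, not defined by ISO/IEC.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact priority score
        return self.likelihood * self.impact

register = [
    AIRisk("Training data bias", likelihood=4, impact=4,
           mitigation="Bias audit before each model release"),
    AIRisk("Personal data misuse", likelihood=2, impact=5,
           mitigation="Consent checks and data minimization"),
]

# Review the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score} -> {risk.mitigation}")
```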

Disruptive technologies like artificial intelligence present both challenges and opportunities across all sectors. For this reason, the joint ISO and IEC technical committee is liaising with a number of committees in both organizations that focus on different technologies and industries, as well as with external organizations and consortia.
