Organisations developing services using artificial intelligence could face regulatory action, and with it reputational damage, if they fail to explain decisions assisted by the technology, according to the first joint guidance developed by the Information Commissioner’s Office and the Alan Turing Institute.
The document, which is aimed at helping organisations explain the processes, services and decisions delivered or assisted by AI, reflects growing concerns about the ethical and data protection implications of the technology in public and private services.
The guidance consists of three parts, the first of which covers the basics of explaining AI. This is aimed at data protection officers and compliance teams, but is also relevant to anyone involved in the development of AI systems.
The second part deals with explaining AI in practice, outlining a series of tasks that include selecting priority explanations depending on use cases and the impact on individuals, collecting and pre-processing data, and building a system to extract relevant information from a range of explanation types.
The third part covers what explaining AI means for an organisation. This includes setting out policies and procedures, such as explaining why a specific AI model was selected, keeping an audit trail and providing the relevant documentation.
In a recent blogpost, ICO executive director for technology policy and innovation Simon McDougall said: "The potential for AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works. And when people don’t understand a technology, it can lead to doubt, uncertainty and mistrust.
"ICO research shows that over 50% of people are concerned about machines making complex automated decisions about them. In our co-commissioned citizen jury research, the majority of people stated that in contexts where humans would usually provide an explanation, explanations of AI decisions should be similar to human explanations.
"The decisions made using AI need to be properly understood by the people they impact. This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built-in to AI systems."