Data leaders can draw numerous parallels between the introduction of the General Data Protection Regulation (GDPR) and the introduction of an ethical artificial intelligence (AI) framework. Those who were there when GDPR was brought in (implemented in UK law through the Data Protection Act 2018) will remember the importance of maintaining an inventory of where data resides within the organisation, as well as using data protection impact assessments when a project involved high-risk data.
Ethical frameworks for AI follow a similar approach, and the lessons learnt from GDPR implementation can serve as a guide for this new era of data.
Starting the framework
The Organisation for Economic Co-operation and Development (OECD) produced its Principles for Ethical AI, which were adopted in May 2019. Alongside the OECD's series of values-based principles, it published a list of recommendations for policy makers.
In April 2021, the European Commission proposed the first EU regulatory framework for AI, stating that AI systems used in different applications should be analysed and classified according to the risk they pose to users. The different risk levels will indicate the amount of regulation required.
Implementing internally
Data leaders need to locate pockets of the enterprise where teams are already considering AI use cases and recruit them as early adopters of any proposed framework. Building the case for an ethical approach early improves the chances of successful implementation and sets the path for a data culture that continues to evolve around an ethical AI framework.
Organisations using AI should develop an AI risk impact assessment framework, modelled on data protection risk impact assessments. Software tools can support the process, which typically involves answering questions about the use case, with answers graded as high, medium or low risk.
Data leaders must consider the degree of autonomous decision making in the use case once the human is no longer in the loop. Naturally, if the decision would have legal impacts, the risks rise considerably. Likewise, if it could cause considerable reputational damage, it is important to identify the best ways to mitigate that risk.
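The assessment logic described above can be sketched in a few lines of code. This is a minimal illustration only, with hypothetical question names and a simple three-level scoring scheme; a real assessment tool would carry many more questions and a governance workflow around it.

```python
# Minimal sketch of an AI risk impact assessment scorer.
# Question names and scoring rules are illustrative assumptions,
# not a prescribed methodology.

RISK_LEVELS = {"low": 1, "medium": 2, "high": 3}


def assess(answers: dict, autonomous: bool, legal_impact: bool) -> str:
    """Return an overall risk rating for an AI use case.

    `answers` maps each assessment question to "low", "medium" or "high".
    The overall rating is the highest individual answer, and fully
    autonomous decision making with legal consequences always escalates
    the use case to high risk.
    """
    highest = max(RISK_LEVELS[a] for a in answers.values())
    if autonomous and legal_impact:
        highest = RISK_LEVELS["high"]
    # Map the numeric score back to its label.
    return {v: k for k, v in RISK_LEVELS.items()}[highest]


rating = assess(
    {"data_sensitivity": "medium", "reputational_damage": "low"},
    autonomous=True,
    legal_impact=True,
)
print(rating)  # high
```

The escalation rule encodes the point made above: once a human is no longer in the loop and the decision has legal effect, the use case is treated as high risk regardless of how the other questions were answered.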
It is important to keep revisiting risks once they have been identified – this is not a one-time exercise, as risks evolve. Data leaders should complete a risk impact assessment while the team is building the product and another once the ultimate use is defined.
Many companies are introducing AI forums with representatives from across the business, echoing the data protection officer committees that emerged in 2016. Procurement and IT are critical functions to address from an ethical framework point of view, as most of these applications require either internal technical resources or external vendor support. Data leaders should consider asking all vendors for their own ethical AI policies, which can be included in vendor contracts.
For teams to understand comprehensively how to implement ethical AI, a degree of training is needed, much as it was for the initial rollout of GDPR.
Ethical frameworks are nothing new. However, as more companies experiment with AI – which removes the human from the loop – the need to consider consequences has forced a review of the approach to ethical data processing. Organisations with strong ethics embedded in their values have found this a key advantage when making AI implementation decisions.
DataIQ is a trading name of IQ Data Group Limited
10 York Road, London, SE1 7ND
Phone: +44 020 3821 5665
Registered in England: 9900834
Copyright © IQ Data Group Limited 2024