It doesn’t take long when reading mainstream or social media to bump into discussions about the ethical use of data and the introduction of artificial intelligence (AI). Recent high court decisions on the use of facial recognition technology in policing in the UK, the challenges facing all governments with the introduction of contact tracing, the impact of algorithms on everyday life events, social media bias, insurance and recent education results: all of these deal in some way with a decision about what is legal and right to do with data.
Not all of the challenges are due to a lack of policies for new technology. The recent High Court judgement on the use of facial recognition technology, though a complex regulatory issue, called out the failure of South Wales Police to perform standard assessments that could have been expected for this type of data process. Namely, the force failed to address the following criteria:
None of these failures was down to a lack of policies or regulation, nor to a technical misunderstanding. They were all down to poorly applied practices.
Admittedly, there are circumstances where the complexities of using new tech have yet to be fully understood and regulated. But for the majority of businesses, the processes and policies that you should be applying are already out there.
In the UK, we are blessed that our Government (and some regulators) is significantly focused on the increasing opportunities offered by data and AI. However, for a chief data officer, wading through current government policy and investment can take a significant amount of time and research.
And as a C-suite officer, gaining a readily digestible understanding of the issues you should be briefed on is difficult, unless your organisation already has a dedicated resource focused on this topic.
There are a multitude of different government organisations and industry bodies, all of which have in some way created recommendations, tools and guidelines for the subject. The purpose of this article - and two that will follow - is to summarise who is currently active, what they are looking at and why, and the opportunities for you and your business to access standards or to take part in innovations where data ethics and AI are sensitively considered.
The last three to four years have seen an explosion of government initiatives in areas associated with data, AI and ethics. Table 1 shows some of the many different bodies that are working to establish policies and standards, and the complexity of their inter-relationships. All of the players shown here have a government link. They are the regulators, thought leaders and established authorities across government bodies, or are government-funded in some way.
The information accessible to a data practitioner outside of these organisations is a mixture of publicly-available statements, consultations and policy or standards documents. I would expect that as a member of some of these bodies, you would find further collateral to supplement this.
Looking at these players and their published intelligence reveals that they fall into two camps: those focused on the creation of ethical standards, and those dealing with the application of standards. Creation tends to involve consultation, discussion and ultimately policies. Application provides more tangible, practical approaches to applying ethical standards within a business.
One organisation that has recently spanned both of these is the Information Commissioner’s Office (ICO), by providing guidance on AI and data protection alongside what current GDPR rules mean for the use of AI.
Somewhat confusingly, the different bodies also seem to be looking at the impact of data and AI ethics in a particular area of interest, but not necessarily a vertical industry. For example, the Alan Turing Institute, the UK Statistics Authority and MIGarage are all concerned with ethics within analytical activities, whereas the National Data Guardian, Open Banking and the Digital Regulation Cooperation Forum (DRCF) are concerned with standards for consumer data within their specific business area (the use of NHS patient data, open banking and “all online consumer data” respectively).
Furthermore, detailed review of each player’s published information shows that some are much further advanced in their published thinking than others. A summary of current observations in line with the above can be seen in Figure 1, ranking from high to low on the relative level of development that can be publicly observed.
Set up by the government in 2018, the Centre for Data Ethics and Innovation (CDEI) “has a unique remit: to help the UK navigate the ethical challenges presented by AI and data-driven technology… led by an independent board of experts from across industry, civil society, academia and government. CDEI publications do not represent government policy or advice…tasked by the Government to connect policymakers, industry, civil society, and the public to develop the right governance regime for data-driven technologies.”
Given its position across all elements of government, industry, academia, etc, we can expect the CDEI to become the prominent place for consultation and commentary in the UK, working directly with all the other players mentioned above. One of the first outputs published by the CDEI is the AI Barometer. First published in July 2020, this in-depth research identifies the most challenging opportunities, risks and governance issues with AI across five sectors: criminal justice, financial services, health and social care, digital and social media, and energy and utilities.
If you do nothing else, read the executive summary and summary of findings. It will equip a senior executive with an understanding of the barriers an organisation will find when introducing or implementing AI.
While the ICO and CDEI provide significant analysis on the challenges, if you are looking for direct support from a framework, diagnostic tool or similar capability, then two organisations stand out. Both the Open Data Institute and MIGarage have published data ethics frameworks.
The frameworks lead you through the theoretical and practical questions you should answer when considering the impact of using data within AI or analytical solutions across your business. The frameworks are applicable outside of their individual specialist areas (open data and machine intelligence/analytics respectively), but you should consider your business application and identify any additional areas that may be omitted from your chosen framework.
The ODI Data Ethics Canvas is based on an overall business ethics framework originally published by the ADAPT academic centre in Ireland. It provides a comprehensive set of questions and considerations that are backed up by access to experts and training if required.
MIGarage’s framework provides a similar level of questions and is closely aligned to the OECD’s principles of AI and other prominent EU research. The comprehensive reference material sitting behind this framework is an excellent place to investigate the reasons behind the questions posed.
Having found a basis for thinking about the challenges of data ethics and AI, it is then worth considering how well developed different industry areas and their regulators are. Also, as a board member, you need to consider which risks you should be monitoring.
These and other topics will be addressed in subsequent articles.
Helen Crooks is a chief data officer and data NED/advisor