The UK is at risk of losing out on the rewards of AI and data-driven technology due to widespread distrust of many of the industry’s practices, which can only be tackled through concerted action from both industry and Government.
That is according to a new analysis from the Centre for Data Ethics and Innovation (CDEI), dubbed the AI Barometer.
The report details the most pressing opportunities, risks and governance challenges associated with AI and data use in the UK, across five key sectors: criminal justice, financial services, health and social care, digital and social media, and energy and utilities.
The CDEI convened over 120 experts from industry, academia, civil society and government, who found that AI and data-driven technology present concrete 'game-changing' opportunities.
Among these are operating an efficient green energy grid; identifying and tracking public health risks; tackling misinformation; and using automated decision support systems in health, finance and criminal justice to minimise bias.
It finds, however, that these opportunities have common characteristics which make them challenging to realise, such as requiring coordination across organisations and affecting decisions that have a direct and substantial impact on people’s lives.
The AI Barometer identifies several barriers to the responsible adoption of AI and data-driven technology, among them a lack of funding for innovation projects and a dearth of technical skills to power new initiatives.
While many of these issues are widely acknowledged, the CDEI highlights several barriers that are often overlooked, including low data quality and availability; a lack of co-ordinated regulation; and a lack of transparency around how AI and data are being used.
The CDEI cautions that these barriers feed public mistrust, which acts as a further brake on innovation. It highlights that, in the absence of trust, consumers are unlikely to use new technologies or share the data needed to build them, while industry will be unwilling to engage in new innovation programmes for fear of meeting opposition.
The analysis also looks at the risks arising from the use of this technology. Experts in most sectors ranked the following issues as concerning: algorithmic bias; a lack of explainability in algorithmic decision-making; and the failure of those operating technology to seek meaningful consent from people to collect, use and share their data.
The CDEI will expand the AI Barometer over the next twelve months and will launch a new work programme that will act on its findings.
Despite these concerns, the AI Barometer highlights several best practice initiatives recently launched by regulators, researchers and industry to spur responsible innovation.
These include the Information Commissioner's Office Sandbox Programme, the West Midlands Police Ethics Committee, and the guidance on explaining decisions made with AI from the ICO and the Alan Turing Institute.
CDEI chair Roger Taylor said: "AI and data-driven technology has the potential to address the biggest societal challenges of our time, from climate change to caring for an ageing society. However, the responsible adoption of technology is stymied by several barriers, among them low data quality and governance challenges, which undermine public trust in the institutions that people depend on.
"As we have seen in the response to Covid-19, confidence that government, public bodies and private companies can be trusted to use data for our benefit is essential if we are to maximise the benefits of these technologies. Now is the time for these barriers to be addressed, with a co-ordinated national response, so that we can pave the way for responsible innovation."