The Alan Turing Institute has joined forces with the Information Commissioner’s Office to publish draft regulatory guidance on the use of artificial intelligence, which has now been put out for consultation.
The move is in response to calls from an independent review and the Government’s AI Sector Deal and is designed to help organisations explain how AI-related decisions are made to those affected by them.
The draft guidance lays out four key principles, rooted in the GDPR, which organisations must consider when developing AI decision-making systems:
Be transparent: Make your use of AI for decision-making obvious, and explain the decisions you make to individuals in a meaningful way.
Be accountable: Ensure appropriate oversight of your AI decision systems, and be answerable to others.
Consider context: There is no one-size-fits-all approach to explaining AI-assisted decisions; tailor explanations to the setting and the audience.
Reflect on impacts: Ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome.
In a blogpost, ICO executive director for technology policy and innovation Simon McDougall said: "Real-world applicability is at the centre of our guidance. Feedback is crucial to its success and we’re keen to hear from those considering or developing the use of AI. Whether you’re a data scientist, app developer, business owner, CEO or data protection practitioner, we want to hear your thoughts."
The consultation ends on January 24, 2020.
Late last week, the European Commission’s new president Ursula von der Leyen called for the EU to draw up new legislation to govern the use of AI within 100 days of her taking office, arguing that "with GDPR, we set the pattern for the world. We have to do the same with artificial intelligence".