“If we don’t get enough women into AI, we will all die.” That is not a hostage demand gone wrong, but a genuine concern that a critical data-driven technology could develop a gender bias with terrible unintended consequences. And since it was expressed by a data science PhD student from UCL, you can be sure the statement is underpinned by real data.
In this case, the data scientist was Maxine Mackintosh who, as well as being in the second year of her PhD, is the founding director of One HealthTech, a network which champions and supports under-represented groups in health innovation - particularly women - to become the future leaders in healthcare. She made her bold statement at the inaugural “Why women in AI” event, organised by AI marketing intelligence platform CognitionX and hosted by investment managers Rathbone.
Mackintosh was referring to the track record of the US Food and Drug Administration (FDA), which is responsible for licensing drugs in America. Not only are women twice as likely as men to suffer an adverse reaction to an FDA-approved drug, but of the drugs subsequently pulled from the market, eight out of ten proved more harmful to women than to men.
The reason? “We need to encourage more women into clinical trials, but that makes them more expensive because women have periods,” pointed out Mackintosh. As a consequence, most trials use young, healthy men in their 20s, who suffer relatively few reactions and are more “reliable” as test subjects.
She sees the same problem emerging in technology, especially artificial intelligence (AI). “There is an under-representation of women in the data sets being used. That means the solutions are not fit for women. Women die. Humanity stops,” said Mackintosh.
Her bold and entertaining section of an enlightening, inspiring and well-attended (by both sexes) event included the depressing statistic that, according to GitHub, only 11% of the open source community are women. However, among this group, 25% have 100% of their software contributions accepted, compared to only 13% of men. “Their code is accepted more frequently - but only if they are not recognised as women. If they are, they are less likely to have it accepted,” she said.
The essential question hanging over the evening was whether an emergent domain like AI can have this kind of gender imbalance corrected, or whether it is unrealistic to expect to fix a problem that society has failed to address for centuries. Deep learning, which underpins AI, involves ingesting huge volumes of data which reflect the prevailing culture. If that culture has a gender bias, so will the data. Applying data science to remove the bias - which is increasingly possible - means the training data is no longer an accurate reflection of society, which could result in other unintended consequences.
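To make that trade-off concrete, here is a minimal sketch of one common debiasing approach - reweighting - where every record is weighted inversely to the size of its group so that under-represented groups count for more during training. The records, field names and figures below are entirely hypothetical, invented purely for illustration; the point is the price paid, which is exactly the one described above: the weighted sample no longer mirrors the population it was drawn from.

```python
# Minimal sketch of debiasing by reweighting. Records, field names and
# figures are hypothetical, for illustration only.
from collections import Counter

records = [
    {"sex": "M", "outcome": 1}, {"sex": "M", "outcome": 0},
    {"sex": "M", "outcome": 1}, {"sex": "F", "outcome": 0},
]  # 75% male, mirroring a skewed source population

counts = Counter(r["sex"] for r in records)
n_groups = len(counts)

for r in records:
    # Equalise total weight per group: each group's weights sum to
    # len(records) / n_groups, so both groups contribute equally.
    r["weight"] = len(records) / (n_groups * counts[r["sex"]])

print([(r["sex"], round(r["weight"], 2)) for r in records])
# [('M', 0.67), ('M', 0.67), ('M', 0.67), ('F', 2.0)]
# The weighted sample is now gender-balanced - but it is no longer an
# accurate reflection of the society the raw data came from.
```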
Joanna Bryson, reader at the University of Bath and affiliate at the Center for Information Technology Policy at Princeton University, has studied gender bias within language extensively as part of her research into developing human-like intelligence in robotics. “Word embedding is what makes search engines work,” she pointed out, yet the implicit biases revealed by the Implicit Association Test - which anyone can take online to check their own biases - show up in those embeddings too, including strong links between words, and even names, and the idea of women as domestic servants.
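Bryson’s finding (published with Caliskan and Narayanan as the Word Embedding Association Test, Science, 2017) boils down to measuring whether a target word sits closer in embedding space to one set of attribute words than to another. The sketch below illustrates the idea only: the tiny three-dimensional vectors and word lists are toy stand-ins invented for this example, not real pretrained embeddings or the study’s actual stimuli.

```python
# Toy sketch of a WEAT-style association test over word embeddings.
# Vectors and word lists are illustrative assumptions, not real data.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    Positive values mean the word sits closer to A than to B."""
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

# Toy 3-dimensional vectors standing in for real pretrained embeddings
# (e.g. word2vec or GloVe, which have hundreds of dimensions).
emb = {
    "she":    np.array([0.9, 0.1, 0.0]),
    "he":     np.array([0.1, 0.9, 0.0]),
    "home":   np.array([0.8, 0.2, 0.1]),
    "career": np.array([0.2, 0.8, 0.1]),
}

female_attrs = [emb["she"]]
male_attrs = [emb["he"]]
for word in ("home", "career"):
    score = association(emb[word], female_attrs, male_attrs)
    print(f"{word}: {score:+.3f}")  # >0 leans female, <0 leans male
```

In embeddings trained on real text, “home” scores closer to female terms and “career” closer to male terms - the same pattern the Implicit Association Test finds in people.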
This surfaced clearly in an impassioned discussion about Amazon’s Alexa and other AI-driven assistants, which not only have a female voice but perform a domestic role, reinforcing the idea that, as one of the audience put it, “shouting at women gets them to do things.” Another attendee had removed Alexa from the family home precisely because of the way his two young sons were learning to behave.
Bryson suggested three possible sources of gender bias in AI. “There could be evil, malign programmers who are deliberately building it in. Those programmers could just be sloppy and inadequate. Or it could be our culture,” she said.
Accepting that a bias exists is not the same as accepting it should be replicated in a new technology. Bryson separately noted that such events might need to apply their own version of the Bechdel test - a measure of gender bias in films that asks whether a drama features two female characters who talk to each other about something other than a man - by having speakers talk about anything other than gender.
“Culture can happen to you - or you can make it happen,” argued Tracey Groves, lead partner on corporate governance, reporting and compliance at PwC and co-chair of its gender balance network. (Her first act was to change its name from the women’s network.) The consultancy is still striving to get women to 20% of its senior partners, compared with the 23% of leadership roles held by women across UK plc.
But Groves said the solution is not regulation or enforcement of gender quotas. “Companies I talk to don’t understand why they experience bad events when they are spending millions on compliance and training,” she pointed out. “There is no point putting out more rules and controls in a world where we genuinely seem to have lost sight of doing the right thing. We need to understand the vision, purpose and values and embed those into the organisation.”
That is a good rallying cry for AI, and one more likely to be effective than legislation. Dr Sandra Wachter, postdoctoral research fellow in data ethics at the Oxford Internet Institute, University of Oxford, pointed out that the current widespread belief that GDPR will make AI-driven algorithms transparent - and therefore fairer - is actually false. “We all want AI to make fairer, more accurate and transparent decisions and ensure algorithms don’t replicate gender bias,” she said.
Although GDPR enhances the right to know that an automated decision has been made, by creating a right to human intervention, the protection of trade secrets will limit what gets disclosed to the functionality of the system, not the rationale behind a particular decision. With AI, surprisingly, the UK is ahead of the world in having already established an ethical framework - the EPSRC Principles of Robotics - which is intended to limit the extent to which human bias gets transferred into technology.
These biases are not the same everywhere. Patricia Chiappi, machine learning research scientist at Google DeepMind, noted from her own experience that “Italy is neutral about women studying science and about switching from arts to science. We need research to tackle the cultural biases in data and we need to change algorithms to make them objective.”
With AI entering the consumer domain at a rapid rate - via personal assistants, automated decisioning, chat bots and the rest - acceptance is approaching a tipping point. Trying to head off gender bias at this stage could be a force for creating a better, fairer and more productive society. But only if, as consumers, this is what we demand. After all, the reason Alexa is female (despite one audience member attempting the defence that “the domain happened to be available”) is that it helps sales. That is a reflection of our own cultural bias and its commercial impact. As Bryson noted: “Robots are not your friends - they are an extension of corporations.”
DataIQ is looking at the issue of gender bias within the data and analytics sector at the DataIQ Summit 2017. Come and hear our CXX Panel, featuring Barclays’ Fedelma Good and RELX’s Robbie Burgess, on 24th May. For more information and tickets, visit the DataIQ website.