The onward march of algorithms and the biases contained within

Toni Sekinah, research analyst and features editor, DataIQ

“Algorithms are coming. Embrace them.” This is what I was told at a party I attended on Saturday evening. I was talking to a machine learning specialist and we broached the subject of automated decision-making. I explained that I have reservations, based on what I have learnt researching this matter, not least that bias can creep in very easily. However, my concerns were met with the above statement.

Algorithms can be used to automate decisions about hiring or loan applications, for example. If the automated decision-making process is biased, it could result in more men being asked to interview, or residents of a particular post code having loan applications denied.
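To make the mechanism concrete, here is a minimal sketch, using entirely hypothetical data, of how a naive automated decision rule can absorb historical bias: a "model" that simply learns past approval rates per group will reproduce those rates in its decisions.

```python
# Hypothetical illustration: a naive decision rule trained on skewed
# historical hiring records ends up replicating the skew.
from collections import defaultdict

# Made-up historical records: (group, was the candidate hired?)
history = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def train(records):
    """Learn the historical hiring rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def invite_to_interview(model, group, threshold=0.5):
    """Invite only if the group's past hiring rate clears the bar."""
    return model[group] >= threshold

model = train(history)
print(model)                              # {'men': 0.75, 'women': 0.25}
print(invite_to_interview(model, "men"))    # True
print(invite_to_interview(model, "women"))  # False
```

The algorithm itself contains no explicit prejudice; it faithfully summarises the data it was given, and the data carries the bias forward.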

I was slightly unsettled by this response. Am I being unreasonable, or are my fears valid? Maybe they are.

There are events, institutes and books dedicated to the issue of bias in algorithms. Academics at MIT considered algorithmic bias such a challenge that, in July 2017, the world-renowned institute hosted an event for experts to discuss the matter.

The event was also used to unveil the AI Now Institute, an interdisciplinary research centre based at New York University. The four core pillars of the institute’s work are: rights and liberties, labour and automation, bias and inclusion, and safety and critical infrastructure.

With respect to bias, the AI Now Institute “researches issues of fairness, looking at how bias is defined and by whom, and the different impacts of AI and related technologies on diverse populations.” The institute hosts annual symposia with presentations on everything from uncovering machine bias to data colonialism. The institute also publishes a report following each symposium.

"Weapons of Math Destruction", a book published in 2016, also highlights the gravity of the problem of algorithmic bias with its chilling subtitle, "How Big Data Increases Inequality and Threatens Democracy". According to reviewers, author Cathy O’Neil uses real-life examples to lift the curtain on big data, revealing insidious algorithms that distort the truth while purporting to be benevolent.

I’m pleased that more attention is being drawn to the fact that algorithms are not inherently neutral. We should stop viewing them as such.

In my non-expert opinion, we should stop expecting algorithms to produce perfect results from imperfect data drawn from an imperfect world. Over the course of the last year writing about the data industry, I have heard the following phrase on several occasions: “Garbage in, garbage out.”

The machine learning specialist I encountered at the party argued that the algorithms themselves cannot be biased. I agreed, and added that they wouldn’t produce biased decisions or conclusions if care were taken to ensure that only "pure" data was fed into them.

It was as I said this that I realised there can be no such thing.

Please note that blogs are the sole view of the author, that they are not necessarily the view of IQ ddg Ltd, and that they should not be interpreted as advice. Please read our full disclaimer.

Knowledge-based content manager, DataIQ
Toni is the senior features editor responsible for the origination of DataIQ's interviews, articles and blogs.