Tech developers are so focused on ensuring artificial intelligence projects are accurate and successful that they are sacrificing security, privacy and ethics when building their machine learning solutions.
According to the "AI Adoption in the Enterprise" 2019 report, published by O’Reilly, security is the most serious blind spot.
Nearly three-quarters (73%) of respondents indicated they do not check for security vulnerabilities during model building. More than half (59%) of organisations also do not consider fairness, bias or ethical issues during machine learning development. Privacy is similarly neglected, with only 35% checking for privacy issues during model building and deployment.
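The report does not prescribe what a fairness check should look like, but as a hypothetical illustration, one of the simplest audits a team could run during model building is a demographic parity check: comparing the rate of positive predictions across groups. The function and data below are invented for illustration only.

```python
# Hypothetical sketch of a basic fairness check: demographic parity
# difference, i.e. the gap in positive-prediction rates between groups.
# Names, data and the 0.2 threshold are illustrative assumptions, not
# anything specified in the O'Reilly report.

def demographic_parity_difference(predictions, groups):
    """Return the absolute gap in positive-prediction rate between groups.

    predictions: parallel list of 0/1 model outputs
    groups: parallel list of group labels (e.g. "A" or "B")
    """
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return max(rates.values()) - min(rates.values())

# Toy audit: group A gets positives 75% of the time, group B only 25%.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50

if gap > 0.2:  # illustrative threshold a team might choose
    print("WARNING: model predictions differ substantially across groups")
```

In practice teams would reach for a dedicated library rather than a hand-rolled check, but even a few lines like these run as part of model validation would catch the kind of issue the 59% of surveyed organisations never look for.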
Instead, the majority of development resources are focused on ensuring AI projects are a success. While most (55%) developers guard against unexpected outcomes or predictions, this still leaves a large number who do not. Furthermore, 16% of respondents do not check for any risks at all during development.
This lack of due diligence likely stems from a range of internal challenges, but the single greatest roadblock hindering progress is cultural resistance, cited by 23% of respondents.
The overwhelming majority of organisations (81%) have started down the route of AI adoption. Most are in the evaluation or proof of concept stage (54%), while 27% have revenue-bearing AI projects in production. A significant minority (19%) of companies have not started any AI projects.
When it comes to deployment, AI is most likely to be used in research and development departments (50%), followed by customer service (34%) and IT (33%). Legal functions have seen the least innovation, with only 5% making use of the technology.
However, nearly one in five (19%) organisations that have committed to AI struggle to adopt the technology due to a lack of data and data quality issues, as well as the absence of necessary skills for development.
The most chronic skills shortages by far were centred on machine learning modelling and data science (57%). To make progress in the areas of security, privacy and ethics, organisations urgently need to address these talent shortages, the report insists.
O’Reilly chief data scientist Ben Lorica said: "AI maturity and usage has grown exponentially in the last year. However, considerable hurdles remain that keep it from reaching critical mass.
"As AI and machine learning become increasingly automated, it’s paramount organisations invest the necessary time and resources to get security and ethics right. To do this, enterprises need the right talent and the best data. Closing the skills gap and taking another look at data quality should be their top priorities in the coming year."