AI’s pathway to trust

AI and machine learning models are proliferating in the enterprise, but can we have 100% confidence in the reliability of these complex technologies? Silicon Valley veteran Ben Lorica discusses removing bias in AI, “de-risking” and more.

Very few technology fields spark debate quite so readily as artificial intelligence.

How quickly will AI and related advances change the way we work? Are jobs really under threat? Are we in control of its development? And how do we actually define AI, once the headline-writing and posturing is stripped away?

Another intriguing topic is the presence of human biases in the AI models that we build. One of the first arguments for AI adoption is the notion that these systems are completely bias-free – but if the data that feeds them hasn’t been properly scrutinised, or scrutinised by the wrong people, then the risk of prejudiced AI is very real. One of computer science’s oldest adages – “garbage in, garbage out” – rings truer than ever in the age of AI.

Many examples of models demonstrating bias have come to light as enterprises have embraced AI. One of the most notorious cases was the ProPublica investigation into the COMPAS algorithm, used in the United States to guide sentencing decisions. The investigation “found that black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk” of reoffending. Instances of gender bias have also been reported, with one study of Google image searches for “CEO” showing just 11% female representation, despite 27% of CEOs in the United States being women.

IBM research indicates that the number of biased AI systems will increase over the next five years. So how do enterprises face up to an issue that carries potentially harmful and far-reaching consequences?

Ben Lorica has been at the centre of Silicon Valley activity since the earliest days of enterprise AI and data science. His experience spans industry and academia, from spearheading strategy at O’Reilly Media – a leading learning platform for technology and business – to lecturing in applied mathematics and statistics, advising tech startups and chairing conferences on the key issues around AI.

Lorica has a keen interest in the development of ethical AI and, in an exclusive interview with Digital Bulletin, laid out his own blueprint for trustworthy and reliable AI models. He starts by stepping away from the technology, going right back to the first steps of the process.

“Everything begins with data,” Lorica explains. “The AI models and technologies at the moment are data-hungry. Without good, high quality data at scale, it’s really hard to use these AI technologies and AI models.

“I’ve been giving talks about this – I think it’s important to get the foundational data technologies in place: data collection, data ingestion, storage and management, data preparation, cleaning and repair, data governance, data lineage – then, after that, maybe you can start using the data that you’ve collected to do basic things, like analytics or business intelligence. Then you start layering machine learning and AI on top of that. It’s important for companies to understand that AI is not magical.”
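In practice, the foundational work Lorica describes often starts with small, repeatable data-quality checks that run before any model is trained. The sketch below is a minimal, hypothetical example in Python using pandas; the file name, column names and checks are assumptions for illustration rather than a description of any specific pipeline.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Run simple data-quality checks before the data feeds analytics or models."""
    return {
        # Are the columns the downstream steps expect actually present?
        "missing_columns": [c for c in required_columns if c not in df.columns],
        # What fraction of each column is empty?
        "null_fraction": df.isna().mean().round(3).to_dict(),
        # Exact duplicate rows often point to ingestion problems.
        "duplicate_rows": int(df.duplicated().sum()),
    }

# Hypothetical usage: a customer table produced by an ingestion step.
customers = pd.read_csv("customers.csv")  # assumed file, for illustration only
print(basic_quality_report(customers, required_columns=["age", "income", "region"]))
```

Checks like these are deliberately unglamorous; the point is that analytics, and later machine learning, sit on top of them rather than in place of them.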

Ensuring data quality requires investment in time and people, as reflected in the many facets of data management outlined by Lorica. Talent plays a crucial role – a more skilled and diverse pool of data scientists can help sort data in a fairer way.

“You have to make sure that your data is representative of the broader population you’re trying to address,” adds Lorica. “There are statistical approaches that you can use, and you should have people on your staff who are skilled at interrogating data and understanding whether it is representative and not biased.

“Another thing you can do is make sure that you’re staffed in a way that is a little more diverse. There’s a term ‘tech bro’ – if you’re staffed mostly by young tech males who graduated from certain universities, you may not be able to spot obvious biases in your data. I think companies are beginning to realise that if they have a more diverse team, then some of these problems might be caught earlier on.”
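One of the statistical approaches Lorica alludes to is a simple goodness-of-fit test: compare the group proportions in the training data with known reference proportions for the population the model is meant to serve, and flag large gaps. A minimal sketch, assuming SciPy is available; the group names, counts and reference shares below are invented for illustration.

```python
from scipy.stats import chisquare

# Hypothetical observed counts per group in the training data.
observed = {"group_a": 8200, "group_b": 1300, "group_c": 500}

# Hypothetical reference proportions for the population being served.
expected_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(observed.values())
expected = [expected_share[group] * total for group in observed]

# A very small p-value suggests the sample's group mix differs from the reference.
stat, p_value = chisquare(f_obs=list(observed.values()), f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.3g}")
```

A failed test does not prove the resulting model is biased, but it is a cheap early warning that the data does not look like the population it is supposed to represent.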


For the next stage of the process, Lorica focuses on the importance of “de-risking” the models themselves. Models, especially those for machine learning, are becoming increasingly complex as they deal with larger, unstructured datasets and incorporate cutting-edge software and automation.

According to research from McKinsey, algorithmic bias can be amplified in these advanced machine learning models, yet Lorica says a complete “de-risking” programme can help mitigate this problem and other issues, such as interpretability and security.

“Another way that people frame this general topic is ‘responsible AI’, but one of the reasons why I prefer this whole notion of managing risk is that companies in some industries, particularly in finance, already have risk officers and already know the value of risk management.

“There are certain things around AI that you have to manage. One of the things is fairness and bias, but there are other things: privacy, security, reliability and safety, and transparency and explainability. People are realising that you have to have processes to manage these risks and teams that are a little more cross-functional. For example, you might want to bring your compliance team in earlier on in the process, to make sure that you’re not touching data systems that violate user privacy or are violating regulations.”

Tools around model lifecycle development, operations and monitoring are also critical pieces of the jigsaw, says Lorica, as well as model governance: “Model governance is the one I really want to get people interested in. What models do I have? Who trained these models? What data does this model use? And with model governance, you can have systems in place for model review, model testing and model validation.”
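The governance questions Lorica lists lend themselves to an explicit record kept for every model: what it is, who trained it, what data it uses and whether it has been reviewed and validated. The sketch below is a hypothetical registry entry in Python; the fields mirror his questions, but the structure and example values are assumptions rather than a description of any particular governance tool.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    """One governance entry per model, answering the basic questions."""
    name: str                       # What models do I have?
    version: str
    trained_by: str                 # Who trained this model?
    training_data: list[str]        # What data does this model use?
    trained_on: date
    reviewed_by: Optional[str] = None   # Filled in after model review
    validation_passed: bool = False     # Set by model testing and validation
    notes: list[str] = field(default_factory=list)

# Hypothetical entry for a credit-risk model.
record = ModelRecord(
    name="credit-risk-scorer",
    version="1.3.0",
    trained_by="data-science-team",
    training_data=["loans_2015_2018.parquet", "bureau_features.parquet"],
    trained_on=date(2019, 6, 1),
)
record.reviewed_by = "model-risk-office"
record.validation_passed = True
```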

Lorica says UC Berkeley's RISELab is doing innovative work around machine learning and automation

Evidently, bias is only one of many factors to be wary of when deploying machine learning and AI. Thankfully, running parallel to the development of the models themselves is the evolution of tools to speed up the process – and even automate sections of it.

The automation of machine learning, known as AutoML, is one of the hottest topics in the industry today, and Lorica argues that it is vital for freeing up data scientists to improve the reliability of models and continually develop new solutions.
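AutoML spans everything from full pipeline search to the humbler automation many teams already use. As a small illustration of the latter, scikit-learn’s grid search automates hyperparameter tuning, one of the more tedious steps in model development; the dataset, model and parameter grid below are arbitrary choices for the example, not anything Lorica prescribes.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Automate the search over hyperparameters instead of tuning them by hand.
search = GridSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```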

“There’s automation happening at every stage of the machine learning and AI development process, and some truly innovative work in this area,” he says. “HoloClean, for instance, uses state-of-the-art machine learning to automatically detect errors in your data and repair them.

“Even more forward-looking, there’s a very interesting project out of UC Berkeley’s RISELab called Pandas on Ray.

“Pandas is a very popular library for data scientists, so what they’re doing with Pandas on Ray is using machine learning and techniques from a field of computer science called program synthesis to automate the writing of Python programs that rely on Pandas.”
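To make the idea tangible: the goal of such tools is to produce ordinary Pandas code from a higher-level statement of intent. The snippet below is not output from Pandas on Ray – it is a hand-written, hypothetical example of the kind of routine Pandas program that synthesis techniques aim to generate automatically.

```python
import pandas as pd

# Hypothetical intent: "average order value per region, highest first".
orders = pd.DataFrame({
    "region": ["north", "south", "north", "west"],
    "order_value": [120.0, 80.0, 200.0, 45.0],
})

result = (
    orders.groupby("region")["order_value"]
    .mean()
    .sort_values(ascending=False)
    .reset_index(name="avg_order_value")
)
print(result)
```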

Lorica says this points to a general trend where, instead of a typical data scientist having to master many different tools and libraries, they will have tools that allow them to get by without knowing the details of the APIs for each library. Automation will therefore let them build bigger programs and models.

It’s easy to envisage a future – one not too far away – where AI and machine learning deployment grows exponentially. But Lorica is adamant we’re not quite equipped to reach that point yet – and that repeated experimentation is needed to further refine these technologies.

“We are still in a very empirical era for machine learning and AI,” he admits. “There’s a lot of experimentation and trial and error that people need to do. We must run experiments in order to take the next step in terms of building better models and these experiments take a lot of time; some deep learning models can take months to train.

“To the extent that we can accelerate training time, that means people can try out more ideas and explore possibilities more efficiently.”

For industry, gaining complete public trust in AI is a pivotal objective heading into the next decade. Overcoming issues like bias is just one part of it, but with Lorica and his Silicon Valley peers intent on bringing AI-powered change to the enterprise, a harmonious future for man and machine might be nearer than we imagine.
