Can we ever truly trust AI?
Over the past two decades, technology and society have become increasingly intertwined. We now live in a highly digital world, with technology infused into practically everything we do at home and at work.
At the heart of this is artificial intelligence, which is increasingly being used by people and businesses alike to automate tasks, provide personalised experiences and generally make life easier for us all.
Such innovations have the potential to genuinely improve people’s lives, and help build a digital society that enhances socio-economic progress and embraces everyone. But while powerful technologies like AI can have a profoundly positive impact, there is still an underlying current of concern and anxiety as to what a future with AI will look like.
Just as innovations in areas such as AI and machine learning become ever more ubiquitous, stories of AI gone wrong are increasingly filling the headlines, with AI blamed for everything from worsening bias in recruitment to false arrests. So, it’s not hard to understand why many are concerned that, left unguided, these technologies may end up creating a less fair world.
The benefits of AI in driving efficiency and productivity mean companies will continue to give more control and decision-making to AI. But this raises important questions they need to answer: how can they be held accountable when computer programs, rather than humans, are making decisions? And how can businesses reassure customers and employees that they’re still being treated fairly? Can we ever expect AI to have a heart?
Fears over the advancement of technology
Humans have always expressed an element of fear regarding the advancement of technology – from the Luddite movement in the early 19th century to worries over the ‘cybernation revolution’ in the 1960s. There are even reports from over 400 years ago that Queen Elizabeth I denied a patent for an automated knitting machine for fear it would take jobs away from young women.
While we’ve obviously come a long way since then, the fear at the root of our anxiety towards technology hasn’t changed all that much: what happens when we let a non-human force drastically influence society?
And now that technology is in every corner of our lives, our fear has grown significantly. When we asked businesses whether they thought technology would have a detrimental impact on them, 44% agreed – with 39% citing the negative effect it could have on equal opportunities within the workforce.
The risk of in-built bias in AI systems is one of the key ethical issues businesses need to address. If not managed correctly, bias in such systems could lead to unfair practices in hiring, lending and other areas such as healthcare – something that could significantly damage a company’s reputation.
Businesses may also have concerns around the cost of AI, as well as its complexity, which can make any issues or problems in the system harder to spot.
Our report also found that the concerns of organisations go beyond the business environment. These concerns range from the spread of misinformation (40%) and the loss of privacy due to data leaks (38%) to social isolation caused by reliance on communication technology (38%).
This lack of confidence in technology’s ability to help us build a fair world was especially evident when respondents were asked about AI specifically. Businesses were deeply pessimistic about the positive impact AI may have on society – a level of pessimism second only to that around social media.
But the reality is that progress will continue to march on. The benefits AI tools offer enterprises, such as the ability to analyse large data sets rapidly and reduce the amount of menial work individuals have to do, are simply too valuable for businesses to ignore.
So, how does a business remain competitive and continue to optimise its operations with the help of these tools, while also doing its part to ensure it’s always working towards its values and those of its people?
Trust through transparency
While technological progress continues on a steep upward trajectory, it’s crucial that businesses remember they will always remain accountable for the choices they make, even when those choices are influenced by AI.
Technology is a neutral force, so when it behaves badly, all it’s doing is exposing the will of others – the inequalities and biases that already exist. When it comes to AI, the heart wants what it has been programmed to want. This is why transparency is such a powerful cure for many of the inherent concerns people have around technology.
For instance, more ethical approaches to technology, like explainable AI – which enables humans to better understand how a system reaches its conclusions – can dispel AI’s black-box nature. This greater transparency can be used to foster trust in technologies like AI, rather than erode it.
But companies need to remember that a major part of countering the inequities of technology falls outside the technical realm. Simple actions, like ensuring everyone in a business has equal access to technology and the capacity to use it, are just as important – something 38% of our respondents said they were aware of.
It’s up to leaders to ensure the right education and policies are put in place so everyone has equal opportunity to leverage the many benefits technology can offer. Because when done right, technology can often be used to mitigate its own negative consequences: 60% of businesses said they’ve taken such measures in some area of their business, from health to environmental impact.
This type of thinking can – and should – be extended to all areas of enterprise technology impacts. For example, with job security a key concern for many, it’s important to ensure employees have access to training that develops value-added skillsets that can’t be replicated by automation, something which 41% of businesses say they are already doing.
As pressure from competitors and economic uncertainty continues to ramp up, it’s critical that business leaders don’t park their values when implementing AI. If AI is ever to have a heart, it will be because businesses manage to strike a balance between getting the most out of technology and keeping people at the centre of their operations. Only at this point will people have full trust in the technology, ensuring long-term success with widespread societal benefit.