Striking the balance with AI

Even as almost every facet of our lives was disrupted by the events of 2020, technological advancement has continued apace, and innovation in artificial intelligence (AI) has shown no sign of slowing. 

In February 2020, the European Commission released its “White Paper on AI”, acknowledging the inevitable impact this incredible technology will have on our lives, revolutionising everything from healthcare to agriculture to climate change mitigation. 

To achieve this bright future, the paper outlines the vital importance of establishing an ‘ecosystem of trust’ in Europe, with AI regulation in place to protect human rights, particularly in relation to data privacy and consent. Without this, AI’s potentially unknown risks threaten to overshadow its many life-improving benefits. 

However, we must also ensure that innovation is not hamstrung by broad-brush regulation. Everyone – from leading industry voices to the regulatory bodies themselves – must remain conscious of how we can build trustworthy AI without stymieing its advancement. 

Here are four ways in which AI regulation can be optimised to ensure that both safety and innovation are protected.

Defining edge- and cloud-based AI

Not all deployments of AI should be treated equally, and a clear distinction should be made between edge- and cloud-based AI to avoid rigid, catch-all regulatory measures. The reason is simple: many of the issues facing cloud AI can be solved with an edge deployment. 

Cloud-based AI is an application in which a device captures data (a camera collecting images, say, or a microphone collecting sound) and sends it to remote servers to be processed. Once processing is complete, the results are returned to the device. 

Edge-based AI, on the other hand, is a deployment where all the data remains on the device. The processing happens within the unit itself, and the results are delivered directly to the user. 

The benefits of this approach are clear: processing is faster because the data doesn’t need to travel as far; cost is lower because server farms are expensive and require a lot of energy; and, arguably most importantly, privacy is more easily protected when user data is never shared remotely.
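
To make the distinction concrete, below is a minimal sketch of the two data flows in Python. The endpoint URL and the on-device model interface are illustrative assumptions, not a real API.

    import json
    import urllib.request

    def cloud_inference(image_bytes: bytes) -> dict:
        """Cloud AI: the raw data leaves the device for a remote server."""
        req = urllib.request.Request(
            "https://example.com/api/infer",  # hypothetical endpoint
            data=image_bytes,
            headers={"Content-Type": "application/octet-stream"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)  # results travel back over the network

    def edge_inference(image_bytes: bytes, local_model) -> dict:
        """Edge AI: the data is processed on the device and never transmitted."""
        return local_model.predict(image_bytes)  # assumed on-device model interface

In the cloud flow, raw user data crosses the network twice; in the edge flow, it never leaves the device at all.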

With this in mind, edge-based AI should be considered lower risk and treated as such. Failing to make this distinction would lead to unfair regulation that hinders innovation and privacy-enhancing solutions.

Consider sectoral nuance

Similarly, any regulation, risk models or frameworks put in place to deliver trustworthy AI must pay close attention to the sectoral nuance and purpose of an AI application. 

The European Commission makes a positive start in this regard, as outlined in its white paper. It promotes a risk-based approach focusing on deployments that could be considered ‘high risk’, with this defined by what is at stake, as well as the sector and intended use case. 

The two cumulative criteria for determining ‘high risk’ are designed to deliver a level of nuance. The first asks whether the chosen sector itself carries significant risks; the second considers the specific use within that sector. Healthcare, for example, is a high-risk sector for obvious reasons, but not every healthcare AI deployment will pose a risk to users. 
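
As a rough illustration of how the two criteria combine, consider the following sketch. The sector list and the decision rule are illustrative assumptions, not the Commission’s actual legal test.

    # Both criteria are cumulative: a deployment is 'high risk' only if the
    # sector carries significant risks AND the specific use does too.
    HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}  # assumed examples

    def is_high_risk(sector: str, use_poses_significant_risk: bool) -> bool:
        return sector in HIGH_RISK_SECTORS and use_poses_significant_risk

    # A hospital appointment scheduler: high-risk sector, low-risk use.
    print(is_high_risk("healthcare", False))  # False - lighter-touch rules suffice
    # An AI diagnostic system: high-risk sector, high-risk use.
    print(is_high_risk("healthcare", True))   # True - 'high risk' obligations apply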

The European Commission clearly understands the importance of specificity when assigning risk to AI deployments. That said, additional criteria should be added to ensure an even greater degree of consideration of the idiosyncrasies of each use case. 

For example, if data never leaves a device (as in an edge-based deployment), and the device’s AI system meets industry standards, why should it be subject to further regulation? With a technology as far-reaching as AI, there is no such thing as too much nuance. 

Sectors, and the solutions deployed within them, vary dramatically, as do the business motives and ethics of the providers behind them. Not every company has a business model built on unethical data usage or direct marketing – the practices regulation should target in order to protect human rights and privacy. 

Life-enhancing AI solutions that meet industry standards should not be viewed through the same regulatory lens as companies whose business models depend on unethical data usage or direct marketing.

Use GDPR to ensure data privacy and consent

The fundamentals of human rights, including data privacy and consent, are the bedrock of EU policy. This must be reflected in regulation if AI is to reach its fullest potential. Thankfully, existing EU regulations, such as GDPR, give a clear structure for the industry to work within.

Take Soapbox Labs, for example, a company that builds voice technology for children, a field in which fears around data privacy and consent are extremely pronounced. 

During a recent webinar where industry leaders and European Parliament members gathered to discuss how best to achieve trustworthy AI, Dr. Patricia Scanlon, Founder and CEO of Soapbox Labs, outlined how GDPR guidelines have helped the company build safety and privacy into its technology at the design stage. 

Moreover, the consent mechanism within GDPR allows the benefits of technological advancement to be enjoyed by those consumers who choose to opt in – something that should continue to be embraced.

Tackle shared datasets across public and private sectors

A common, shared base dataset for AI testing standards across the public and private sectors could alleviate certain concerns – in particular around discrimination in data use cases – and would contribute to an ecosystem of trust. 

This, however, comes with a major caveat: completely levelling the playing field across the public and private sectors would devastate businesses’ ability to innovate. 

Organisations invest significantly in larger, more complex datasets to differentiate themselves from competitors, delivering technology with greater quality, performance and safety than this shared base set could ever provide. 

Not only do these advanced datasets form part of organisations’ intellectual property, but they are the backbone of innovative solutions. Taking away this differentiator may alleviate some concerns, but AI development, and the benefits it promises, will be far poorer as a result. 

Moving forward

The risk remains that over-regulation and insufficient consideration around the future of AI may hamper its progress. However, through sensible regulation and international and transatlantic collaboration, Europe has a fantastic opportunity to enhance its position as a global leader in a field with almost unlimited potential. 

Xperi’s Gabriel Cosgrave has been bringing technology to the Telecoms, Entertainment & Consumer Electronics industries since the ’90s. He firmly believes that working towards a shared vision brings a win-win outcome for partners, whether in the TV, Telco, Entertainment, Consumer or Monetization areas.
