
Governments Must Strike Right Regulatory Balance for AI to Succeed

Author: News Desk
The latest news, comment and analysis from our crypto news desk.
January 31st, 2023

As governments address the various issues raised by artificial intelligence (AI), they must be careful not to stifle an important technology through over-regulation.

Dr. Hani Hagras, chief science officer for AI at Temenos, a leading provider of financial software, and a professor of explainable AI at the University of Essex, said he has been preparing for regulation for some time.

One issue regulators struggle with is properly defining AI in the first place, he began. Some countries try to address it within existing regulations, while others have avoided dealing with it altogether.

Academics have been warning us about this exact moment for two decades, Dr. Hagras said. No one acted because back then there was neither the volume of data we have today nor today's level of connectivity, so there was no sense of immediacy.

Fast forward two decades and things have changed. We have all the data we could ever need, with more on the way, and we are more connected than ever. Society also has high service expectations, including of financial companies, and AI can help meet them.

“People use devices and still want advice,” Dr. Hagras said. “AI can help. It has access to your device. It’s monitoring you 24-7. It should be able to take a look at your finances and begin to do a plan for you, begin to give you the best options.”

But because people do not know how AI operates, many don’t trust it with their personal data. Increase that transparency and provide better education and we can close the gap between perception and reality.

Two of the areas where we need to clearly see under the hood are recruitment and credit decisioning, Dr. Hagras said.

“So if you have a system that’s going to decide your future, give you a job or no, give you a credit score or no, give you a mortgage or no, and you have no idea how it works, you don’t have any kind of proof if it discriminates or not in certain sectors of society. This can be catastrophic,” he said.
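What such proof can look like in practice is simple to sketch. The example below is a minimal, hypothetical group-level audit of a decision log; the records are made up and the 0.8 threshold is the commonly cited "four-fifths" guideline, not anything described in the interview.

```python
# A minimal, hypothetical sketch of auditing a decision system for
# group-level disparities (the "four-fifths" guideline used in fair-lending
# and hiring reviews). The records and groups are illustrative only.
from collections import defaultdict

decisions = [  # (group, approved?) pairs from a hypothetical decision log
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1, False as 0

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print("approval rates:", rates)
print("disparate impact ratio:", round(ratio, 2),
      "- a ratio below 0.8 usually warrants a closer look")
```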

Recent news items highlight the importance of showing how the technology works. Concerns about Apple Card credit limits, an Amazon recruitment engine that showed gender bias, and a Microsoft chatbot that learned to make racist and sexist comments from user interactions all suggest strong oversight is needed throughout.

And we cannot rely solely on data either, Dr. Hagras cautioned. Data has been around for only a short while compared with accumulated human expertise; an optimal AI system will incorporate both. Failure to act could sideline a technology that should benefit humans in many ways.

Without action, the technology is left subject to fickle forces like public sentiment, which is often shaped by media reports of incidents like those above and of setbacks with self-driving cars. Add in misconceptions fueled by pop culture and things could go off the rails quickly.

“If a lot of these bad incidents happen, it will defame artificial intelligence and we’ll lose a huge opportunity for mankind to jump to a new industrial revolution,” Dr. Hagras said. 

Transparency should begin at the earliest stages of a system's design, so that your assumptions, the data and the system design are known to be aligned.

“If you’re only trained from data your model is only as good as the data,” Dr. Hagras explained. “If you don’t have any way of making this model transparent, so you can audit it, so you can begin augmenting it with human expertise to rectify any shortcomings with the data, catastrophes can happen.”
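In concrete terms, "audit the model, then augment it with human expertise" can be illustrated with a small sketch. Everything here is hypothetical: a toy dataset, a deliberately shallow scikit-learn decision tree whose learned rules can be printed and read, and an illustrative expert rule inside a decide() helper that patches a gap in the training data. It is not Temenos code.

```python
# Hypothetical sketch: train a transparent model, print its rules so a
# human can audit them, then add an explicit expert rule for a case the
# training data does not cover.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy applicant data: [monthly_income, existing_debt]
X = np.array([[5000, 200], [1200, 900], [3000, 2500], [4000, 100],
              [1500, 1400], [6000, 300], [2500, 2000], [3500, 500]])
y = np.array([1, 0, 0, 1, 0, 1, 0, 1])  # 1 = approve, 0 = decline

# A shallow tree is transparent enough to be read and audited line by line.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(model, feature_names=["monthly_income", "existing_debt"]))

def decide(applicant):
    """Model decision plus a human-authored rule patching a known data gap."""
    income, debt = applicant
    # Expert override: the toy training data contains no zero-income
    # applicants, so a domain expert adds an explicit, auditable rule.
    if income == 0:
        return 0, "declined: no verifiable income (expert rule)"
    label = int(model.predict([applicant])[0])
    return label, "model decision based on the printed rules above"

print(decide([2800, 600]))
```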

While we are beginning to learn that "data is the new oil" and that companies exploit our personal data for profit, some people don't seem to care. As long as they get good deals on eBay and stay up to date with old friends on Facebook, they couldn't care less.

That's a big mistake, Dr. Hagras warns. With only a little data, fraudsters can open a bank account in your name and ruin your credit score. Give them even more and they can essentially clone you digitally.

A transparent system will be designed to protect the public and will explain how their data is used. Auditors, regulators and end users must all be able to understand it clearly. Only then do people begin to have a say, Dr. Hagras said.

Temenos' Explainable AI platform does precisely that, Dr. Hagras said. Engineered for business users and data scientists, it provides a complete understanding of the rules that determine how the data is used. If a human identifies a weakness in the data, the system allows for that input, and users can see why particular actions are taken.

Applications in financial services include analyzing a user's cash flow to help them save for a large purchase. If an applicant is marginally rejected, the system can suggest ways to improve.
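The idea of telling a marginally rejected applicant what would change the outcome is, in general terms, a counterfactual explanation. Below is a minimal sketch of that idea under hypothetical assumptions: the scorecard weights, threshold and suggestion logic are invented for illustration and do not come from the Temenos platform.

```python
# Hypothetical sketch: a transparent scorecard plus a near-miss explanation
# that lists which unmet rules would flip a declined decision.

SCORECARD = {                     # points awarded per rule (illustrative)
    "income_over_3000": 30,
    "debt_under_1000": 25,
    "no_missed_payments": 45,
}
THRESHOLD = 70                    # approve at or above this score

def score(applicant):
    hits = {
        "income_over_3000": applicant["monthly_income"] >= 3000,
        "debt_under_1000": applicant["existing_debt"] < 1000,
        "no_missed_payments": applicant["missed_payments"] == 0,
    }
    total = sum(SCORECARD[rule] for rule, hit in hits.items() if hit)
    return total, hits

def explain(applicant):
    total, hits = score(applicant)
    if total >= THRESHOLD:
        return f"approved with score {total}"
    # For a rejection, list the rules whose satisfaction would flip the outcome.
    missed = sorted((SCORECARD[r], r) for r, hit in hits.items() if not hit)
    suggestions = [r for pts, r in missed if total + pts >= THRESHOLD]
    return (f"declined with score {total}; satisfying any of "
            f"{suggestions} would change the decision")

print(explain({"monthly_income": 2500, "existing_debt": 1500,
               "missed_payments": 0}))
```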

“You get a score, you get a complete understanding,” Dr. Hagras said.