Delivering financial ethics in the age of AI

Ethics should never be an afterthought when deploying artificial intelligence (AI) systems in finance. On the contrary, embracing a culture of ethical conduct among AI development teams produces systems that are more effective in the long run, keeping customers and shareholders confident.

The AI gold rush has been underway in the financial services industry for the past few years. According to the UK Financial Conduct Authority and the Bank of England, two-thirds of Britain’s financial services firms already use some form of machine learning, and just over half have an R&D strategy to expand their AI capabilities further. That strategy needs a well-developed ethical component.

Financial regulators can hardly be expected to keep up with an industry that devises novel techniques to exploit the power of automation by the day, such as using AI to optimise insurance risk modelling, authenticate transactions or automate pre-trade analysis. The U.S. Securities and Exchange Commission (SEC) uses an in-house natural language processing system to flag potential misconduct in filings, which the agency says makes its human auditors five times more effective at their jobs. That’s a huge advance, but it falls short of rethinking the agency’s broader mission in light of what’s possible with AI. As the agency itself notes, it will take time to replace old-fashioned requirements such as the paper filing of some reports.

Regulation happens at the speed of the Administrative Procedure Act, which was intended to slow the pace of governance so that the public – and sometimes the courts – would have an opportunity to weigh in. Innovation, on the other hand, doesn’t wait on the schedule of the printing press at the Federal Register.

That means it’s up to fintech companies to do what’s necessary to implement common-sense ethical guidelines when deploying automated solutions. Fortunately, most of the hard work on that front has already been done. The Institute of Electrical and Electronics Engineers (IEEE) has created principles for what it calls “Ethically Aligned Design”, which are generally applicable across most industries. These are eight concepts that, if followed, ought to keep financial companies that implement AI solutions out of trouble.

Briefly, the Ethically Aligned Design principles are:

– Create systems that respect human rights.
– Make increased human well-being the measure of success in AI development.
– Give individuals the right to access and share their own data.
– Prove the AI system is effective.
– Be sure you can document and explain the reasoning behind each of an autonomous system’s decisions.
– Make sure accountability is built into the system so that people can be held responsible for mistakes.
– Build safeguards to prevent misuse.
– Make sure operators of AI systems are properly trained to use them.

They’re not supposed to be deep insights. Rather, they serve as critical reminders about what must be done to keep AI projects from going down a dark path. And these ideas take on particular significance for an industry that holds the life savings of millions of people in its hands, with all of the privacy and security implications that entails.

The good news is that there’s plenty of time for the industry to get this right. Despite widespread use of various forms of AI technology in finance, we’re still in the early days of realising the full potential of autonomous systems. If an early system happened to be designed without ethics in mind, it’s probably due for replacement anyway: the speed of processing AI algorithms in hardware has quadrupled in less than two years, while software capabilities are expanding just as fast. The best time to validate that ethical concerns have been fully addressed is before any autonomous system is deployed. That’s how problems are avoided.

Shedding light on the black box

The classic ethical dilemma applicable to most forms of machine learning is known as the “black box” problem. An algorithm “learns” by scanning datasets and adjusting its internal parameters until it can detect patterns with the required level of accuracy. Such algorithms can be used by humans who have no idea how the machine arrived at its answers. It just works.
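
To make the dilemma concrete, here is a minimal sketch using scikit-learn, with synthetic data and hypothetical feature names. The model can flag a transaction, but nothing in its output explains why; even feature importances only rank inputs globally, not per decision.

```python
# A minimal sketch of the "black box" problem using scikit-learn.
# Feature names and data are hypothetical, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # columns: amount, hour, distance, merchant_risk
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000)) > 1

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
flagged = model.predict(X[:1])[0]  # the model flags (or clears) a transaction...
# ...but nothing in `flagged` says why. Feature importances offer one crude
# window into the box, and even they rank inputs globally, not per decision.
for name, importance in zip(["amount", "hour", "distance", "merchant_risk"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```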

But in finance, such a lack of transparency can create big problems. Why did the algorithm flag this credit card purchase as suspicious? Customers don’t appreciate the inconvenience of having legitimate transactions declined. Or, far worse, an unchecked algorithm could inadvertently learn race-based patterns in mortgage lending, raising not just ethical but also serious legal issues.

One of the most effective ways to minimise the chance of this happening is to prefer augmented intelligence solutions over simple machine learning options. These solutions automate the collection of information and provide expert-system analysis, with recommendations presented to human operators who are entrusted with the judgment call about what to do. This arrangement addresses the IEEE concerns about transparency and agency because the human remains in control, can explain the rationale behind each decision, and can be held to account for that conduct.
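
In code, the pattern reduces to a simple contract: the system recommends and explains, the human decides, and every decision is logged. A minimal sketch follows; all class and field names are illustrative assumptions, not any real product’s API.

```python
# Minimal human-in-the-loop sketch: the system recommends, a person decides,
# and every decision is logged with its rationale (all names illustrative).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    action: str        # e.g. "decline_transaction"
    rationale: str     # plain-language explanation shown to the operator
    confidence: float  # model confidence, surfaced rather than hidden

@dataclass
class AuditRecord:
    recommendation: Recommendation
    operator: str
    approved: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log = []  # in practice this would be durable, append-only storage

def decide(rec: Recommendation, operator: str, approved: bool) -> bool:
    """The human makes the call; the system records who decided what, and why."""
    audit_log.append(AuditRecord(rec, operator, approved))
    return approved

rec = Recommendation(action="decline_transaction",
                     rationale="Amount is 8x the customer's 90-day average",
                     confidence=0.91)
decide(rec, operator="analyst_42", approved=False)  # operator overrides the model
```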

The ethical landmines with AI

While augmented intelligence represents the most effective approach to ensuring ethics in AI, that’s just one of the tools in the toolbox. Others will still be needed.

Perhaps the industry’s most widespread form of automation is algorithmic trading. These systems are so advanced that high-frequency traders found value in the minuscule difference between the time it takes a beam of light to travel down a fibre-optic data cable and the time it takes an electromagnetic wave to bounce from one microwave tower to the next. An advantage of a few milliseconds means beating the competition, yet no human could possibly act on such an imperceptible difference.
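
The arithmetic behind that edge is easy to sketch. Below is a back-of-the-envelope calculation in Python; the route length, refractive index and path overhead are assumed round numbers for illustration, not figures from the article.

```python
# Back-of-the-envelope latency sketch (all figures assumed for illustration):
# light in fibre travels at roughly two-thirds of c, while microwave links
# move at nearly c through air but follow a slightly longer tower-hop path.
C = 299_792.458                             # speed of light in vacuum, km/s
route_km = 1_200                            # assumed route, roughly Chicago to New York
fibre_ms = route_km * 1.47 / C * 1_000      # glass refractive index ~1.47
microwave_ms = route_km * 1.05 / C * 1_000  # ~5% zig-zag path overhead
print(f"fibre:     {fibre_ms:.2f} ms one way")
print(f"microwave: {microwave_ms:.2f} ms one way")
print(f"edge:      {fibre_ms - microwave_ms:.2f} ms")
```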

The danger of uncontrolled algorithms is obvious. According to a disputed SEC report, a badly coded routine at a single trading firm triggered a trillion-dollar “flash crash” in 2010. The algorithm was set up to sell E-mini S&P 500 futures contracts on the Chicago Mercantile Exchange (CME) at a pace keyed to trading volume rather than price. On an already bad day for the market, the algorithm began selling thousands of these popular contracts, which were instantly snapped up by other high-frequency traders. Within minutes, algorithms were automatically selling those positions back and forth among themselves, and the increased trading volume triggered yet more selling. Over just 20 minutes, the original firm offloaded $4.1 billion worth of E-mini contracts.

With contracts flooding the market far beyond demand, prices plunged and the decline spread to the equities markets. A trading halt on the CME stopped the downward spiral that by then had dragged the Dow down nearly 1,000 points. This gave humans the breathing space needed to assess the situation and allowed the markets to recover. It was a flash crash that could otherwise have been a collapse.
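
A toy simulation illustrates how a volume-pegged seller can feed on itself; every number below is made up for illustration and none is taken from the SEC report.

```python
# Toy sketch of the volume-feedback loop (all numbers illustrative):
# a seller pegs its rate to recent volume, but "hot potato" reselling by
# other algorithms inflates volume, which in turn triggers more selling.
volume, inventory, participation = 10_000, 75_000, 0.09
for minute in range(1, 11):
    sold = min(inventory, int(volume * participation))
    inventory -= sold
    volume += sold * 4  # each contract re-trades several times per minute
    print(f"min {minute:2d}: sold {sold:6,d}  volume {volume:9,d}  remaining {inventory:6,d}")
```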

A few years later we learned there was more to the story. This January, a rogue British trader was sentenced to house arrest for using trading software to “spoof” the market in a way that intentionally created artificial volatility during the flash crash. He applied some $200 million in selling pressure by placing and cancelling orders to drive down the E-mini price, triggering the other algorithms that produced the downward spiral. The misconduct earned the trader $12.9 million, an amount later seized along with an additional $25 million fine.

A dynamic approach to financial ethics

Ethics needs to apply at the speed of light to discourage bad actors from using AI to further their schemes. Since it makes sense to regularly refresh AI systems anyway, that presents a great opportunity for businesses to audit their systems for ethical operation or rebuild them from the ground up with the proper safeguards in place.

IEEE’s design principles tell AI developers that they need to think about more than just writing a great algorithm to increase alpha. They need to consider scenarios in which a system could abuse the privacy of customers or be exploited to make a little extra cash on the side. It’s tempting to think that once a system has been built, the job is done. That doesn’t work with AI. Constant testing, validation and monitoring by humans have to be part of the culture of businesses that want to take part in the AI revolution.

But humans don’t need to deal with this problem on their own. Automated tools are ideally suited to assist humans in monitoring what other algorithms are up to. They can tirelessly watch for patterns of abuse and report on investment performance, alerting humans at the first sign of anomalies. These AI systems can also track and enforce compliance.
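
As a minimal sketch of what such a watchdog might look like, the monitor below tracks each trader’s order-cancel ratio and alerts when it spikes far above the rolling baseline. The signal, window size and threshold are all assumptions for illustration: a crude screen for spoofing-like behaviour, not a production surveillance system.

```python
# Minimal anomaly-monitor sketch (signal, window and threshold are assumptions):
# watch a trader's order-cancel ratio and alert when it drifts far from the
# rolling baseline, as a crude screen for spoofing-like behaviour.
from collections import deque
from statistics import mean, stdev

class CancelRatioMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, placed: int, cancelled: int) -> bool:
        """Return True when the latest cancel ratio is anomalous vs the baseline."""
        ratio = cancelled / max(placed, 1)
        alert = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            alert = sigma > 0 and (ratio - mu) / sigma > self.z_threshold
        self.history.append(ratio)
        return alert  # True means: escalate to a human reviewer

monitor = CancelRatioMonitor()
baseline = [(100, 10 + i % 5) for i in range(30)]  # normal cancel ratios ~0.10-0.14
for placed, cancelled in baseline + [(100, 96)]:   # then a spoofing-like burst
    if monitor.observe(placed, cancelled):
        print("anomaly: cancel ratio spiked, flag for human review")
```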

Because AI systems designed according to IEEE’s principles are tested and validated for effectiveness, they’re ultimately going to yield better results. That’s why ethics can’t be an afterthought if we’re going to see the AI revolution turn into the Industrial Revolution for finance.

Joseph Byrum is chief data scientist at Principal. Connect with him on Twitter @ByrumJoseph.
