OPINION: AI Ethics Are Not Optional

09.12.2019

Ethical artificial intelligence and machine learning may sound like an undergraduate elective, but it is a topic that financial institutions need to address urgently.

Firms are exposing themselves to a new type of risk as they either develop AI and machine-learning models or rely on the growing number of third-party model providers.

Do these new models harm a specific subset of the population or unintentionally use practices that market regulators have deemed illegal?

It can be hard to tell since AI and machine learning engines are good at dealing with black and white, but are horrible when it comes to shades of gray.

These engines are only as good as the data that feeds them.

Most of the data sets used to train instances of AI and machine learning are so incredibly large that individuals cannot comprehend everything that might be in those data sets. If some or all of the training data is the result of previously biased behavior, it shouldn’t be surprising that the resulting models include a portion of that biased behavior.
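To see how this can happen, consider a minimal sketch (all data and feature names here are hypothetical, using scikit-learn) in which a model trained on historically biased approval decisions simply learns to reproduce them:

```python
# Hypothetical illustration: a model trained on biased lending history
# reproduces that bias. The data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # hypothetical protected attribute (0 or 1)
income = rng.normal(50, 10, n)      # identical income distribution for both groups

# Historical decisions penalized group 1 even at the same income level.
historical_approval = (income - 8 * group + rng.normal(0, 5, n)) > 50

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, historical_approval)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The gap in predicted approval rates mirrors the bias baked into the training labels.
```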

Making sure that AI and machine-learning engines color within the ethical lines, however, is exceedingly tricky: developers have to hardcode an abstract concept like “fairness” in precise mathematical terms.

While researching a paper on the topic, Natalia Bailey, associate policy advisor for digital finance at the Institute of International Finance, found approximately 50 definitions of fairness, she said during a recent AI summit in Midtown Manhattan.
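Two of the more common definitions show why the choice matters: demographic parity (equal approval rates across groups) and equal opportunity (equal true-positive rates across groups) are both precise, measurable notions of fairness, yet satisfying one does not guarantee satisfying the other. A rough sketch, with purely illustrative data and hypothetical function names:

```python
# Illustrative only: two competing formalizations of "fairness".
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                   # hypothetical protected attribute
y_true = rng.random(n) < (0.5 + 0.1 * group)    # who actually repays (synthetic)
y_pred = rng.random(n) < (0.55 + 0.05 * group)  # the model's approval decisions (synthetic)

def demographic_parity_gap(y_pred, group):
    """Gap in approval rates between groups, one formalization of fairness."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates between groups, a different formalization."""
    tpr = [y_pred[(group == g) & y_true].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

print("demographic parity gap:", round(demographic_parity_gap(y_pred, group), 3))
print("equal opportunity gap: ", round(equal_opportunity_gap(y_true, y_pred, group), 3))
```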

Firms may think they have some time to sort this out, just as they had with data privacy before various states enacted their data-privacy regimes and the EU rolled out its General Data Protection Regulation. They do not.

As Emma Maconick, a partner at the law firm Shearman & Sterling who spoke on the same panel, noted, the law is already ahead of the game when it comes to the liability a firm faces from a misbehaving AI. The well-trodden doctrine of vicarious liability, which covers misbehaving children or employees, also applies to supervised and unsupervised AI engines.

If financial institutions have not incorporated an ethical analysis as part of their AI development process, there is no time to wait to do so.
