09.12.2019
By Rob Daly

OPINION: AI Ethics Are Not Optional

Ethical artificial intelligence and machine learning may sound like an undergraduate elective, but it is a topic that financial institutions need to address urgently.

Firms are exposing themselves to a new type of risk as they either develop AI and machine-learning models or rely on the growing number of third-party model providers.

Do these new models harm a specific subset of the population or unintentionally use practices that market regulators have deemed illegal?

It can be hard to tell since AI and machine learning engines are good at dealing with black and white, but are horrible when it comes to shades of gray.

These engines are only as good as the data that feeds them.

Most of the data sets used to train instances of AI and machine learning are so incredibly large that individuals cannot comprehend everything that might be in those data sets. If some or all of the training data is the result of previously biased behavior, it shouldn’t be surprising that the resulting models include a portion of that biased behavior.

Making sure that AI and machine-learning engines color within the ethical lines is exceedingly tricky, because developers must encode an abstract concept of "fairness" in precise mathematical terms.
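To illustrate why this is hard: even one of the simpler fairness criteria, demographic parity, has to be pinned down in exact arithmetic, and different criteria can contradict one another. The following is a minimal, hypothetical sketch; the function names and data are illustrative, not drawn from any firm's model.

```python
# Hypothetical sketch of one mathematical definition of "fairness":
# demographic parity, which asks whether a model approves applicants
# at similar rates across population groups. Data is illustrative.

def approval_rate(decisions):
    """Fraction of decisions that are approvals (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates across groups.

    A gap near 0 satisfies this particular definition of fairness;
    other definitions (equalized odds, calibration, ...) measure
    different things and can disagree with it.
    """
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model outputs for two groups of loan applicants
decisions = {
    "group_a": [True, True, False, True],    # 75% approved
    "group_b": [True, False, False, False],  # 25% approved
}
print(demographic_parity_gap(decisions))  # 0.5, a large disparity
```

Choosing this metric over the dozens of alternatives is itself an ethical judgment, which is precisely the point: the math cannot be written until the firm decides what "fair" means.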

While working on a paper on this topic, Natalia Bailey, associate policy advisor for digital finance at the Institute of International Finance, found approximately 50 definitions of fairness, she said during a recent AI summit in Midtown Manhattan.

Firms may think they have time to sort this out, as they did with data privacy before various states enacted their data-privacy regimes and the EU rolled out its General Data Protection Regulation. They do not.

As Emma Maconick, a partner at the law firm Shearman & Sterling who spoke on the same panel, noted, the law is already ahead of the game regarding the liability a firm faces from a misbehaving AI. The well-established doctrine of vicarious liability, which addresses misbehaving children or employees, also covers supervised and unsupervised AI engines.

If financial institutions have not yet incorporated ethical analysis into their AI development process, they should start now.
