09.12.2019

OPINION: AI Ethics Are Not Optional

Ethical artificial intelligence and machine learning may sound like an undergraduate elective, but it is a topic that financial institutions need to address urgently.

Firms are exposing themselves to a new type of risk as they either develop AI and machine-learning models or rely on the growing number of third-party model providers.

Do these new models harm a specific subset of the population or unintentionally use practices that market regulators have deemed illegal?

It can be hard to tell, since AI and machine-learning engines are good at dealing with black and white but horrible with shades of gray.

These engines are only as good as the data that feeds them.

Most of the data sets used to train AI and machine-learning models are so large that no individual can comprehend everything they contain. If some or all of the training data reflects previously biased behavior, it should not be surprising that the resulting models reproduce a portion of that bias.

Yet making sure that AI and machine-learning engines color within the ethical lines is exceedingly tricky when developers have to hardcode an abstract concept of “fairness” in precise mathematical terms.

When working on a paper regarding this topic, Natalia Bailey, associate policy advisor, digital finance at the Institute of International Finance, found approximately 50 definitions for fairness, she said during a recent AI summit in Midtown Manhattan.
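The multiplicity of fairness definitions is not just a cataloguing problem: different definitions can pass contradictory judgments on the same set of decisions. As a minimal sketch, using entirely hypothetical loan data, the following compares two widely cited definitions, demographic parity (equal approval rates across groups) and equal opportunity (equal approval rates among qualified applicants). The function names and the toy data are illustrative, not drawn from the article.

```python
# Minimal sketch with hypothetical data: two common fairness definitions
# can disagree about the same set of loan decisions.

def demographic_parity_gap(approved, group):
    """Absolute difference in approval rates between groups A and B."""
    def rate(g):
        return sum(a for a, grp in zip(approved, group) if grp == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(approved, qualified, group):
    """Absolute difference in approval rates among *qualified* applicants only."""
    def rate(g):
        hits = [a for a, q, grp in zip(approved, qualified, group) if grp == g and q]
        return sum(hits) / len(hits)
    return abs(rate("A") - rate("B"))

# Hypothetical applicants: 1 = approved / qualified, 0 = not.
approved  = [1, 1, 0, 0, 1, 0, 1, 0]
qualified = [1, 1, 0, 0, 1, 1, 1, 0]
group     = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(approved, group))               # 0.0 -> "fair"
print(equal_opportunity_gap(approved, qualified, group))     # ~0.33 -> "unfair"
```

On this toy data both groups are approved at the same overall rate, so demographic parity is satisfied, while qualified applicants in group B are approved less often than qualified applicants in group A, so equal opportunity is violated. A developer forced to optimize one metric may well degrade the other, which is precisely why "hardcoding fairness" is so contentious.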

Firms may think they have time to sort this out, as they did with data privacy before various states enacted their data-privacy regimes and the EU rolled out its General Data Protection Regulation. They do not.

As Emma Maconick, a partner at the law firm Shearman & Sterling who spoke on the same panel, noted, the law is already ahead of the game regarding the liability a firm faces from a misbehaving AI. The well-trodden laws that address misbehaving children or employees, known as vicarious liability, also cover supervised and unsupervised AI engines.

If financial institutions have not incorporated an ethical analysis as part of their AI development process, there is no time to wait to do so.
