The Truth About Machine Learning

01.29.2019

By Kristina Fan, CEO and Founder, and Roy Lowrance, Chief Scientist, 7 Chord 

In capital markets, machine learning and AI hold tremendous promise and are becoming increasingly useful in predicting the behavior of financial instruments. However, every year many machine learning projects fail, some spectacularly.

What was the biggest AI failure of 2018? It may have been the erroneous FIFA World Cup predictions published by several prominent investment banks, or perhaps the poor performance of many prominent quant funds. But while those stories steal the headlines, many AI projects die quietly, for reasons that are quite human in nature.

For the past three years, 7 Chord has been building BondDroid™, a proprietary AI system that predicts prices of corporate bonds and produces trading signals for bond traders and investors. 7 Chord made many wrong turns and broke its own rules along the way, but eventually created an AI system that can truly help market participants.

These days, Wall Street firms trying to launch data science efforts struggle with culture shock and a scarcity of talent in the field.

The lessons we learned apply to any machine learning project, in finance or elsewhere, so we wanted to share our perspective. They center on three key decisions: when to bother with machine learning, how to manage a machine learning project without losing face, and whom to hire if you are building machine learning in-house.

When to Bother with Machine Learning?

Let’s face it: many organizations venture into machine learning because doing so makes them look innovative. But what if 80% of the value could have been extracted with a traditional statistical model? True, with machine learning you might get more accurate results, but how much are you willing to pay for the marginal benefit in cash and political capital? The latter will evaporate quickly if the marginal ROI is low.

In fact, traditional predictive models have long been an important enabler of trading in financial instruments. These models often have a mechanistic view at their core: the relationships among variables are modeled in terms of the processes hypothesized to have generated the observed data. The tools needed to build them include calculus, linear algebra, differential equations, probability, and statistics.

Supervised machine learning and mechanistic models share some machinery but differ in important ways. Often, the models implemented in practice are a combination of the two.

Mechanistic Model vs. Machine Learning Model

One of the most famous mechanistic frameworks is Black-Scholes-Merton (BSM), an options-pricing model still in widespread use today. Anyone who has taken a college-level financial engineering course knows that BSM rests on assumptions that do not hold in real life, including market completeness, continuous hedging, and an absence of transaction costs. Many researchers and practitioners wish for a method that doesn't make simplifying assumptions about the price-formation mechanism, and they have tried to build machine learning models that learn option pricing directly from data to overcome BSM's limitations.
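To make the contrast concrete, below is a minimal sketch of the BSM call-price formula in Python. It is illustrative rather than production pricing code; the inputs (spot, strike, risk-free rate, volatility, time to expiry) are the standard BSM parameters.

    from math import exp, log, sqrt
    from statistics import NormalDist

    def bsm_call_price(S: float, K: float, r: float, sigma: float, T: float) -> float:
        """Black-Scholes-Merton price of a European call option.

        Every relationship is specified upfront by the modeler:
        S = spot, K = strike, r = risk-free rate,
        sigma = volatility, T = years to expiry.
        """
        N = NormalDist().cdf  # standard normal CDF
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    # Example: at-the-money one-year call with a 2% rate and 20% volatility
    print(round(bsm_call_price(S=100.0, K=100.0, r=0.02, sigma=0.20, T=1.0), 2))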

With a mechanistic model, the designer must decide on the exact relationship between input and output variables upfront.

What needs to be specified upfront?                     Mechanistic Model   Supervised Machine Learning
Input Data                                              Yes                 Yes
Prediction Target                                       Yes                 Yes
Exact relationship between input and output variables   Specified           Learned
Functional form of the model                            Yes                 Yes
Optimization criteria                                   n/a                 Yes
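For contrast with the BSM sketch above, here is a minimal supervised-learning counterpart using scikit-learn. The feature set and the random placeholder data are hypothetical, standing in for real market data; the point is that the designer fixes only the inputs, the target, the model family, and the optimization criterion, while the input-output relationship itself is learned.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Hypothetical features: moneyness, years to expiry, rate, realized volatility.
    # Random placeholders stand in for market data, just to show the workflow.
    rng = np.random.default_rng(0)
    X = rng.uniform([0.8, 0.1, 0.0, 0.1], [1.2, 2.0, 0.05, 0.4], size=(1000, 4))
    y = rng.uniform(1.0, 20.0, size=1000)  # placeholder option prices

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The designer chooses the model family (gradient-boosted trees) and the
    # optimization criterion (squared error); the mapping itself is learned.
    model = GradientBoostingRegressor(loss="squared_error", random_state=0)
    model.fit(X_train, y_train)
    print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))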

The skill set required to construct a mechanistic or a machine learning model is fairly similar. Machine learning experts can construct mechanistic models and traditional quants can learn machine learning with a bit of effort.

The biggest difference, however, is philosophical.

A mechanistic model assumes that a human architect can produce a full quantitative representation of the process. Machine learning experts believe that a machine can sometimes outperform a human expert at identifying the salient relationships between features and targets.

So why is a human expert still needed to build a machine learning system? Human experts are essential because sometimes the data available to an architect doesn't fully represent economic reality or, worse, distorts it like a funhouse mirror.

Bond markets are a great example.

The Trade Reporting and Compliance Engine (TRACE) is a program developed by the National Association of Securities Dealers (NASD, now FINRA) for reporting over-the-counter (OTC) transactions in eligible fixed-income securities. Under TRACE, every trade in the bond market must be reported to FINRA within 15 minutes of execution.

TRACE is the consolidated tape of all executed transactions in the fragmented corporate bond market and, since its introduction in 2002, has transformed the space.

However, it is not a perfect dataset. Transactions are often reported out of sequence, canceled, or corrected, and prices are not normalized and can't be compared directly across different financial instruments. Most importantly, TRACE doesn't capture the repricing happening in the minds of market participants between occasional prints. A data scientist faced with such a dataset needs sufficient market knowledge to read between the lines and mitigate its shortcomings.
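As an illustration, one way such a feed might be tidied is sketched below. The record layout and field names are hypothetical simplifications, not FINRA's actual TRACE schema: cancels drop the referenced print, corrections replace it, and the survivors are restored to execution order.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Print:
        record_id: str
        exec_time: str                   # ISO-8601 execution timestamp
        price: float
        kind: str                        # "trade", "cancel", or "correction"
        refers_to: Optional[str] = None  # record_id being canceled/corrected

    def clean_tape(records: list) -> list:
        """Resolve cancels and corrections, then restore execution order.

        Simplified, hypothetical logic: a cancel drops the referenced
        print and a correction replaces it. Real TRACE processing has
        many more cases (reversals, as-of trades, late reports, ...).
        """
        tape = {}
        for rec in records:
            if rec.kind == "trade":
                tape[rec.record_id] = rec
            elif rec.kind == "cancel":
                tape.pop(rec.refers_to, None)
            elif rec.kind == "correction" and rec.refers_to in tape:
                tape[rec.refers_to] = rec
        return sorted(tape.values(), key=lambda r: r.exec_time)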

We have done this quite successfully by normalizing the data, introducing a data-anomaly-detection module, and supplementing TRACE with several other sources.
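The anomaly-detection step can start as simply as flagging prints that sit far from a rolling estimate of the recent price level. A minimal sketch, assuming a pandas series of prices; the window and three-standard-deviation threshold are illustrative choices, not 7 Chord's actual parameters.

    import pandas as pd

    def flag_anomalies(prices: pd.Series, window: int = 20, z_thresh: float = 3.0) -> pd.Series:
        """Flag prints whose rolling z-score exceeds the threshold.

        window and z_thresh are illustrative defaults, not tuned values.
        """
        rolling = prices.rolling(window, min_periods=window)
        z = (prices - rolling.mean()) / rolling.std()
        return z.abs() > z_thresh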

A lot has been written about the challenges of working with small or imperfect datasets. If you can't mitigate the data challenges, it is often better to go with a simple, well-understood, if imperfect, method than to derive a false sense of accomplishment from using a more advanced algorithm.

So mechanistic models, despite all their limitations, can be useful in markets where data is unavailable or so poor that informative features can't be constructed. They also provide a good starting point for machine learning development: along with interviews of human domain experts, they serve as a point of departure for the work and as a benchmark against which machine-learning results can be compared.
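In practice, that benchmark role can be operationalized as a simple out-of-sample comparison; a minimal sketch (the function and its inputs are hypothetical):

    from sklearn.metrics import mean_absolute_error

    def compare_to_benchmark(y_true, y_ml, y_mechanistic):
        """Out-of-sample error of an ML model vs. a mechanistic baseline.

        If the ML model can't beat the benchmark, the extra complexity
        may not be paying for itself.
        """
        return {
            "ml_mae": mean_absolute_error(y_true, y_ml),
            "benchmark_mae": mean_absolute_error(y_true, y_mechanistic),
        }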

If we still haven’t talked you out of it, stay tuned for our next post: How to manage a machine learning project without losing face?
