Pillar #2 of Market Surveillance 2.0: Past, Present and Predictive Analysis

Terry Flanagan


In the second of a blog series outlining the Seven Pillars of Market Surveillance 2.0, we investigate Pillar #2, which emphasizes support for combined historical, real-time and predictive monitoring.

By Theo Hildyard, Software AG

Following my last blog outlining Pillar #1 of the Seven Pillars of Market Surveillance 2.0 – a convergent threat system that integrates previously siloed systems such as risk and surveillance – we continue to look into the foundations of the next generation of market surveillance and risk systems.

Called Market Surveillance 2.0, the next generation will act as a kind of crystal ball, able to look into the future to see the early warning signs of unwanted behaviors and to alert managers or trigger business rules to ward off crises. By spotting the patterns that could lead to fraud, market abuse or technical error, we may be able to prevent a repeat of recent financial markets scandals, such as the Libor fixing and the manipulation of foreign exchange benchmarks.

This is the goal of Market Surveillance 2.0 – to enable banks and regulators to identify anomalous behaviors before they impact the market. Pillar #2 involves using a combination of historical, real-time and predictive analysis tools to achieve this capability.

Historical analysis means you can find out about and analyze events after they have happened – maybe weeks or even months later. Real-time analysis means you find out about something as it happens, so you can act quickly to mitigate the consequences. Continuous predictive analysis means you can extrapolate from what has happened so far to predict that something might be about to happen – and prevent it!

Theo Hildyard, Software AG

For example, consider a trading algorithm that has gone “wild.” Under normal circumstances you monitor the algorithm’s operating parameters, which might include which instruments it trades, the size and frequency of its orders, its order-to-trade ratio and so on. The baseline for these parameters comes from historical analysis.
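As a sketch of what such a historical baseline might look like, the snippet below derives a simple “norm” (mean plus three standard deviations) for two parameters. All names and figures here are hypothetical illustrations, not part of any real surveillance product.

```python
from statistics import mean, stdev

# Hypothetical historical records for one algorithm; in practice these
# would be drawn from an order/trade archive over weeks or months.
order_sizes = [100, 120, 95, 110, 105, 130, 98, 115]
orders_per_minute = [12, 15, 11, 14, 13, 16, 12, 14]

def baseline(samples, k=3):
    """Return (mean, alert threshold): the mean plus k standard deviations."""
    m = mean(samples)
    return m, m + k * stdev(samples)

size_norm, size_limit = baseline(order_sizes)
rate_norm, rate_limit = baseline(orders_per_minute)
print(f"order size norm ~{size_norm:.0f}, alert above {size_limit:.0f}")
```

Anything the algorithm does beyond those thresholds is, by definition, outside its historical norm.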

Then, if you detect that the algorithm has suddenly started trading outside of the “norm” – e.g. placing a high volume of orders far more frequently than usual without pause (à la Knight Capital) – it might be time to block its orders from hitting the market. This is real-time analysis, and it means action can be taken before rogue orders go too far and impact the market. This can save your business, or your reputation.
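A minimal real-time check could look like the sliding-window guard below, which blocks orders once their frequency exceeds the historical norm. The class name and thresholds are my own illustration, not an actual surveillance API.

```python
import time
from collections import deque

class RateGuard:
    """Block orders once the algo exceeds its normal order rate.

    Illustrative sketch: max_orders per window would come from the
    historical analysis of the algorithm's usual behavior.
    """
    def __init__(self, max_orders, window_seconds):
        self.max_orders = max_orders
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have fallen outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_orders:
            return False  # block: order rate exceeds the norm
        self.timestamps.append(now)
        return True
```

A deque keeps the check cheap per order, which matters when the decision has to be made before the order reaches the market.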

If the trading algorithm dips in and out of the norm – behaving correctly most of the time but verging on abnormal more often than you deem safe – you can use predictive analytics to shut it down. In other words, you can predict that your algo might be verging on “going rogue” if it trades unusually high volumes for a microsecond, returns to normal, then trades too high a volume again.
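One simple way to encode that predictive rule is to count brief excursions above the volume norm within a recent window and flag a shutdown when they cluster. Everything below (names, limits) is a hypothetical sketch, not a description of any specific surveillance system.

```python
from collections import deque

class ExcursionMonitor:
    """Predictive check: flag an algo that brushes against its volume
    limit too often, even if each individual excursion is brief.

    Illustrative thresholds; real limits come from historical analysis.
    """
    def __init__(self, volume_limit, max_excursions, window):
        self.volume_limit = volume_limit
        self.max_excursions = max_excursions
        # Keep only the most recent `window` observations.
        self.recent = deque(maxlen=window)

    def observe(self, volume):
        self.recent.append(volume > self.volume_limit)
        # Predict "going rogue" when excursions cluster in the window.
        return sum(self.recent) > self.max_excursions  # True => shut down
```

Each excursion on its own looks harmless; it is the clustering of excursions that predicts the algo is about to spin out of control.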

The trick is to monitor all three types of data to ascertain whether your algo was, is or might be spinning out of control. An out-of-control algo can bankrupt a trading firm – and has.

Adding Pillar #2 to Pillar #1 gives you complete visibility across all of your siloed systems, data and processes, while monitoring for events that form patterns – in real time or compared with history – in order to predict problems.

