FLASHBACK FRIDAY: Re-Thinking Pre-Trade
09.29.2017
It’s time for change.
The trade process has undergone significant change in the last several years. When evaluating it, brokers and now independent vendors are carefully helping the buy side dissect each piece to see where the best value, the highest alpha and the least risk lie. But one thing remains: the importance of performing a thorough pre-trade examination.
“Pre-trade checks, as always, are about understanding all the dynamics at play that drive price discovery for any given order, including liquidity characteristics of the instrument itself and similar influences at the sector and market level,” said Frank Freitas, Pluribus Labs CEO and formerly Global COO at Instinet. “The scope of information that goes into this understanding has increased steadily with time.”
Pluribus distills powerful predictive analytics from a variety of unstructured data sources. According to the firm, its solutions “add significant value to a variety of portfolio management and trading workflows.”
As an example, in its own research the firm sees real and systematic intraday linkages between conversations about securities on social platforms and the trading behavior of those securities. Back in 2005, social media was in its infancy, and the trading world largely ignored the medium and its platforms when it came to discussing trade ideas or companies. As a matter of fact, Twitter, one of the biggest social media platforms and one in heavy use by traders, wasn’t founded until March 2006.
“The challenge is to consume these disparate sources in a streamlined workflow that enables strategy selection and order entry in a timely, efficient manner,” Freitas said.
This article originally appeared in the September 2005 issue of Traders Magazine
Rethinking Pre-Trade: Brokers re-write the rules of pre-trade cost analysis
By Peter Chapman
Trade cost prediction doesn’t get much respect, but that isn’t stopping the Street’s equity shops from rolling out new models. This summer, three of the largest institutional brokers launched efforts to bring the nascent science of pre-trade transaction cost analysis to their buyside customers. Credit Suisse First Boston, Piper Jaffray and Lehman Brothers are all debuting new systems and/or integrating their technology with buyside order management systems.
None of the three claims to have come up with a methodology that forecasts a trade’s cost with 100 percent accuracy, but all say they have improved upon the status quo.
“The general impression in the industry is that the practice of estimating transaction costs is not very worthwhile,” says Piper exec David Mortimer, “so we do it differently.”
Execs at CSFB echo Mortimer’s comments. “Most of the models I’ve seen are not based on any scientific research,” comments Merrell Hora, a quant in CSFB’s advanced execution services group. “They are just assumptions that are convenient for computation. It’s garbage in; garbage out. That approach gives you the results we hear about.”
Experts put those results in the 11 percent to 12 percent range. That means a pre-trade cost estimate is accurate no more than 12 percent of the time.
“Figuring out transaction cost with [market] movement is the equivalent of figuring out price,” notes trade cost maven Wayne Wagner of Plexus Group. “And if you can figure out price, there are more lucrative ways of putting that idea to work.” In other words, if one can accurately forecast the expected cost of a trade, he or she could earn more trading for a hedge fund than servicing clients.
The level of accuracy generally improves when estimating cost for an entire basket of trades, most sources agree. That’s because the law of averages is likely to bail out an intrepid forecaster. Among a basket of 200 names, 100 names may trade at prices worse than expected, but the other 100 might do better. The overall cost may not vary excessively from the forecast.
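The averaging effect described above can be illustrated with a quick simulation. This is a toy sketch, not any broker’s model: the 50-basis-point per-name error, the trial count and the function name are all illustrative assumptions.

```python
import random
import statistics

random.seed(42)

def basket_error_sd(n_names: int, per_name_sd: float = 50.0,
                    trials: int = 2000) -> float:
    """Simulate per-name cost surprises (in basis points) and return the
    standard deviation of the basket-average surprise across many trials."""
    basket_means = [
        statistics.fmean(random.gauss(0.0, per_name_sd) for _ in range(n_names))
        for _ in range(trials)
    ]
    return statistics.stdev(basket_means)

# A single name keeps the full ~50 bps of forecast uncertainty; averaging
# across 200 names shrinks it by roughly sqrt(200), to about 3.5 bps.
print(round(basket_error_sd(1), 1))
print(round(basket_error_sd(200), 1))
```

The winners and losers in the basket offset one another, which is why the basket-level estimate looks far more accurate than any single-name estimate can.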
Estimating costs for baskets is not new. The practice developed on the blind-bid desk of Salomon Brothers with the creation of its StockFacts Pro cost and risk forecasting product. Estimating costs for single stocks, however, is now the focus of much industry research. The effort coincides with the growth in algorithmic trading. Brokers are pitching pre-trade for single stocks to the buyside as a way to help them pick the appropriate algorithm.
Both Piper and CSFB are launching new products. Lehman came out with a single-stock pre-trade offering earlier this year, and recently announced its availability in the Bloomberg terminal. Most of the larger houses now include pre-trade cost analysis among their services for money managers.
Below are some highlights of the brokers’ efforts.
Credit Suisse First Boston
For Merrell Hora, the architect of CSFB’s new ESP cost estimator, the biggest challenge in predicting trading costs is isolating the effect that a specific trade has on the market. The slippage, or market impact, incurred during a trade may not be wholly attributable to that trade. Other trades are likely responsible for some of the movement of the stock during the time period in question.
“Execution performance data reflects everything that is going into the market,” Hora notes.
Therefore, according to Hora, to determine just how much slippage a particular trade is responsible for, an analyst needs to examine that trade in a vacuum. To do so, he must comb historical data to find a similar trade, strip it out of the market activity and calculate how much slippage would’ve occurred anyway. The difference between this imputed price and the actual fill is the slippage of the trade under examination.
“The system goes back through fill and tick data,” Hora notes, “and imputes what prices would’ve been had this particular trade not occurred.”
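A heavily simplified sketch of that counterfactual idea follows. It is a toy illustration, not CSFB’s system: the `Tick` structure and the `imputed_slippage` function are invented for this example. The idea is to strip the order’s own prints from the tape and compare its fills against the volume-weighted price of everything else.

```python
from dataclasses import dataclass

@dataclass
class Tick:
    price: float
    size: int
    ours: bool  # True if this print came from the order under examination

def imputed_slippage(tape: list[Tick]) -> float:
    """Compare the average fill price of our order against the
    volume-weighted price of everyone else's prints -- a crude stand-in
    for the price that is imputed once our trade is stripped out."""
    ours = [t for t in tape if t.ours]
    rest = [t for t in tape if not t.ours]
    our_avg = sum(t.price * t.size for t in ours) / sum(t.size for t in ours)
    imputed = sum(t.price * t.size for t in rest) / sum(t.size for t in rest)
    return our_avg - imputed  # positive = we paid up versus the imputed price

tape = [
    Tick(20.00, 500, False), Tick(20.01, 300, True),
    Tick(20.02, 200, True),  Tick(20.00, 400, False),
]
print(round(imputed_slippage(tape), 4))  # 0.014
```

The real system searches a historical database for a like trade and reconstructs the counterfactual path from tick data; the toy above only captures the subtraction at the heart of the idea.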
To locate a like trade, CSFB’s system plumbs the depths of a trade database belonging to its advanced execution services group. The four-year-old database contains millions of trades done through CSFB algorithms. Once the system locates the trade, it strips it from the mass of historic market data it maintains in a separate database. Like most large shops, CSFB imports tick-by-tick data from the nation’s exchanges hourly.
Hora recently presented his methodology at an academic conference called “Bayesian Inference in Econometrics and Statistics.”
Hora maintains that most pre-trade systems in use on the Street are based on spurious assumptions, unsupported by sound research. The methodologies are “tractable,” or easy to manage, Hora says. “It gives them nice answers that are convenient to compute. They generally agree with intuition, but are not based on any scientific research. They are very arbitrary.”
At the same time, though, the usefulness of the costs determined by CSFB’s process is limited. That’s because the process, in deriving its cost estimates, assumes a market existing in a state of “equilibrium,” according to Hora. “Everything is balanced,” explains Hora. “Every buyer is met by a seller. When we enter the market, we push it out of balance. We are the marginal trades.”
This methodology, by refraining from predicting costs based on actual market conditions, presents the trader with more useful information, Hora believes. Most other methods, using real market conditions, imply the ability to forecast prices. That, Hora believes, is impossible.
“We don’t want the product to imply we are forecasting prices,” Hora says, “because we really can’t. The reason our competitors’ products are not accurate is because they are forecasting something that is not very forecastable.”
Piper Jaffray
Like CSFB, Piper maintains that the methodologies used by its competitors to estimate trade costs are inherently flawed. The broker cites three supposedly common assumptions incorporated in others’ models that render them ineffective.
First, these models assume that market impact is the same for both buys and sells. Piper believes the two types of trades impact the market differently. Second, the models distinguish between a so-called temporary market impact and a permanent market impact, arbitrarily assigning percentages to the two components. Piper discounts the notion of temporary impact.
Third, the models assume market impact is proportional to the size of the order. Piper maintains market impact does increase with the size of an order, but the relationship is not linear. At some point, market impact levels off.
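That concave, leveling-off relationship is often captured with a square-root curve plus a cap. The sketch below is a generic form of that shape, not Piper’s proprietary model; the coefficient and cap values are made up for illustration.

```python
import math

def impact_bps(order_shares: float, adv_shares: float,
               coeff: float = 30.0, cap: float = 60.0) -> float:
    """Concave impact curve: cost grows with the square root of the order's
    share of average daily volume, then levels off at a cap. The coefficient
    and cap are illustrative, not calibrated to any real market."""
    participation = order_shares / adv_shares
    return min(coeff * math.sqrt(participation), cap)

# Doubling the order raises impact by ~41 percent, not 100 percent,
# and very large orders hit the cap rather than growing without bound.
for shares in (50_000, 100_000, 400_000):
    print(shares, round(impact_bps(shares, adv_shares=1_000_000), 1))
```

Under a linear model the 400,000-share order would cost eight times the 50,000-share order; under the concave form it costs less than three times as much.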
Piper’s rethinking of these assumptions makes its methodology significantly more accurate than the models of others, according to execs at the broker-dealer. Still, they concede the limitations of single-stock cost estimates. “A pre-trade estimate in a single name is a reference point,” notes exec Dave Mortimer. “That’s all the buyside uses it for.”
Piper built its system in conjunction with QSG, a trading cost consultancy and software developer based in the suburbs of Chicago. QSG was founded by former Donaldson, Lufkin & Jenrette quants John Wightkin and Tim Sargent. Sargent also worked at Salomon Brothers where he developed the well-known StockFacts Pro package.
The methodology supporting QSG’s post-trade cost measurement services is central to Piper’s pre-trade offering, according to Piper. QSG analyzes its customers’ trades tick by tick in an effort to properly allocate market impact costs. It attempts to determine how much of the movement in a stock was caused by the trade in question and how much was caused by someone else’s trade.
“Tick distribution is key,” notes Piper’s Scott Wilson of his firm’s pre-trade model. “We try to determine: What is the likelihood of each tick being higher or lower than the previous tick? All the other models are based on volatility. We use volatility, but it has a much lower weighting.”
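Counting tick-to-tick direction frequencies can be sketched in a few lines. This is a toy stand-in for the tick-distribution approach described above, not QSG’s model; the function name and sample prices are invented.

```python
def tick_direction_stats(prices: list[float]) -> dict[str, float]:
    """Estimate from a price series how often a tick prints higher, lower,
    or unchanged versus the previous tick."""
    ups = downs = flats = 0
    for prev, cur in zip(prices, prices[1:]):
        if cur > prev:
            ups += 1
        elif cur < prev:
            downs += 1
        else:
            flats += 1
    total = ups + downs + flats
    return {"up": ups / total, "down": downs / total, "flat": flats / total}

# Six tick-to-tick transitions: four up, one down, one unchanged.
prices = [20.00, 20.01, 20.01, 20.02, 20.01, 20.02, 20.03]
print(tick_direction_stats(prices))
```

A production model would condition these frequencies on order flow, spread and time of day; the point here is only that the raw inputs are per-tick transitions rather than a single volatility number.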
Mortimer adds: “We are trying to figure out some degree of direction. I wouldn’t call it a momentum model. But we are trying to look at every single tick.”
QSG’s database is a key reason Piper chose to partner with the consultant. QSG obtains trade data from several money managers and plan sponsors. The existence of the database allows Piper to compare its estimates with the actual trades on a daily basis. “That grounds our pre-trade offering in empirical analysis,” says Wilson. “It’s not just based on theory and volatility.”
Mortimer adds: “If we had that data ourselves, we probably would’ve done this on our own. That data is irreplaceable. Clients would not give us that data.”
Lehman Brothers
Lehman, relatively speaking, is an old hand at pre-trade cost estimation. The bulge-bracket player built a cost forecasting system for basket trades two years ago. It moved into the single-stock arena earlier this year.
Single stocks are a totally different ballgame, according to David Cushing, a Lehman managing director of equity analytics and algorithmic trading. The “variance,” or amount a forecast can be off, rises dramatically with single stocks.
“The forecast variance goes up as the number of names goes down,” Cushing points out. “With 300 names, for instance, the uncertainty is much lower. But that variance problem is inherent. There’s not much you can do about it.”
Cushing emphasizes, though, that there is still value in estimating trading costs for single stocks. By incorporating the discipline into his workflow, a trader is able to create a framework for pricing a trade. That’s especially true if a trader needs capital. “If it is being priced rationally,” Cushing explains, “it should be priced somewhere within the vicinity of that estimate.”
In building its single stock cost estimator, Lehman created a number of analytics it had not needed for basket trading. Unique to the single stock application are ways of representing liquidity as well as programs that try to identify unusual activity in the market.
The broker, for example, added a chart that shows trade volume unfolding over the course of a day and how it compares with historical patterns. The firm also constructed a ‘trade activity monitor.’ The program totes up the last ten minutes of volume and compares it to a typical ten-minute slice.
The program takes a similar look at the change in a stock’s price, or its ‘return.’ It compares the return of the past ten minutes with a typical ten-minute return. If the ratio is unusual, the trader is alerted.
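The monitor’s ratio check can be sketched in a few lines. This is an illustrative reconstruction, not Lehman’s code; the 3.0 alert threshold and the function names are assumed values.

```python
def activity_ratio(last_window: float, typical_window: float) -> float:
    """Ratio of the most recent ten-minute figure (volume or return)
    to a typical ten-minute slice; values far from 1.0 flag unusual
    activity."""
    return last_window / typical_window

def is_unusual(ratio: float, threshold: float = 3.0) -> bool:
    # Alert when activity runs well above, or well below, its usual pace.
    return ratio > threshold or ratio < 1.0 / threshold

# Last ten minutes traded 120,000 shares against a typical 30,000-share
# slice: four times normal pace, which trips the alert.
r = activity_ratio(120_000, 30_000)
print(r, is_unusual(r))  # 4.0 True
```

The same comparison applies to the ten-minute return: compute the ratio against a typical slice and alert when it falls outside the band.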
With its systems built, Lehman’s main thrust has been to push them out to its customers. To that end, it has been integrating its single-stock product with various buyside order management systems including those of Bloomberg and Charles River Development.