Focus on Reference Data
Amid pressure from regulators and clients, capital and commodity-market participants move toward greater clarity on counterparty exposure
Capital markets firms face thorny challenges in managing reference data, a catch-all term denoting the counterparty names and identifiers that are needed to monitor a firm’s risk exposure.
“It’s very difficult to properly define a strategy, at least in the case of the front office,” said Leonid Frants, president of OneMarketData, which provides tick-data, analytics, and complex-event-processing products. “One needs to understand a lot of different businesses to do that, and such knowledge is currently not centralized in any one business unit, not to mention IT groups responsible for building data solutions.”
Reference-data management is fundamentally about coalescing information that may lie scattered about an enterprise into a logical whole. Inconsistent, incomplete or inaccurate reference data can undermine straight-through processing, the automated handling of transactions from execution through settlement without manual intervention.
“A large portion of a trade record is composed of reference data, and a significant number of transaction breaks are caused by poor-quality reference data,” said Joseph Santangelo, principal consultant at Axis Technology, an IT consultancy for data management and security. “The costs to repair trades and correct mismatches are significant, and increase as errors pass through front- to middle- to back-office systems.”
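The breaks Santangelo describes surface when two systems carry different reference data for the same trade. A minimal sketch of that reconciliation check, with hypothetical field names and trade records:

```python
# Illustrative sketch: how a reference-data mismatch surfaces as a
# trade break during reconciliation. All field names, trade IDs and
# identifier values below are invented for illustration.

def find_breaks(front_office, back_office):
    """Compare two systems' views of the same trades, keyed by trade ID,
    and report any fields whose reference data disagrees."""
    breaks = []
    for trade_id, fo_rec in front_office.items():
        bo_rec = back_office.get(trade_id)
        if bo_rec is None:
            breaks.append((trade_id, "missing in back office"))
            continue
        for field in ("counterparty_id", "security_id"):
            if fo_rec.get(field) != bo_rec.get(field):
                breaks.append((trade_id, f"{field} mismatch: "
                               f"{fo_rec.get(field)!r} vs {bo_rec.get(field)!r}"))
    return breaks

fo = {"T1": {"counterparty_id": "CP-001", "security_id": "XS123"},
      "T2": {"counterparty_id": "CP-002", "security_id": "XS456"}}
bo = {"T1": {"counterparty_id": "CP-001", "security_id": "XS123"},
      "T2": {"counterparty_id": "CP-2",   "security_id": "XS456"}}

print(find_breaks(fo, bo))
```

Here the two systems disagree on trade T2's counterparty identifier, so the trade breaks and must be repaired manually, which is where the costs accumulate.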
Managing reference data efficiently and cost-effectively has become especially vital for trading and investment firms amid much-faster trading speeds and the drive for increased transparency in the wake of the 2008-2009 financial crisis. Put simply, market participants need to be confident they will know their exposures if a big problem arises.
“Banks are engaged in a never-ending quest to improve the accuracy and timeliness of their risk-management function,” said Brian Okupski, managing director and product head of reference data at Markit. “That’s why they’re embracing industry-wide projects around counterparty data, such as legal entity identifiers.”
Markit has created reference databases for credit derivatives and syndicated loans, and is extending that to other asset classes. “We are expanding our capabilities across the spectrum of instrument and entity reference data,” said Okupski. “We are now working to support the entire fixed-income marketplace, not just reference obligations of traded entities, but the entire capital structure of that issuer.”
Markit’s Markit Entity Identifiers (MEIs) are designed to ensure that information transmitted is linked to the correct syndicated loan and delivered to the right market participant. “The primary objective of MEIs is to provide global coverage of entities in the loan market and their relationships,” said Okupski.
Markit offers a system for the identification of credit-derivative long legal names based on its Reference Entity Database (RED) service, which addresses legal and operational risks in the credit markets, including mismatched trades and incorrect internal aggregation of credit risk. “RED is a reference database for CDS, meaning that it provides a standard entity identifier for entities on CDS,” said Ed Chidsey, managing director and group head of data services at Markit.
An extensive legal verification process, drawing on company documentation from local jurisdictions, confirms a reference entity’s long legal name and its pairing with a reference obligation.
“Reference data is about tracking the relationships among entities, issuers, and counterparties,” said Chidsey. “It gets more complicated when you’re dealing with derivatives, because a CDS trade references both an entity and a bond, or reference obligation.”
Two sets of unique alphanumeric codes — ‘CUSIP-linked Entity MIP codes’ — are created in RED: one to identify the entity’s long legal name and one to establish the pair. The database currently contains 12,000 reference entities traded in the credit derivatives market, covering 6,000 reference obligations.
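The two-level code structure described above can be sketched as a pair of lookup tables: one keyed by the entity code, one by the pair code that binds an entity to a reference obligation. The codes, names and bond below are invented for illustration and do not reflect Markit's actual formats:

```python
# Hypothetical sketch of the RED-style two-code scheme: one code for
# the reference entity's long legal name, a second for the
# entity/reference-obligation pair. All values are invented.

entities = {
    "ABC123": {"legal_name": "Example Holdings PLC"},
}

pairs = {
    "ABC123XYZ": {"entity_code": "ABC123",
                  "reference_obligation": "EXAMPLE 5.25% 2030"},
}

def resolve_pair(pair_code):
    """Resolve a pair code to (long legal name, reference obligation)."""
    pair = pairs[pair_code]
    entity = entities[pair["entity_code"]]
    return entity["legal_name"], pair["reference_obligation"]

print(resolve_pair("ABC123XYZ"))
```

Because a CDS trade references both an entity and a bond, resolving the pair code in one step is what prevents the mismatched trades and mis-aggregated credit risk the article describes.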
Regulatory reform, competitive pressures and the need to optimize data across the enterprise are pushing custodians, service providers, consultants and securities firms to refine their reference-data management strategies.
“Reference data is one of the most active remediation programs in capital markets today,” said Fred Cohen, who leads the banking and financial services group at technology and outsourcing firm iGATE. “We have clients that operate multiple reference data environments. The risk is that they are both maintaining reference data across multiple silos, and when they try to aggregate their holdings, the pricings and components of these securities are different.”
According to a 2011 iGATE survey of financial executives, the key drivers for reference-data management are risk reduction, data quality and operating efficiency. Notably, 78% of respondents said different parts of their organizations were buying the same data for different target systems, and 85% of respondents did not have an intra-firm map of reference-data usage.
“In an ideal world, you would analyze each consuming system’s data requirements by attribute and have an enterprise-wide plan,” said Cohen. “The firm would then cleanse that data element one time and subsequently distribute it to all systems that required it.”
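The "cleanse once, distribute everywhere" pattern Cohen describes can be sketched in a few lines. The system names and the cleansing rules below are hypothetical; the point is that every consuming system receives the same golden copy rather than scrubbing its own:

```python
# Minimal sketch of a golden-copy workflow: a raw reference record is
# cleansed exactly once, then distributed to every consuming system.
# The cleansing rules and system names are invented for illustration.

def cleanse(record):
    """Normalise a raw reference record (trivial example rules:
    strip whitespace, upper-case every value)."""
    return {k: v.strip().upper() for k, v in record.items()}

raw = {"legal_name": "  acme capital llc ", "country": "us "}
golden = cleanse(raw)                  # cleansed once, centrally

consumers = {}
for system in ("risk", "settlement", "reporting"):
    consumers[system] = dict(golden)   # each system gets the same copy

print(consumers["risk"]["legal_name"])
```

In the siloed alternative the survey describes, each of the three systems would buy and scrub the data independently, and the copies would inevitably drift apart.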
Data management and reporting can be static or non-static, with reference data falling under the static umbrella. Reference data can be likened to a car’s vehicle identification number (VIN), while market data is more like the reading on the speedometer.
“Reference data differs from market data in the sense that it’s relatively static,” said Jeremy Eckenroth, director of program management at technology provider Sapient Global Markets. “The way that you store time-series data is different than the way you store static data.”
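Eckenroth's storage distinction can be made concrete: a reference record is a single keyed row that changes rarely, while market data is an append-only stream of timestamped observations. The identifiers and prices below are invented:

```python
# Illustrative contrast between static reference data and time-series
# market data. All identifiers and values are invented.

# Static reference data: one current record per security, keyed by ID
# (the "VIN" of the analogy above).
security_master = {
    "XS0000001": {"issuer": "Example Corp", "currency": "EUR"},
}

# Time-series market data: every observation is retained, in order
# (the "speedometer reading").
price_history = []
for ts, px in [("2012-05-01T10:00:00", 99.8),
               ("2012-05-01T10:00:01", 99.9)]:
    price_history.append({"id": "XS0000001", "time": ts, "price": px})

# A reference lookup returns one record; a market query returns a series.
print(security_master["XS0000001"]["currency"])
print(len(price_history))
```

The update patterns differ accordingly: the reference row is overwritten in place on the rare occasion it changes, while the price series only ever grows.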
The reference-data label encompasses security masters and client or counterparty masters. These two datasets are major targets for reference-data management, given the breadth of applications that depend on them. “The ‘static’ label is a misnomer because these core data sources are in reality very dynamic,” Cohen said.
Counterparty exposure is a vital consideration when entering into derivatives transactions, yet neither regulators nor market participants can currently measure it reliably.
“The first wave of derivatives targeted under the Dodd-Frank Act was credit derivatives, but regulations need to include all types of derivatives,” said Else Braathen, domain manager for risk management at SimCorp, which provides investment-management systems. “Reference data is the glue that ties all these things together.”
Although Dodd-Frank has not specifically identified all types of counterparty exposures, market participants say it’s inevitable that firms will be mandated to account for them.
“Dodd-Frank is an obligation, but more importantly, an opportunity for firms to comprehensively address exposures across the enterprise,” said Braathen. “Dodd-Frank should be a catalyst for change, prompting a move from risk management to risk ownership.”
One vexing aspect of reference-data management is a lack of standardization, as different trading venues, asset classes and technological systems may not be in sync, and regulatory standards may differ by regulator. “Data tends to be scattered across different systems, and with different formats (and) naming conventions,” said Alberto Corvo, managing principal of financial services at outsourcing firm eClerx. “Maintaining multiple hierarchies can have a detrimental effect on data quality.”
The key is a coherent, optimized strategy for the implementation, maintenance and distribution of data across the institution, suggested Steve Engdahl, senior vice president of product management at GoldenSource, which provides reference and market-data management products.
“Multiple hierarchies are a necessity but mismanaging them introduces big reconciliation problems throughout the organization,” said Engdahl. “This is another area where a central management framework can simplify operations, by supporting multiple hierarchies while retaining all the internal cross-referencing between them.”
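The cross-referencing Engdahl describes can be sketched as a central store that keeps one internal key per entity and maps each external identifier scheme onto it, so multiple hierarchies stay reconciled. The schemes, codes and the LEI value below are hypothetical:

```python
# Sketch of a central cross-reference framework: one internal key per
# entity, with every external identifier scheme mapped onto it.
# All scheme names and identifier values are invented.

xref = {
    # internal key -> identifiers in each external scheme
    "ENT-1": {"lei": "5493001KJTIIGC8Y1R12", "red": "ABC123"},
}

# Build reverse lookups per scheme from the single source of truth.
by_scheme = {}
for internal, ids in xref.items():
    for scheme, code in ids.items():
        by_scheme.setdefault(scheme, {})[code] = internal

def to_internal(scheme, code):
    """Map any external identifier back to the internal key."""
    return by_scheme[scheme][code]

print(to_internal("red", "ABC123"))
```

Because every reverse lookup is derived from the one master table, adding or correcting an identifier in a single place keeps all hierarchies consistent, which is the reconciliation benefit Engdahl points to.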