02.09.2012

Open-Source Software Addresses Big Data

Terry Flanagan

Cassandra and Hadoop promise orders of magnitude gains over relational databases.

Capital markets firms are experimenting with open-source database technology capable of capturing, storing, and analyzing enormous amounts of data.

Open-source data storage systems such as Hadoop and Cassandra are ideal for capital markets apps because they can process, store and trigger actions based on a high-volume real-time event stream, perform analytics on historical data, and update models directly into the application.

“A number of our customers are running projects to evaluate and test new tools such as Hadoop and Cassandra,” Roji Oommen, senior director, business development for financial services at Savvis, told Markets Media.

The explosion of Big Data has affected all industries, but capital markets have their own set of issues, such as the need to capture time-series data and merge it with real-time event-processing systems.

“As electronic trading becomes pervasive, and you’re collecting full depth tick data feeds, it’s a staggering amount of data,” said Oommen. “The data management issues associated with storing and transforming information are complex.”
Cassandra is an open-source distributed database management system designed to store, and provide very low-latency access to, large amounts of data.

The Cassandra data model is designed for distributed data on a very large scale.

In a relational database, data is stored in tables and the tables comprising an application are typically related to each other.
Cassandra is a column-oriented database, meaning that it stores its content by column rather than by row. This has advantages for heavy-duty number-crunching applications that involve complex queries.

“Columnar databases are faster at processing time-series data than relational databases,” said Oommen. “Cassandra is an open-source columnar database, and firms are testing its applicability to tick data management.”
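To make the row-versus-column distinction concrete, here is a minimal Python sketch (not Cassandra itself, and using hypothetical sample ticks) of the two layouts. A query over a single field, such as an average bid, touches every field of every record in the row layout, but scans only one contiguous column in the columnar layout, which is why the latter suits time-series analytics.

```python
# Row-oriented layout: one record per tick, all fields together
# (hypothetical sample data for illustration).
row_store = [
    {"ts": 1, "symbol": "ABC", "bid": 10.00, "ask": 10.02, "size": 500},
    {"ts": 2, "symbol": "ABC", "bid": 10.01, "ask": 10.03, "size": 200},
    {"ts": 3, "symbol": "ABC", "bid": 10.02, "ask": 10.04, "size": 300},
]

# Column-oriented layout: one contiguous list per field.
col_store = {
    "ts":     [1, 2, 3],
    "symbol": ["ABC", "ABC", "ABC"],
    "bid":    [10.00, 10.01, 10.02],
    "ask":    [10.02, 10.03, 10.04],
    "size":   [500, 200, 300],
}

# Average bid: the row layout iterates whole records;
# the columnar layout scans only the "bid" column.
avg_bid_rows = sum(t["bid"] for t in row_store) / len(row_store)
avg_bid_cols = sum(col_store["bid"]) / len(col_store["bid"])
assert avg_bid_rows == avg_bid_cols
```

In a real columnar store the per-column data is also compressed and laid out sequentially on disk, so the I/O savings compound well beyond this in-memory sketch.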

Hadoop is an open-source framework that allows for distributed processing of large data sets across clusters of computers. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

“Hadoop is a distributed computing framework developed by Yahoo,” said Oommen. “Hadoop distributes data and workload to commodity servers and can scale arbitrarily large, up to exabytes.”
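The programming model Hadoop implements is MapReduce: mappers emit key-value pairs, a shuffle phase groups them by key, and reducers aggregate each group. The sketch below mimics that flow in plain Python (no Hadoop API), using hypothetical trade records to compute traded volume per symbol; in Hadoop, the map and reduce phases would run in parallel across the cluster.

```python
# Minimal single-process sketch of the MapReduce model (illustrative
# only; hypothetical sample data, not the Hadoop API).
from collections import defaultdict

trades = [
    ("ABC", 500), ("XYZ", 200), ("ABC", 300), ("XYZ", 100), ("ABC", 200),
]

def map_phase(records):
    # Mappers emit (key, value) pairs; in Hadoop they run in
    # parallel over distributed input splits.
    for symbol, size in records:
        yield symbol, size

def reduce_phase(grouped):
    # Reducers aggregate the values for each key; in Hadoop each
    # reducer handles a partition of the key space.
    return {key: sum(values) for key, values in grouped.items()}

# Shuffle: group intermediate pairs by key.
grouped = defaultdict(list)
for key, value in map_phase(trades):
    grouped[key].append(value)

volumes = reduce_phase(grouped)
print(volumes)  # {'ABC': 1000, 'XYZ': 300}
```

Because each phase operates on independent chunks of data, the same program scales from one machine to thousands simply by spreading the splits and key partitions across more nodes.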
