Open-Source Software Addresses Big Data

Terry Flanagan

Cassandra and Hadoop promise orders of magnitude gains over relational databases.

Capital markets firms are experimenting with open-source database technology capable of capturing, storing, and analyzing enormous amounts of data.

Open-source data storage systems such as Hadoop and Cassandra are well suited to capital markets applications because they can process, store, and trigger actions on a high-volume real-time event stream, perform analytics on historical data, and feed updated models directly back into the application.

“A number of our customers are running projects to evaluate and test new tools such as Hadoop and Cassandra,” Roji Oommen, senior director, business development for financial services at Savvis, told Markets Media.

The explosion of Big Data has affected all industries, but capital markets firms face their own set of issues, such as the need to capture time-series data and merge it with real-time event-processing systems.

“As electronic trading becomes pervasive, and you’re collecting full depth tick data feeds, it’s a staggering amount of data,” said Oommen. “The data management issues associated with storing and transforming information are complex.”

Cassandra is an open-source distributed database management system designed to store, and allow very low-latency access to, large amounts of data.

The Cassandra data model is designed for distributed data on a very large scale.

In a relational database, data is stored in tables and the tables comprising an application are typically related to each other.
Cassandra is a column-oriented database, meaning that it stores its content by column rather than by row. This has advantages for heavy number-crunching applications that involve complex queries.
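The row-versus-column distinction can be sketched in a few lines of plain Python (this is an illustration of the storage idea, not Cassandra itself): an aggregate query against a columnar layout scans one contiguous list instead of touching every field of every record.

```python
# The same tick records stored row-wise and column-wise (illustrative only).
rows = [
    {"symbol": "XYZ", "price": 10.0, "size": 100},
    {"symbol": "XYZ", "price": 10.5, "size": 200},
    {"symbol": "XYZ", "price": 11.0, "size": 150},
]

# Column-oriented layout: one list per field.
columns = {
    "symbol": [r["symbol"] for r in rows],
    "price":  [r["price"] for r in rows],
    "size":   [r["size"] for r in rows],
}

# Row store: every record is visited just to read one field.
avg_row = sum(r["price"] for r in rows) / len(rows)

# Column store: the query reads only the "price" column.
prices = columns["price"]
avg_col = sum(prices) / len(prices)

assert avg_row == avg_col  # same answer; the column scan touches less data
```

In a real columnar engine the per-column data is also stored contiguously on disk, which is what makes sequential scans over one field so much faster.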

“Columnar databases are faster for processing time-series data than relational databases,” said Oommen. “Cassandra is an open-source columnar database, and firms are testing its applicability to tick data management.”
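A hypothetical sketch of why this model suits tick data, assuming a Cassandra-style layout in which each partition key (here, symbol and date) maps to columns kept sorted by timestamp: a time-range read for one instrument becomes a single ordered scan of one partition. The function names and schema below are made up for illustration.

```python
from collections import defaultdict
from bisect import insort

# (symbol, date) -> list of (timestamp, price), kept sorted by timestamp.
store = defaultdict(list)

def insert_tick(symbol, date, ts, price):
    # Keep the partition's columns ordered as they arrive (possibly out of order).
    insort(store[(symbol, date)], (ts, price))

def range_query(symbol, date, t0, t1):
    # One partition, one ordered scan -- no cross-table joins needed.
    return [(ts, p) for ts, p in store[(symbol, date)] if t0 <= ts <= t1]

insert_tick("XYZ", "2012-07-20", 2, 10.5)
insert_tick("XYZ", "2012-07-20", 1, 10.0)
insert_tick("XYZ", "2012-07-20", 3, 11.0)

assert range_query("XYZ", "2012-07-20", 1, 2) == [(1, 10.0), (2, 10.5)]
```

In Cassandra itself the partitions would be distributed across many nodes, which is what lets the model scale to full-depth tick feeds.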

Hadoop is an open-source framework that allows for distributed processing of large data sets across clusters of computers. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

“Hadoop is a distributed computing framework developed by Yahoo,” said Oommen. “Hadoop distributes data and workload to commodity servers and can scale arbitrarily large, up to exabytes.”
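The distribute-and-combine pattern Hadoop popularized, MapReduce, can be sketched in single-process Python (a toy illustration, not the Hadoop API): each data chunk is mapped independently, then the partial results are reduced into a total. On a real cluster each chunk would live on a different commodity server.

```python
from collections import Counter
from functools import reduce

# Toy input: order messages split into chunks, as if spread across nodes.
chunks = [
    ["buy XYZ", "sell XYZ"],
    ["buy ABC", "buy XYZ"],
]

def map_chunk(chunk):
    # "Map" phase: each node counts symbols in its own chunk.
    return Counter(order.split()[1] for order in chunk)

def reduce_counts(a, b):
    # "Reduce" phase: merge the partial counts.
    return a + b

totals = reduce(reduce_counts, map(map_chunk, chunks))
assert totals["XYZ"] == 3 and totals["ABC"] == 1
```

Because the map phase never looks outside its own chunk, the same program scales from one machine to thousands simply by adding nodes.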
