Ticker Plants Deploy Multicast Messaging
Ticker plants are leveraging new messaging technology to deliver extremely low latency, enabling applications to publish data simultaneously to multiple subscriber applications.
NYSE Technologies, for example, has boosted its Data Fabric middleware with a new technology called Multiverb, which pushes the latest network hardware to the limits of its capabilities.
By allowing applications to bypass the operating system and access the Host Channel Adapter (i.e., the network card) directly, Multiverb reduces CPU overhead and improves latency.
“By leveraging the inherent capabilities of the network to replicate messages in hardware, the use of multicast allows a one-to-many message delivery mechanism where a publisher application sends data once to multiple interested subscriber applications,” Brian Doherty, global product manager of enterprise software at NYSE Technologies, told Markets Media.
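The one-to-many mechanism Doherty describes can be illustrated with a minimal simulation. This is a hedged sketch in plain Python, not Data Fabric's API: the `Subscriber` and `multicast_publish` names are invented for illustration, and the loop stands in for the hardware replication the network performs in the real system.

```python
# Hypothetical sketch: a publisher hands a message off once, and the
# "network" (modeled here as a simple loop) replicates it to every
# interested subscriber. Names are illustrative, not Data Fabric's API.

class Subscriber:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def deliver(self, message):
        self.inbox.append(message)

def multicast_publish(message, subscribers):
    """One-to-many delivery: the publisher sends once, regardless of
    how many subscribers are listening."""
    sends_by_publisher = 1          # a single send by the source
    for sub in subscribers:         # replication happens downstream,
        sub.deliver(message)        # in hardware on a real network
    return sends_by_publisher

subs = [Subscriber(f"app-{i}") for i in range(1000)]
sends = multicast_publish("tick:AAPL 187.10", subs)
```

The key property is that `sends_by_publisher` stays at one no matter how large the subscriber list grows; with point-to-point delivery it would grow with the number of subscribers.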
Legacy messaging systems rely on the operating system kernel and IP stack to publish and receive data.
“This approach dates back to the launch of the internet and there have been few enhancements, especially considering how much more powerful Intel CPUs are and the extra bandwidth available with next-generation networks,” said Doherty. “The net result is a bottleneck which results in poor latency, non-deterministic latency, low throughput and challenges with distributing to multiple consumers.”
Data Fabric, a scalable, low-latency middleware designed for high-performance applications in the capital markets community, links distributed applications and simplifies application design. It enables clients to fully utilize next-generation hardware and networks and to eliminate bottlenecks in existing solutions, Doherty said.
“Data Fabric improves competitiveness by allowing clients to adapt quickly to constant changes in the markets, and reduce costs by keeping internal resources focused on their core business,” he said.
Multiverb enables direct memory access multicast distribution, allowing clients to distribute data across hundreds of applications at single-digit microsecond transport latencies.
“Previously, Data Fabric utilized remote direct memory access messaging, which is point-to-point,” said Doherty. “By using multicast, a single message can be distributed to thousands of consumers with no additional load on the source.”
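The load difference Doherty points to is easy to make concrete. The sketch below is a hypothetical back-of-the-envelope model, not a measurement: with point-to-point (RDMA-style) delivery the source must transmit once per consumer, while with multicast the source transmits once in total.

```python
# Hypothetical model of publisher-side send load. Function names are
# illustrative; the point is the scaling, not the absolute numbers.

def point_to_point_sends(n_messages, n_consumers):
    """Point-to-point: the source transmits every message to every consumer."""
    return n_messages * n_consumers

def multicast_sends(n_messages, n_consumers):
    """Multicast: the source transmits each message once; the network
    hardware handles the fan-out to all consumers."""
    return n_messages

msgs, consumers = 1_000_000, 1_000
p2p = point_to_point_sends(msgs, consumers)  # grows linearly with consumers
mc = multicast_sends(msgs, consumers)        # constant in the consumer count
```

Under this model, adding consumers multiplies the point-to-point publisher's work but leaves the multicast publisher's work unchanged, which is what "no additional load on the source" amounts to.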
During a test, Data Fabric Multiverb was deployed onto 360 Intel Westmere-class servers with Mellanox ConnectX-2 cards and a QDR InfiniBand network. More than a million 200-byte messages per second were published to 1,000 clients with an average latency of 4.5 microseconds and a 99.99th-percentile latency of 19 microseconds.
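Reporting both an average and a 99.99th-percentile ("four nines") latency is standard practice, because the tail matters as much as the mean in market data delivery. The sketch below shows how those two figures are computed from per-message latency samples; the data here is synthetic, not the actual test measurements.

```python
# Sketch: computing average and tail-percentile latency from samples.
# The sample distribution is invented for illustration only.
import random

random.seed(7)
# Synthetic microsecond latencies: a tight core with a rare slow tail.
samples = [random.uniform(3.0, 6.0) for _ in range(100_000)]
samples += [random.uniform(10.0, 20.0) for _ in range(10)]  # tail outliers

def percentile(data, pct):
    """Nearest-rank percentile: the value at or below which roughly
    pct percent of the samples fall."""
    ordered = sorted(data)
    rank = int(round(pct / 100 * len(ordered))) - 1
    return ordered[max(0, min(len(ordered) - 1, rank))]

avg = sum(samples) / len(samples)
p9999 = percentile(samples, 99.99)
```

Note how the average is barely moved by ten slow messages in a hundred thousand, while the high percentiles expose them, which is why latency-sensitive vendors quote both numbers.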