12.01.2015
By Rob Daly, Editor-at-Large

FPGAs Grow Beyond Low-Latency Roots


The field programmable gate array (FPGA), a favorite tool of high-frequency and latency-arbitrage traders, is growing up.

Over the past few years, improved performance and increased ease of programming FPGAs are moving the once niche technology into new roles in financial firms’ data centers.

Unlike central processing units (CPUs), whose performance growth has remained relatively flat over the past five years, FPGAs are only now starting to benefit from Moore's Law of doubling computing performance every generation, according to Arnaud Derrase, founder and CEO of Enyx and a panel participant at the STAC Summit in Midtown Manhattan on Tuesday.

“If an FPGA chip runs at 250 MHz today, the next generation certainly will run at 500 MHz or more,” he explained. “That means you could lower your latency just by changing a chip.”
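The arithmetic behind Derrase's claim is straightforward: each clock cycle at 250 MHz takes 4 ns, so doubling the clock halves the time a fixed pipeline of cycles needs. A minimal back-of-the-envelope sketch (not from the panel):

```python
def cycle_time_ns(clock_mhz: float) -> float:
    """Duration of one FPGA clock cycle in nanoseconds."""
    return 1_000.0 / clock_mhz

# Doubling the clock from 250 MHz to 500 MHz halves the time per cycle,
# so a design with the same cycle count finishes in half the wall-clock time.
print(cycle_time_ns(250))  # 4.0 ns per cycle
print(cycle_time_ns(500))  # 2.0 ns per cycle
```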

Derrase sees demand for FPGAs only increasing as various industries start to replace their 10 Gbps Ethernet networks with fatter 25 Gbps Ethernet pipes.

“Only FPGAs are capable of ingesting all of that data before sending it on to the CPU,” he added.
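To put the ingestion challenge in perspective, a rough worst-case frame-rate estimate (my illustration, not the panel's): a minimum 64-byte Ethernet frame carries about 20 bytes of additional on-wire overhead (preamble, start delimiter, and inter-frame gap), so a saturated 25 Gbps link can deliver roughly 37 million frames per second.

```python
def max_frames_per_sec(link_gbps: float, frame_bytes: int = 64,
                       overhead_bytes: int = 20) -> float:
    """Worst-case Ethernet frame rate on a saturated link: minimum-size
    frames plus preamble/start-delimiter/inter-frame-gap overhead."""
    bits_on_wire = (frame_bytes + overhead_bytes) * 8
    return link_gbps * 1e9 / bits_on_wire

print(f"{max_frames_per_sec(10):,.0f}")  # ~14.9 million frames/s at 10 Gbps
print(f"{max_frames_per_sec(25):,.0f}")  # ~37.2 million frames/s at 25 Gbps
```

At tens of millions of frames per second, per-packet software processing budgets shrink to tens of nanoseconds, which is the regime where hardware pre-processing becomes attractive.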

At the same time, programmers are finding it easier to write code for FPGAs using high-level languages like the Open Computing Language (OpenCL), which lets code run across CPUs, FPGAs, graphical processing units, and digital signal processors, than with the register-transfer level (RTL) languages historically used to program FPGAs.

Such convenience, however, comes with a performance hit compared to programming to the bare metal, according to fellow panelist David Snowdon, founder and co-CTO at Metamako. “But this is nothing new in the programming world.”

In the meantime, developers are leveraging FPGAs’ small footprint and deterministic processing for applications beyond trading that would be difficult to implement in software alone.

Snowdon cites time stamping as a prime example. “It has to be deterministic, but it doesn’t have to be low latency,” he said. “Once a packet is time stamped, it must make its way through the rest of the packet-capture system and then on to an aggregation system.”

He is also aware of one unnamed consultant who was able to consolidate 20 FIX engines running on separate hardware down to a handful of FPGAs.

Even in the order execution space, traders and developers need to get away from their fixation on capturing the last 50 to 100 nanoseconds of latency, added Olivier Baetz, the COO of NovaSparks.

“Yes, when arbitraging between two highly correlated assets, those strategies need to capture that level of latency,” he said. “But on the other end of the spectrum, there are strategies that trade thousands of instruments across markets and even asset classes. For those, you need a lot of PhD mathematics, but latency is not as important. Then there is everything in between.”

Featured image by Altera Corporation/Wikimedia Commons under Creative Commons
