FPGAs Grow Beyond Low-Latency Roots
The field-programmable gate array (FPGA), a favorite tool of high-frequency and latency-arbitrage traders, is growing up.
Over the past few years, improved performance and easier programming have moved the once-niche technology into new roles in financial firms’ data centers.
Unlike central processing units, whose performance growth has remained relatively flat over the past five years, FPGAs are only now beginning to benefit from a Moore’s Law-style doubling of performance with each generation, according to Arnaud Derrase, founder and CEO of Enyx and a panel participant at the STAC Summit in Midtown Manhattan on Tuesday.
“If an FPGA chip runs at 250 MHz today, the next generation certainly will run at 500 MHz or more,” he explained. “That means you could lower your latency just by changing a chip.”
Derrase sees demand for FPGAs only increasing as various industries begin replacing their 10 Gbps Ethernet networks with fatter 25 Gbps Ethernet pipes.
“Only FPGAs are capable of ingesting all of that data before sending it on to the CPU,” he added.
At the same time, programmers are finding it easier to write code for FPGAs using high-level languages such as the Open Computing Language (OpenCL), which lets code run across CPUs, FPGAs, graphics processing units, and digital signal processors, than with the obscure register-transfer languages historically used to program FPGAs.
Such convenience, however, comes with a performance hit compared to programming to the bare metal, according to fellow panelist David Snowdon, founder and co-CTO at Metamako. “But this is nothing new in the programming world.”
In the meantime, developers are leveraging FPGAs’ small footprint and deterministic processing for applications beyond trading that may be difficult to implement solely in software.
Snowdon cites time stamping as a prime example. “It has to be deterministic, but it doesn’t have to be low latency,” he said. “Once a packet is time stamped, it must make its way through the rest of the packet-capture system and then on to an aggregation system.”
He also knows of one unnamed consultant who was able to consolidate 20 FIX engines running on separate hardware down to a handful of FPGAs.
Even in the order-execution space, traders and developers need to get away from their fixation on capturing the last 50 to 100 nanoseconds of latency, added Olivier Baetz, COO of NovaSparks.
“Yes, when arbitraging between two highly correlated assets, those strategies need to capture that level of latency,” he said. “But on the other end of the spectrum, there are strategies that trade thousands of instruments across markets and even asset classes. For those, you need a lot of PhD mathematics, but latency is not as important. Then there is everything in between.”
Featured image by Altera Corporation/Wikimedia Commons under Creative Commons