Cables, blinking LEDs and even tubes - GAMING PCs

How these gadgets help in building upcoming models of our investment strategies

Article by Christian Gloor & Bendri Batti


Visitors to the Tom Capital office notice a row of fancy-looking PCs placed prominently on a shelf. Through glass windows, the innards are visible: cables, blinking LEDs, and even tubes? It turns out these machines, not connected to any monitor or keyboard, are in fact high-end, custom-built, water-cooled gaming PCs. This article shows how these gadgets help in building upcoming models of our investment strategies.

As a systematic hedge fund, we rely heavily on data processing. Our production system automatically fetches economic, third-party, historical, and live market data. Our algorithms combine these inputs into a market forecast, which is reflected in the positions we hold. The daily processing of that data in the production system is straightforward and computationally light. Our models even run continuously to display potential position changes based on current market prices.

The models are based on historical evidence. Naturally, we run a backtest for each new model to see how it would have performed in the past. We do that carefully to avoid overfitting and forward-looking bias: we perform a walk-forward backtest, which uses point-in-time data only (the data that was actually available at each moment in the past). Still, economists remember which indicators signalled past drawdowns early, and tend to assume that they will do so again in the future. That's why we let the machines do the indicator selection for us, as they can easily be told to forget the past.
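To make the walk-forward idea concrete, here is a minimal sketch in Python. The fit_model helper and the column name are hypothetical placeholders rather than our production code; the point is only that the training window always ends strictly before the date being forecast.

    import pandas as pd

    def walk_forward_backtest(data: pd.DataFrame, train_years: int = 5) -> pd.Series:
        """Walk forward through time, training only on data that was
        available at each point -- no forward-looking bias."""
        results = {}
        start = data.index[0] + pd.DateOffset(years=train_years)
        for date in data.index[data.index >= start]:
            train = data.loc[:date].iloc[:-1]   # strictly before `date`
            model = fit_model(train)            # hypothetical: selects indicators itself
            signal = float(model.predict(data.loc[[date]])[0])
            # The realized return *after* the signal is used for P&L
            # accounting only; it is never fed back into the model.
            results[date] = signal * data.loc[date, "next_period_return"]
        return pd.Series(results)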

This requires us to train potential new models on past data and to simulate various scenarios. Our researchers should be able to test new hypotheses at the push of a button, including changes to the input data, the computational model, and the machine learning methodologies used. The more responsive the simulation system is, the faster they can share and discuss results with their colleagues. The system must be easy to use with little programming experience, have access to our historical data storage, and be able to draw from our library of production and research software building blocks.
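In practice, "push of a button" means a hypothesis fits into a small, declarative configuration. The sketch below is illustrative only: the feature names, the target column, and the config dict are our own simplification, not the actual SDK. Swapping the machine learning methodology is a one-line change, and everything else stays fixed.

    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import Ridge

    # One hypothesis = one small config. A researcher edits this dict
    # and re-runs the notebook cell; nothing else changes.
    EXPERIMENT = {
        "features": ["yield_curve_slope", "pmi_surprise", "realized_vol"],
        "model": GradientBoostingRegressor(n_estimators=200),
        # "model": Ridge(alpha=1.0),   # swap the methodology with one line
    }

    def run_experiment(cfg, data):
        X, y = data[cfg["features"]], data["target"]
        cfg["model"].fit(X, y)
        return cfg["model"].score(X, y)   # in research: walk-forward, as above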

Equity curves from several variations of our fixed-income model

The classic solution is to build a cluster of dedicated server machines hosted in a data center. Most providers offer a flexible package: they look after the machines and guarantee availability and data backup. We built a test setup with one dedicated 40-core machine hosted at our existing data center (Intel Xeon CPU @ 2.50 GHz, 192 GB RAM). Such a server can be ordered and configured easily via the provider's web page. We also tested a more modern approach: a cloud-based setup where machines are virtual instances and offer even more flexibility, as nodes can be created dynamically when demand increases. Such solutions exist from tech giants like Google and Amazon, but smaller players are also starting to enter the stage. As a third option, we bought an off-the-shelf gaming PC (Intel Core i7-8700 CPU @ 3.20 GHz). Optimized for the latest 3D games, these machines should be able to handle our simulations as well, and they include a powerful graphics card (GPU), which is nice to have for accelerated machine learning. We installed the same software on each of the three solutions and tested them for two months in daily research activities.
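Our full test suite is internal, but a micro-benchmark along the following lines captures the quantity we cared about: the wall-clock time of a short, CPU-bound burst, which is what interactive research feels like. The matrix size and repeat count below are arbitrary choices.

    import time
    import numpy as np

    def burst_benchmark(n: int = 4000, repeats: int = 5) -> float:
        """Best-of-N timing of a dense matrix multiply -- a rough proxy
        for the short CPU-bound bursts typical of notebook work."""
        a, b = np.random.rand(n, n), np.random.rand(n, n)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            a @ b
            best = min(best, time.perf_counter() - t0)
        return best

    print(f"best of 5 runs: {burst_benchmark():.2f} s")   # lower is better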

The advantage of a generic cloud solution, as well as of a dedicated machine in an external data center, is that we don't have to maintain the infrastructure ourselves and that it remains reliably reachable even if our office is offline. We are linked to our geographically redundant data centers by gigabit fiber, so transferring data and results to and from the compute cluster would not be a problem. However, machines built for data centers are typically optimized for reliability, maintainability, and processing requests from hundreds of users simultaneously. The individual CPUs are relatively slow due to thermal and spatial constraints. We need quite the opposite: the results should be available as quickly as possible, while everything else matters much less. The cluster being unavailable for a few minutes or even losing intermediate data is not a problem, as long as it does not happen too often. Ironically, the in-house gaming PC, the least professional-looking option we tested, offered exactly this.
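Tolerating the occasional loss of intermediate data is also cheap to engineer around. Here is a sketch of the kind of checkpointing that makes a lost node a non-event; the simulate callable and the file name are placeholders, not our actual tooling.

    import pickle
    from pathlib import Path

    CHECKPOINT = Path("scenario_results.pkl")   # placeholder file name

    def run_scenarios(scenarios, simulate):
        """Resume from the last checkpoint after a crash or reboot;
        at most one scenario of work is ever lost."""
        done = pickle.loads(CHECKPOINT.read_bytes()) if CHECKPOINT.exists() else {}
        for s in scenarios:
            if s not in done:
                done[s] = simulate(s)           # placeholder simulation callable
                CHECKPOINT.write_bytes(pickle.dumps(done))
        return done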

The tests showed that only the gaming PC's response time was fast enough for interactive research work. What matters most is how long researchers have to stare at the monitor before an answer comes back. This is dictated by peak CPU performance, which modern CPUs deliver in short bursts. Also, even the slightest network delay is noticeable when working remotely. So we decided to invest in gaming PCs.

We found a Swiss company (https://www.jouleperformance.ch/) that specializes in building high-performance gaming PCs on request. The eye-catching photos of their work convinced us right away: the machines are built professionally and look fancy at the same time. Their competent team designed a machine optimized for our purpose. We went for mostly conservative choices for components such as the mainboard and RAM. We do not need to overclock the Intel Core i9-9900K CPU, which is already very fast at its 5.0 GHz peak clock frequency. Nor do we really need the ability to change the color of the glowing cables in software. We kept the water cooling, though, as we wanted the machines to be as silent as possible while handling the heat generated by our simulations.

The beauty of a well-assembled PC

Our research is fully browser-based. This allows us to connect to the research cluster from anywhere, desk or laptop, and still experience the full computing speed. The setup is based on Python 3.7 and JupyterLab, running on an Ubuntu 16.04 installation. The Jupyter notebooks have access to our internal SDK (modules that contain the most common building blocks of our strategies and trading models). The packages used most in research are NumPy, SciPy, Pandas, scikit-learn, TensorFlow, and Zapdos. Each researcher is assigned a dedicated machine as a head node but can access the resources of the other machines as well. The individual machines are linked together by a dedicated gigabit network.
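How work actually gets fanned out across a machine is internal to our SDK, but the Python standard library alone is enough to sketch the pattern. The parameter grid and backtest_variant below are illustrative stand-ins, assuming a simple per-variant computation.

    from concurrent.futures import ProcessPoolExecutor

    def backtest_variant(params):
        # Illustrative stand-in for a call into the SDK's building blocks.
        return params["lookback"] * params["threshold"]

    param_grid = [{"lookback": lb, "threshold": th}
                  for lb in (20, 60, 120) for th in (0.5, 1.0, 2.0)]

    # One worker per hardware thread (16 on an i9-9900K) lets a short
    # burst of notebook work saturate the whole CPU.
    with ProcessPoolExecutor(max_workers=16) as pool:
        results = list(pool.map(backtest_variant, param_grid))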

The compute cluster machines prominently on display

Being a hedge fund invested only in liquid futures, we are naturally afraid of any form of physical delivery. Would we end up sitting on a long position in useless metal? Our concerns vanished quickly when the first Jupyter notebook was started: these machines are fast. Results from computing cells in the notebook come back instantaneously. Or, as one researcher put it: "I can't press return fast enough to saturate the CPU!" Clearly, interactive research is now very fast. How about more complex parallel computations that require all CPUs simultaneously? It turns out the cluster is up to that task, too. Under normal research load, the computers are virtually silent. With all CPUs at 100% load, the temperature rises and the cooling system spins up noticeably. But the system remains stable, and computations that last a whole weekend are not an issue at all.

We initially planned to buy more machines once fully satisfied. However, to IT's disappointment, the four machines already exceeded our expectations and will be sufficient for the foreseeable future. Is a good compute cluster the secret to success? No. But it is one of the pillars supporting our research and operations, and getting those pillars right is the essence of our engineering approach to investing.

August 21, 2019

