I began a personal project a long time ago to provide connectivity to various cryptocurrency exchanges. I never got around to finishing it, but it stayed in my mind, and while working on other projects I gathered knowledge of better ways to implement different pieces.
Recently I began anew. The market has changed a bit, but the basics still hold true: there is a need for fast connectivity to multiple exchanges. The open-source “one size fits all” solutions I researched did not work for me; they do not focus on speed and efficiency.
Centralized exchanges are still the norm, although decentralized exchanges (e.g. Uniswap) are making headway. While I believe decentralized exchanges will be tough to beat in the long term, currently their centralized cousins provide the “on ramps” for the majority of casual users. Decentralized exchanges also suffer from a dependence on their underlying host (Ethereum, in the case of Uniswap). These are not impossible hurdles to overcome, but for now centralized exchanges play a large role in the industry.
Regardless of the centralized/decentralized issue, there are multiple exchanges. Those with connectivity to many of them can seek out the best deals and take advantage of arbitrage opportunities. But this is a crowded field: getting your order in before someone else does becomes more important as bigger players squeeze the once-juicy profits of the early adopters.
This is the problem the Crypto Currency Exchange Connector is attempting to solve: a library that knows how to keep connections to a variety of exchanges and provides a platform for developing latency-sensitive trading algorithms.
The library is being written in C++. It will be modular, to support features such as persistence, risk management systems, and the FIX protocol. But the bare-bones system will get information from the exchange and present it to the strategy as quickly as possible, and it will take orders generated by the strategy and pass them to the exchange as quickly as possible. A good amount of effort has been put into making those two paths performant.
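To make the two hot paths concrete, here is a rough sketch of what a tick-in/orders-out interface could look like. Every name below (`Tick`, `Order`, `Strategy`, `on_tick`) is my own stand-in for illustration, not the library's actual API:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical market-data event delivered to a strategy.
struct Tick {
    std::string symbol;
    double bid = 0.0;
    double ask = 0.0;
    std::uint64_t exchange_ts_ns = 0;  // exchange timestamp, nanoseconds
};

// Hypothetical order the strategy hands back to the connector.
struct Order {
    std::string symbol;
    bool is_buy = false;
    double price = 0.0;
    double qty = 0.0;
};

// The connector's job in a nutshell: deliver each tick to the strategy
// as fast as possible, then forward any resulting orders to the exchange.
class Strategy {
public:
    // Called on the hot path for every book update.
    std::vector<Order> on_tick(const Tick& tick) {
        std::vector<Order> orders;
        // Toy logic: bid inside the spread when it is wide enough.
        if (tick.ask - tick.bid > 0.5) {
            orders.push_back({tick.symbol, true, tick.bid + 0.01, 1.0});
        }
        return orders;
    }
};
```

The point of the sketch is the shape of the loop, not the toy logic: data in one direction, orders in the other, with nothing in between.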
Giving strategies the information needed to make quick decisions was the focus. Freeing strategies from the need for “boilerplate” code was a secondary goal that I feel I accomplished.
Compiling in only the pieces necessary was another focus. I debated for a good while about making each exchange a network service you could subscribe to. That makes sense for a number of reasons, but it adds latency that I did not want to pay for. And having everything compiled into one binary does not mean it is impossible to break it out later: I wrote it with the idea that off-loading exchanges to other processes or servers would be possible, but having it all-in-one is the default.
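One way "compiling in only the pieces necessary" can be expressed in C++ is compile-time composition over templates, so that unused exchanges never make it into the binary. This is a sketch under my own assumptions, with invented handler names, not the library's actual design:

```cpp
#include <string>
#include <tuple>

// Two stand-in exchange handlers; real ones would own sockets, parsers,
// order gateways, etc. These names are illustrative.
struct CoinbaseFeed {
    std::string name() const { return "coinbase"; }
};
struct KrakenFeed {
    std::string name() const { return "kraken"; }
};

// Only the exchanges listed as template parameters get compiled in:
// no registry, no virtual dispatch, no dynamic loading. Breaking an
// exchange out later just means instantiating a Connector with a
// different parameter list in a different binary.
template <typename... Exchanges>
class Connector {
public:
    // Visit every compiled-in exchange; the calls resolve statically.
    template <typename Fn>
    void for_each_exchange(Fn&& fn) {
        std::apply([&](auto&... ex) { (fn(ex), ...); }, exchanges_);
    }

private:
    std::tuple<Exchanges...> exchanges_;
};
```

A binary built as `Connector<CoinbaseFeed, KrakenFeed>` carries exactly those two feeds and nothing else, which is the all-in-one default described above.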
Persistence is another issue. There is a ton of data pouring out of the exchanges, and persisting it is possible. But getting it into a format suitable for research often requires dumping, massaging, and reformatting the data before sucking it into your backtesting tool. To alleviate some of those steps, I chose TimescaleDB. It is based on Postgres and seems to work very well, and dividing tick data into chunks of time can be done at export time. I have yet to connect it to R or anything like that, but I am excited to try. That will come later.
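The "chunks of time" idea is the same one TimescaleDB exposes server-side through its `time_bucket()` function. A client-side sketch of that bucketing, with my own illustrative field names rather than the project's actual schema, might look like:

```cpp
#include <cstdint>
#include <map>
#include <vector>

// A stripped-down tick record; field names are illustrative.
struct TickRow {
    std::uint64_t ts_ns;  // exchange timestamp, nanoseconds since epoch
    double price = 0.0;
};

// Group ticks into fixed-width time buckets for export. Each tick is
// keyed by the start of its bucket (timestamp rounded down to a
// multiple of width_ns), e.g. width_ns = 60'000'000'000 for 1-minute
// bars. TimescaleDB's time_bucket() applies the same rounding at
// query time.
std::map<std::uint64_t, std::vector<TickRow>>
bucket_ticks(const std::vector<TickRow>& ticks, std::uint64_t width_ns) {
    std::map<std::uint64_t, std::vector<TickRow>> buckets;
    for (const TickRow& t : ticks) {
        buckets[t.ts_ns - (t.ts_ns % width_ns)].push_back(t);
    }
    return buckets;
}
```

Doing this in the database rather than in code is most of the appeal: the exported chunks arrive already shaped for a backtester.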
I truly do not want this project to be set aside yet again. I have a strong desire to get this across the finish line. And it is very close to functional. There are only a few pieces left to write before my first strategy can go live.
I will not be making this project open source. It is too valuable to me, and gives me an edge over my competitors. At best I will take the knowledge I’ve gained and apply it to other systems that I am asked to build.