Risk Management in trading systems

One of my first memories of learning market mechanics was a cover article in an issue of “Technical Analysis of Stocks and Commodities” that detailed some aspects of risk. I had never explored the subject, and was surprised by how much it entailed. It was a typical instance of “you don’t know what you don’t know.”

I saved that issue of the magazine for many years. Unfortunately it was tossed somewhere along the way. But while reading that article I began to think about how software could help the retail trader. If a spreadsheet could take in the necessary information, the user could see where adjustments may need to be made to match their risk profile.

Thinking back on such a naive idea brings on a bit of nostalgia. Those who know they need such a tool have already built it; those who do not will either flame out or learn.

I recently purchased several audiobooks in the Jack Schwager series. I’ve read many of them, but it is nice to have them in audio form and listen passively. Certain nuggets stand out. In his hedge fund series of interviews, he goes back in history to the origins of the true “hedged fund”. From that it becomes clear that a risk management strategy is at the core of generating alpha over the long haul. Studying additional material on those first forays into “group investing,” you see that the idea took a long time to catch on, but it is now a pillar of just about everything on Wall Street.

But how much of that trickles down into the typical trading systems I am asked to build? Unfortunately, very little. Most are concerned with concentrated “bets” on the short-term direction of a security. Getting the entry and exit right and following the rules is certainly a big part of the equation. But ignoring the “bigger picture” items such as risk management is often the cause of big losses.

So what can I, the lowly software guy, do? While I can sometimes draw their attention to the issue at the outset, often it comes down to reporting. After running a backtest, provide some statistical references to a broader index, or VaR, or what the market was doing while the simulation was running. Those are small numbers that open the door to bigger discussions on which markets best match a particular strategy.
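To make that concrete, here is a minimal sketch of one such number: a historical value-at-risk estimate computed from the daily returns of a backtest. The function and its inputs are my own illustration, not code from any client system.

    #include <algorithm>
    #include <cmath>
    #include <stdexcept>
    #include <vector>

    // Historical VaR: the one-day loss that the simulated returns exceeded
    // only (1 - confidence) of the time.
    // Returns a positive fraction (e.g. 0.023 means a 2.3% one-day loss).
    double historicalVaR(std::vector<double> dailyReturns, double confidence = 0.95)
    {
        if (dailyReturns.empty())
            throw std::invalid_argument("no returns supplied");
        std::sort(dailyReturns.begin(), dailyReturns.end());       // worst returns first
        size_t idx = static_cast<size_t>(
            std::floor((1.0 - confidence) * dailyReturns.size()));  // (1 - confidence) quantile
        idx = std::min(idx, dailyReturns.size() - 1);
        return -dailyReturns[idx];  // flip the sign so a loss reads as a positive VaR
    }

Put that next to the same figure computed for a broad index over the same dates, and you have the start of a conversation about whether the strategy’s risk matches the trader’s profile.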

This is a deep subject. Perhaps I should break it down into smaller sections by risk type to organize it better.

Please note: I am not pretending to be an expert here. I am simply writing down what I know for my own benefit. If you want to chime in, by all means, please do!

The CQG API and cqg_helper

Just as I have done for Interactive Brokers, I am also sometimes asked to build interfaces to CQG. While it is not as developed as my Interactive Brokers toolbox, I have decided to offer what I have to the open source community. This repository will get you started on the path to communicating with CQG via their WebAPI interface using C++. Enjoy.

And please feel free to add issues and pull requests. Be part of the open source community!

The Interactive Brokers API and ib_helper

I often build trading systems for users of Interactive Brokers. Rather than build the common utilities over and over, I have developed a small “toolbox” that contains what I often need. I have decided to make this toolbox available to the open source community (MIT licensed), in the hopes that others may decide to contribute.

This library has two main purposes:

  • Provide simple interfaces for the simple things. This lessens the need to implement EWrapper functions that do not apply, while still categorizing them for easy implementation.
  • Provide a testing framework. Google Test is used to build tests, and a “Mock” connector mimics IB TWS to allow for building unit and integration tests. With little effort, you could even build backtests of strategies.
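To give a feel for the second purpose, here is a rough sketch of what a test against a mock connection could look like. The types and method names below (MockIBConnection, AccountHandler, onAccountValue) are placeholders I invented for illustration; the actual ib_helper interfaces differ, so treat this as the shape of the idea rather than its API.

    #include <gtest/gtest.h>
    #include <string>

    // Hypothetical stand-ins for ib_helper types, for illustration only.
    class AccountHandler {
    public:
        double buyingPower = 0.0;
        void onAccountValue(const std::string& key, double value)
        {
            if (key == "BuyingPower")
                buyingPower = value;
        }
    };

    // A mock connection drives the handler the same way TWS would.
    class MockIBConnection {
    public:
        void sendAccountValue(AccountHandler& h, const std::string& key, double value)
        {
            h.onAccountValue(key, value);
        }
    };

    TEST(AccountHandlerTest, CapturesBuyingPower)
    {
        AccountHandler handler;
        MockIBConnection conn;
        conn.sendAccountValue(handler, "BuyingPower", 25000.0);
        EXPECT_DOUBLE_EQ(25000.0, handler.buyingPower);
    }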

Please check it out. Feel free to add issues and pull requests. Enjoy!

Attention Traders: the odds are against you…

… let me help.

Trading is hard, especially for the retail trader. To put it bluntly, large institutions will always have the advantage. While there are many mountains to overcome, I am attempting to help myself and others with some of the more technical hurdles that we retail traders confront.

One of the biggest challenges is speed of execution. Without some large changes in the industry, retail traders will never have the resources to connect directly to the data feeds and other systems necessary to complete a trade. While we can’t shave off the last few milliseconds, we can greatly reduce the tick-to-trade time by improving our front ends.

I have for years worked with Interactive Brokers as my broker. While all brokers have their warts, Interactive Brokers has provided me with relatively painless access to the markets. And while their Trader Workstation software is unwieldy, it is fairly complete. But partly due to all those features that were asked for by traders, the application is not as fast as it could be.

Part of the problem is that it is a Java application. Java is simply not the right choice for a data-heavy application that needs to be this performant. It is obvious they have worked hard tuning it. To be honest, I am surprised it is as fast as it is. But it is still too slow.

Rewriting Trader Workstation would be great. But who is going to do it? I would love to. Perhaps some decade I will. But that is simply too big of a project to bite off in one chunk. So instead, we will eat this elephant in smaller chunks.

I am completing a simple application I like to call “Sniper”. It follows me as I click around looking at different stocks and futures in Trader Workstation. When I want to enter an order, I can click around in Trader Workstation to build my order (yuck), or I can hit a hotkey while Sniper is active and have the order placed immediately.

Now, you could configure hotkeys in Trader Workstation. Let me know when you figure out the intricacies of that. Instead of fighting the myriad of hotkey options in Trader Workstation, simply install Sniper. Sniper connects via the Interactive Brokers API. It provides hotkeys to place limit orders that can immediately be transmitted to Interactive Brokers for execution.
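For the curious, the heart of such a hotkey handler is not much more than the standard Interactive Brokers C++ API calls below. This is a simplified sketch rather than Sniper’s actual code; among other things, recent API versions use a Decimal type for the order quantity, and a real implementation needs error handling and order-id management.

    #include <string>
    #include "Contract.h"
    #include "EClientSocket.h"
    #include "Order.h"

    // client must already be connected, and orderId must be a valid id
    // obtained from TWS via the nextValidId() callback.
    void placeLimitOrder(EClientSocket& client, long orderId,
                         const std::string& symbol, double limitPrice, double quantity)
    {
        Contract contract;
        contract.symbol   = symbol;
        contract.secType  = "STK";
        contract.exchange = "SMART";
        contract.currency = "USD";

        Order order;
        order.action        = "BUY";
        order.orderType     = "LMT";
        order.totalQuantity = quantity;   // a Decimal in recent API versions
        order.lmtPrice      = limitPrice;

        client.placeOrder(orderId, contract, order);
    }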

I am donating this small utility to entice you to talk to me about other software that will make your trades easier and faster.

Sniper will be available for Linux, MacOS and Windows. This post will be updated with more information as the product is released. The goal is to have version 1.0 released around April 15, 2023.

Almost bare metal Part II

In my previous post I mentioned I have been working on moving a routine that handles many hashing tasks at once to the GPU. The preliminary results are encouraging, although I cannot say they are an apples-to-apples comparison with the actual task at hand.

To get some simple numbers, I simplified the task. There is a stream of bytes that must be hashed using the SHA256 algorithm. The stream of bytes includes a part that is somewhat static and a part that changes. The part that changes could be one of about 13,000 combinations. The task is to hash each of those 13,000 candidate byte streams and check which (if any) produce a value below a certain threshold, similar to the process Bitcoin uses to publish blocks.

Because we are now dealing with moving blocks of memory across a bus, the actual process was adjusted to do this in more of a “batch” mode. The 13,000 byte streams are easily created on the CPU. Each byte stream is put in an array (also on the CPU).

Now for the fun part. The entire array is hashed one element at a time. The results are placed in another array. This is done once on the CPU and once on the GPU, with the timing results compared.
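To make the batch structure concrete, here is a stripped-down sketch of the GPU side in SYCL. The sha256_device function is only a stand-in for a real device-side SHA256 implementation, and the stream length is an arbitrary example; the point is the one-work-item-per-byte-stream layout.

    #include <sycl/sycl.hpp>
    #include <cstdint>
    #include <vector>

    constexpr size_t NUM_STREAMS = 13000; // combinations to test
    constexpr size_t STREAM_LEN  = 80;    // bytes per candidate stream (example size)
    constexpr size_t HASH_LEN    = 32;    // SHA256 output size

    // Placeholder: a real device-side SHA256 would go here.
    inline void sha256_device(const uint8_t* in, size_t len, uint8_t* out)
    {
        for (size_t i = 0; i < HASH_LEN; ++i)
            out[i] = in[i % len];         // stand-in only, NOT a real hash
    }

    void hash_batch(sycl::queue& q,
                    const std::vector<uint8_t>& streams, // NUM_STREAMS * STREAM_LEN bytes
                    std::vector<uint8_t>& hashes)        // NUM_STREAMS * HASH_LEN bytes
    {
        sycl::buffer<uint8_t, 1> inBuf(streams.data(), sycl::range<1>(streams.size()));
        sycl::buffer<uint8_t, 1> outBuf(hashes.data(), sycl::range<1>(hashes.size()));

        q.submit([&](sycl::handler& h) {
            auto in  = inBuf.get_access<sycl::access::mode::read>(h);
            auto out = outBuf.get_access<sycl::access::mode::write>(h);
            // One work-item per candidate byte stream.
            h.parallel_for(sycl::range<1>(NUM_STREAMS), [=](sycl::id<1> idx) {
                const size_t i = idx[0];
                sha256_device(&in[i * STREAM_LEN], STREAM_LEN, &out[i * HASH_LEN]);
            });
        });
        q.wait(); // results are copied back to `hashes` when the buffers go out of scope
    }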

The initial results were not good:

Building CPU IDs: 3345344ns total, average : 261ns.
Building GPU IDs: 142561855ns total, average : 11137ns.

But running it a second time yielded:

Building CPU IDs: 3297128ns total, average : 257ns.
Building GPU IDs: 795204ns total, average : 62ns.

And a third run yielded:

Building CPU IDs: 3304968ns total, average : 258ns.
Building GPU IDs: 788838ns total, average : 61ns.

I would guess that the first run had to set up some things on the GPU that cost time. Subsequent runs are much more efficient. To support that thesis, note that the time it took the CPU was fairly consistent across all three runs, and the GPU runs after the initial run were also very consistent.

Runs beyond the first 3 show consistent times on both CPU and GPU.

The synopsis is: bulk hashing on an NVIDIA GeForce 2060 (notebook version) using SYCL provides about a 4X improvement over the CPU. Moving to CUDA provides similar results, so the language does not seem to matter here.

More questions to be answered:

  • Comparing the hashes to a “difficulty” level must also be done. Would the GPU be more efficient here as well? Should it be done in the same process (memory efficient) or in a separate process (perhaps pipe efficient)?
  • What would happen if we ran this on an AMD GPU with similar specifications?
  • What resources were used on the GPU? Are we only using a fraction of memory/processing abilities, or are we almost maxed out?
  • At what point do the arrays that are passed to or retrieved from the card become too big to handle?

(Almost) bare metal

I love writing in C++. The reasons are performance (I am a speed junkie) and learning opportunities. If necessary, I can tune a routine to seek out the fastest way to achieve the result. Each time I do, I learn a lot. I have been learning constantly since I picked up the book on BASIC that I received with my Timex-Sinclair 1500.

But what if my computer is simply not fast enough to run the routine as quickly as I need? If the task can be broken into smaller, somewhat independent pieces, we can use threading. My machine has 8 cores and 16 hardware threads. My latest project included a process that took 20 seconds on 1 thread, and 6 seconds with 16 threads. And it wasn’t that hard to do.
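The pattern is simple enough to show in a few lines. This is a generic sketch (the crunch function is a stand-in for the real work), splitting independent items across a fixed number of std::threads:

    #include <cmath>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Stand-in for the real number crunching on one independent work item.
    double crunch(std::size_t i) { return std::sqrt(static_cast<double>(i)); }

    // Split the items across numThreads workers. Each worker handles every
    // numThreads-th item, so no two threads ever touch the same slot and no
    // locking is required.
    void run_parallel(std::vector<double>& results, unsigned numThreads)
    {
        std::vector<std::thread> workers;
        const std::size_t n = results.size();
        for (unsigned t = 0; t < numThreads; ++t) {
            workers.emplace_back([&results, n, t, numThreads] {
                for (std::size_t i = t; i < n; i += numThreads)
                    results[i] = crunch(i);
            });
        }
        for (auto& w : workers)
            w.join();
    }

    int main()
    {
        std::vector<double> results(1'000'000);
        unsigned hw = std::thread::hardware_concurrency();
        run_parallel(results, hw ? hw : 1);
    }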

But 6 seconds is not fast enough. And during those 6 seconds my CPUs are pegged at 100%. The routine is tuned to the best of my ability. What now? Well, the process is some number crunching. I have a video card that loves to crunch numbers. What will it take?

I have coded the routine in CUDA as well as SYCL and will have some results soon (Update: See Part II). I do not get to do this stuff often, so I am enjoying re-learning CUDA and this is my first round with SYCL. SYCL looks like a winner here, as I will be able to compare results between some NVIDIA and AMD cards with (hopefully) the same codebase.

But GPUs are not the only option for such tasks. There are ASICs available for my current number-crunching necessity. And as I walked down the ASIC road I was able to explore another avenue that I had only glossed over in some prior projects: FPGAs.

ASICs are purpose-built chips. They are designed to be fed a certain way, and to spit out the answer in a certain way. You can think of one as a new gadget with a “no user serviceable parts inside” sticker. You cannot get that chip to do anything beyond what it was meant to do.

FPGAs sit in the middle, between microprocessors and ASICs. In fact, you can use an FPGA as a way to prototype an ASIC. But here, we are not talking about making the chip do what we want by writing C or C++. We are talking about manipulating transistors and wires. It is digital circuit building without the breadboard and soldering iron (well, at least in some cases).

FPGAs have been around for a while. But until fairly recently they were simply too expensive for anything but the big budget projects. Think government, Wall Street, Google, and Amazon. Today, you can experiment with an FPGA for $40 or less.

As a software developer, I had to twist my brain to begin to reason about how to get these things to do what I want. I have done some microprocessor work. I have written software for embedded devices. I know what a transistor is and when to use a NAND. But FPGAs are requiring me to rewire my brain as I rewire that chip.

Where To Begin

I am just getting started. So here is what I have learned so far. The big hardware players in this arena are Xilinx (now owned by AMD), Altera (now owned by Intel) and Lattice. There are other players, but for what I am doing and where I am in my exploration, I want to examine these suppliers before getting into the nuanced specifics. I am at the “I don’t know what I don’t know” stage.

When digging through the interwebs, a number of development boards and chips keep popping up. So if you want to follow the tutorials, you will need to get your hands on some hardware. My first board will not be the latest and greatest (one I looked at on Mouser was just over US$17,000). It will be modest, and from a manufacturer that seems to be commonly used in the industry I am working with. Here are the two I am looking at:

The Digilent Basys 3

This board is often used in FPGA tutorials. It includes a good number of toys to play with as you build your knowledge: pushbuttons, switches, LEDs, a 7-segment display, etc., for around US$160.

The Digilent Arty A7/S7

This also appears in some tutorials. The S7 seems to be the low-cost version of the A7. Both come in two flavors, so four different chips at four different prices. The A7 includes an Ethernet port, whereas the S7 does not. The A7 with the lower-cost chip is around US$160.

Honorable Mentions

If you are looking to get into FPGA on the cheap, check out the Lattice Icestick. But don’t stop there. There are plenty of others with varying sizes of “maker” communities around them.

Debugging in VI

I switched back from Visual Studio Code to vi (I tire of switching from keyboard to mouse). As my memory is not very good, here is my cheat-sheet for using :Termdebug

Loading the plugin (once per session): :packadd termdebug

Starting a debug session: :Termdebug [executable_name]

Navigating the 3 windows: CTRL + w and then j or k

Set / remove a breakpoint: :Break / :Clear

Step / Next: s or n (typed in the gdb window)

Run / Continue: run or c (typed in the gdb window)

Pros and Cons of Centralization

One of the subjects that many in the crypto universe speak of is centralization. Most speak of it as something that must be destroyed. But it does provide services. Here are my thoughts:

Pros of Centralization

I have several bank and investment accounts. The majority of them are insured by either FDIC or SIPC. While the US government does not explicitly say they back these insurance institutions, these are (IMO) truly “too big to fail”.

That means that the cash I have deposited is “mostly” secure against the bank failing. The chance that the FDIC or SIPC itself fails is so small that I do not worry about it. Should they fail, the world will have bigger problems than money.

Another pro is security. We rely on the fact that banks know how to secure the deposited funds. They take on the risk of the funds I deposit. The bank will fail if the risks they take fail. So they put guards in place to prevent their own failure. They are basically in the business of protecting my money so they can make money with it.

You can say “that is not fair. They are charging a much higher percentage interest than what they are paying me.” True. But if you think you can do better, check out the real returns of some of the P2P lending platforms out there. I believe you will find that the due diligence of a risk-averse lender can easily be a full-time job.

To me, the biggest benefit is the reduced threat of robbery. If I kept all my money in my home, someone would eventually know about it. That opens me up to the physical threat of someone taking it. I realize someone can always attempt a robbery, regardless of whether they know I store money there. But should they learn that I keep all my money there, the risk of my being robbed greatly increases.

Also, contrary to the belief of many, the central bankers (a.k.a “The Fed” in the US) do perform a vital service. They have powerful tools at their disposal to manipulate the market. Without those tools, the US economy would have completely failed many times over. So, preventing crashes is a good thing, right? Well…

Cons of Centralization

I am not arguing that crypto is worthless just because the banks perform a service. I believe the technology is great. But there are trends in crypto that erode its advantages over banks, because crypto services, too, are becoming more centralized. More on that a little later.

In the secular world, the “golden rule” changes to “he who hath the gold makes the rules.” And for societies that run on some form of money (in other words, practically all societies), controlling the money supply puts power into the hands of a few.

Throughout history, centralization of power has always led to problems (the lack of centralization has also led to problems, but I digress). Having money is an easy, powerful, and (often) legal way to control others. Centralization is not always bad, but it easily leads to abuse of power.

So we go back to The Fed. When they push buttons and pull levers to attempt to steer the US economy, their effects ripple through the world economy. Those bankers are not concerned about any economy other than the US economy. The US government is (arguably) concerned with world stability, but not The Fed. Remember, the decisions of The Fed are (arguably) not controlled by politicians. So while they may serve a purpose for the US, they affect the world. And the majority of the world is not their concern.

Concluding thoughts

I have many more thoughts on the matter. But the above forms the crux of what is right and wrong with crypto IMO.

Providing the security without the centralization is a near impossible task. When you have something someone wants, you risk having it taken by force. If it is a government you worry about, a decentralized cryptocurrency may be the answer. Just remember that at that point, you are now responsible for all the services that banks provide.

Do you want to get involved? Great! Do you not know where to start? Be careful! Remember, you are now your own bank. You need to protect your own money. Start small, learn carefully. Things will continue to move fast. But that does not mean you are forced to take unnecessary risks.

Uses for Komodo’s komodostate file

The komodostate file contains “events” that happened on the network. The file provides a quick way for the daemon to get back to where it was after a restart.

Some of the record types are for features that are no longer used. In this document I hope to detail the different record types and which functions use this data.

The komodo_state object

The komodo_state struct holds much of the “current state” of the chain: things like height, notary checkpoints, etc. This object is built from the komodostate file, and then kept up to date by incoming blocks. Note that the komodo_state struct is not involved in writing the komodostate file; it simply holds the file’s contents in memory while the daemon is running.

Within the komodo_state struct are two collections that we are interested in. A vector of events, and a vector of notarized checkpoints.
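Boiled down, the part of the struct we care about looks something like the sketch below. This is a simplification for illustration; the field names come from this post, but the types and details in the actual komodod source differ.

    #include <cstdint>
    #include <memory>
    #include <vector>

    namespace komodo { class event; }  // base class for komodostate records (next section)

    // A notary checkpoint; the real struct also stores the notarized block hash,
    // the notarization txid, the MoM, and so on.
    struct notarized_checkpoint {
        int32_t notarized_height = 0;
        int32_t MoMdepth = 0;
    };

    // Simplified view of komodo_state, showing only the two collections
    // discussed in this post.
    struct komodo_state {
        int32_t SAVEDHEIGHT = 0;                             // last saved KMD height
        std::vector<std::shared_ptr<komodo::event>> events;  // replayed from komodostate
        std::vector<notarized_checkpoint> NPOINTS;           // notary checkpoints
    };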

komodo::event

komodo::event is the base class for all record types within the komodostate file.

event_rewind – if the height of the KMD chain goes backwards, an event_rewind object is added to the event collection. The following call walks backwards through the events, and when an event_kmdheight is found, it adjusts the komodo_state.SAVEDHEIGHT. That same routine also removes items from the events collection. So an event_rewind is added, and then immediately removed in the next function call (komodo_event_rewind).

event_notarized – when a notarization event happens on the asset chain of this daemon, this event is added to the collection. The subsequent method call does some checks to verify validity, and then updates the collection of notarization points called NPOINTS.

event_pubkeys – for chains that have KOMODO_EXTERNAL_NOTARIES turned on, this event modifies the active collection of valid notaries.

event_u – no longer used, unsure what it did.

event_kmdheight – Added to the collection when the height of the KMD chain advances or regresses. This eventually modifies the height-tracking values in the komodo_state struct.

event_opreturn – Added to the collection when a transaction with an OP_RETURN is added to the chain. NOTE: While certain things happen when this event is fired, these are not stored in the komodostate file. So no “state” is updated relating to opreturn events on daemon restart. The komodo_opreturn function shows these are used to control KV & PAX Issuance/Withdrawal functionality.

event_pricefeed – This was part of the PAX project, which is no longer in use.

NPOINTS

NPOINTS is another collection within komodo_state. These are notary checkpoints. This prevents chain reorgs beyond the previous notarization.

The collection is built from the komodostate file, and then maintained by incoming blocks.

Unlike most of the events mentioned earlier, this collection is searched by several pieces of functionality to find notarization data (i.e. which notarization “notarized” a given block).

komodostate corruption

An error in version 0.7.1 of Komodo caused new records to be written to the komodostate file incorrectly. Daemons that are restarted will have their internal events and NPOINTS collections populated only up to the point where the corruption starts. The impact is somewhat mitigated by the following:

New entries may be corrupted in the file, but are fine in memory. A long-running daemon will not notice the corruption until restart.

Most functionality does not need the events collection beyond the last few entries, which will be fine after the daemon runs for a time.

Data queried for within the NPOINTS collection will have a gap in history. As the daemon runs, the chances of needing data within that gap are reduced.

Reindexing Komodo (-reindex) will recreate the komodostate file based on the data within the blockchain. The fix for the corruption bug is in code review and will hopefully be released very soon.

Komodo Notarization OP_RETURN parsing

An issue recently popped up that required a review of notarization OP_RETURNs between chains. As I hadn’t gone through that code as yet, it was a good time to jump in.

NOTE: Consider this entry a DRAFT, as there are a few areas I am still researching.

Preamble

For notarizations, the OP_RETURN will be a 0-value output in vOut position 1. vOut position 0 will be the output that contains the value.

The scriptPubKey of vOut[1] will start with 6a, which is the OP_RETURN opcode. Immediately after will be the size: a single byte for values up to 75, OP_PUSHDATA1 followed by a 1-byte size for values over 75, or OP_PUSHDATA2 followed by 2 bytes of size for values that do not fit in 1 byte. And then the “real” data begins.
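In code, reading that length prefix looks roughly like the following. This is a sketch of the layout just described, not the actual komodo.cpp implementation:

    #include <cstddef>
    #include <cstdint>

    // Decode the OP_RETURN payload length starting at script[1]
    // (script[0] is the OP_RETURN opcode, 0x6a).
    // Sets `offset` to where the payload begins; returns 0 on malformed input.
    size_t opreturn_payload_length(const uint8_t* script, size_t scriptLen, size_t& offset)
    {
        if (scriptLen < 2 || script[0] != 0x6a)
            return 0;
        uint8_t b = script[1];
        if (b <= 75) {                      // direct push: the byte is the length
            offset = 2;
            return b;
        }
        if (b == 0x4c && scriptLen >= 3) {  // OP_PUSHDATA1: 1 length byte follows
            offset = 3;
            return script[2];
        }
        if (b == 0x4d && scriptLen >= 4) {  // OP_PUSHDATA2: 2 length bytes, little-endian
            offset = 4;
            return script[2] | (static_cast<size_t>(script[3]) << 8);
        }
        return 0;
    }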

Block / Transaction / height

The next 32 bytes are a block hash, followed by 4 bytes of block height. These should correspond to an existing block at a specific height on the chain that wishes to be notarized.

If the chain that wishes to be notarized is the chain that we are currently the daemon of, the next 32 bytes will be the transaction on the foreign “parent” chain that has the OP_RETURN verifying this notarization. <– Note to self: Verify this!

Next comes the coin name. This will be ASCII characters up to 65 bytes in length, terminated with a NULL (ASCII 0x00). Note that this is what determines whether the transaction hash is included. So we actually have to look ahead, attempt to get the coin name, and if it doesn’t match our current chain, look for the coin name 32 bytes earlier.

Sidebar: What are the chances that the same sequence of bytes coincidentally ends up in the wrong spot, giving us a “false positive” for our own chain? Answer: mathematically possible, but so unlikely that it can safely be ignored.

MoM and MoM depth

The next set of data is the MoM hash and its depth. This is 32 bytes followed by 4 bytes, and corresponds to the data in the chain that wishes to be notarized.

Note that the depth is the 4-byte value masked with “& 0xffff”. I am not sure why just yet.

CCid Data

Finally, we get to any CCid data that the asset chain that wishes to be notarized includes. The KMD chain will never include this when notarizing its chain to the foreign “parent” chain (currently LTC).

CCid data contains a starting index (4 bytes), ending index (4 bytes), an MoMoM hash (32 bytes), depth (4 bytes), and the quantity of ccdata pairs to follow (4 bytes).

Afterward come the ccdata pairs, each consisting of a height (4 bytes) and an offset (4 bytes).
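Putting the pieces together, a simplified parser for the payload (everything after the length prefix) might look like the sketch below. This is my reading of the format as described in this post, for illustration only; komodo_voutupdate in komodo.cpp remains the authoritative version, and the txid field is only present when the coin name matches the current chain.

    #include <cstdint>
    #include <cstring>
    #include <string>

    struct NotarizationData {
        uint8_t  blockHash[32];   // notarized block hash
        int32_t  height = 0;      // notarized block height
        uint8_t  txid[32];        // txid on the "parent" chain (only for our own chain)
        bool     hasTxid = false;
        std::string coin;         // chain being notarized, NULL-terminated in the payload
        uint8_t  MoM[32];
        int32_t  MoMdepth = 0;    // stored value is masked with 0xffff
    };

    // Parse the fields described above. Returns false on malformed data.
    // `ourChain` is the symbol of the chain this daemon runs.
    bool parse_notarization(const uint8_t* data, size_t len,
                            const std::string& ourChain, NotarizationData& out)
    {
        size_t pos = 0;
        auto need = [&](size_t n) { return pos + n <= len; };

        if (!need(32 + 4)) return false;
        std::memcpy(out.blockHash, data + pos, 32); pos += 32;
        std::memcpy(&out.height, data + pos, 4);    pos += 4;   // little-endian height

        // The coin name may start here, or 32 bytes later if a txid is present.
        auto read_name = [&](size_t at) -> std::string {
            std::string s;
            for (size_t i = at; i < len && i < at + 65 && data[i] != 0; ++i)
                s.push_back(static_cast<char>(data[i]));
            return s;
        };
        if (read_name(pos + 32) == ourChain && need(32)) {
            std::memcpy(out.txid, data + pos, 32); pos += 32;    // parent-chain txid
            out.hasTxid = true;
        }
        out.coin = read_name(pos);
        pos += out.coin.size() + 1;                              // skip NULL terminator

        if (need(32 + 4)) {                                      // MoM hash and depth
            std::memcpy(out.MoM, data + pos, 32); pos += 32;
            std::memcpy(&out.MoMdepth, data + pos, 4); pos += 4;
            out.MoMdepth &= 0xffff;
        }
        // CCid data (start/end index, MoMoM, depth, ccdata pairs) would follow here.
        return true;
    }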

Additional Processing

If all of that works, and we are processing our own notarization, the komodo_state struct and the komodostate file are updated with the latest notarization data.

Resources:

To follow along, look for the komodo_voutupdate function within komodo.cpp.