Komodo Hardfork History

This post is mainly for me to keep track of which hardforks did what.

The main hardforks in Komodo are for the election of notaries. But there are other purposes. I will add to the list as I learn more. Eventually these should be documented within the code (comments near the declaration of a #define or something).

Note that the KMD chain hardforks are normally based on chain height. Asset chains are normally based on time. Hence season hardforks have both.

  • nStakedDecemberHardforkTimestamp – December 2019
    • Modifies block header to include segid if it is a staked chain (chain.h)
    • Many areas use komodo_newStakerActive(), which uses this hardfork
  • nDecemberHardforkHeight – December 2019
    • Many areas use komodo_hardfork_active() which uses this hardfork and the above
    • Disable the nExtraNonce in the miner (miner.cpp)
    • Add merkle root check in CheckBlock() for notaries (main.cpp)
  • nS4Timestamp / nS4HardforkHeight – 2020-06-14 Season 4
    • Only for notary list
  • nS5Timestamp – 2021-06-14 Season 5
    • ExtractDestination fix (see komodo_is_vSolutionsFixActive() in komodo_utils.cpp)
    • Notary list updated
  • nS5HardforkHeight – 2021-06-14 Season 5
    • Add merkle root check in CheckBlock() for everyone (main.cpp)
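The height-vs-timestamp pattern described above can be sketched as follows. This is an illustrative snippet, not the actual Komodo source; the constant values and names are placeholders, not the real season parameters.

```cpp
#include <cstdint>

// Hypothetical season hardfork parameters (placeholder values).
const int32_t kSeasonHardforkHeight = 2437300;        // assumed KMD block height
const uint32_t kSeasonHardforkTimestamp = 1623614400; // assumed unix timestamp

// Returns true once the hardfork is active for the given chain type:
// the KMD chain activates by block height, asset chains by block time.
bool season_hardfork_active(bool is_kmd_chain, int32_t height, uint32_t block_time)
{
    if (is_kmd_chain)
        return height >= kSeasonHardforkHeight;        // KMD: height-based
    return block_time >= kSeasonHardforkTimestamp;     // asset chain: time-based
}
```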

Many areas have hardfork changes without much detail. Here are some heights found by searching the code base for “height >”:

  • 792000 does some finagling with notaries on asset chains in komodo_checkPOW (komodo_bitcoind.cpp). See also CheckProofOfWork() in pow.cpp and komodo_is_special() in komodo_bitcoind.cpp.
  • 186233 komodo_eligiblenotary() (komodo_bitcoind.cpp)
  • 82000 komodo_is_special() (komodo_bitcoind.cpp)
  • 792000 komodo_is_special() and komodo_checkPOW()
  • 807000 komodo_is_special()
  • 34000 komodo_is_special()
  • limit is set to different values based on being under 79693 or 82000 in komodo_is_special()
  • 225000 komodo_is_special()
  • 246748 komodo_validate_interest() starts to work (komodo_bitcoind.cpp)
  • 225000 komodo_commission() (komodo_bitcoind.cpp)
  • 10 komodo_adaptivepow_target() (komodo_bitcoind.cpp)
  • 100 (KOMODO_EARLYTXID_HEIGHT) komodo_commission() (komodo_bitcoind.cpp)
  • 2 PoS check in komodo_checkPOW() and komodo_check_deposit()
  • 100 komodo_checkPOW() (komodo_bitcoind.cpp)
  • 236000 komodo_gateway_deposits() (komodo_gateway.cpp)
  • 1 komodo_check_deposit() (komodo_gateway.cpp) (a few places)
  • 800000 komodo_check_deposit() (komodo_gateway.cpp)
  • 814000 (KOMODO_NOTARIES_HEIGHT1) fee stealing check in komodo_check_deposit() as well as another check a little below.
  • 1000000 komodo_check_deposit() (2 places in that method)
  • 195000 komodo_opreturn() (komodo_gateway.cpp)
  • 225000 komodo_opreturn() (komodo_gateway.cpp)
  • 238000 komodo_opreturn()
  • 214700 komodo_opreturn()

I need to continue searching for “height >” in komodo_interest.cpp, komodo_kv.cpp, komodo_notary.cpp, komodo_nSPV_superlite.h, komodo_nSPV_wallet.h, komodo_pax.cpp, komodo.cpp, main.cpp, metrics.cpp, miner.cpp, net.cpp, pow.cpp, rogue_rpc.cpp, and cc/sudoku.cpp, as well as do more searches like “height <” and “height =” to catch more.

Komodo and Notaries in Testnet

I recently deployed an unofficial testnet for Komodo. This will allow me to perform system tests of notary functionality without affecting the true Komodo chain. The idea is to change as little code as possible, test the notary functionality from start to finish, and gain knowledge of the code base and intricacies of notarizations within Komodo.

The plan is to set up a notary node, wire it to Litecoin’s test chain, and do actual notarizations that can be verified on both chains. The first step will focus on the Komodo-Litecoin interaction. Later I will look at how asset chains use the Komodo chain to notarize their chain.

If you wish to follow along, the majority of the changes are in this PR.

A write-up of some of the technical details of becoming a notary can be found here.

Details to test / learn:

  • Notary pay
  • Difficulty reduction
  • Irreversibility
  • Checks and balances (how does a notary know which fork to notarize?)

Unit tests will reside in my jmj_testutils_notary branch for now.

NodeContext in Bitcoin Core

<disclaimer> These notes are my understanding by looking at code. I may be wrong. Do not take my word for it, verify it yourself. Much of what is below is my opinion. As sometimes is the case, following good practices the wrong way can make a project worse. That is not what I’m talking about here.</disclaimer>

Recent versions (0.20ish) of Bitcoin Core attempt to reorganize logic into objects with somewhat related functionality. In my opinion, this is one of the best ways to reduce complexity. External components communicate with each object through a well-defined interface. This leads to decoupling, which makes unintended side effects less likely.

I am focusing on the NodeContext object, as I believe this would be a big benefit for the Komodo core code. I am using this space for some notes on how Bitcoin Core broke up the object model. I hope this post will help remove some notes scribbled on paper on my desk. And who knows, it may be useful for someone else.

Breakdown of NodeContext

The NodeContext object acts as a container for many of the components needed by the bitcoin daemon. The majority of them are held as std::unique_ptr, which is a good choice, as copies must be explicit. Below are the components within NodeContext:

  • addrman – Address Manager, keeps track of peer addresses
  • connman – Connection Manager – handles connections, banning, whitelists
  • mempool – Transaction mempool – transactions that are not yet part of a block
  • fee_estimator – Estimates transaction fees based on recent block and mempool activity
  • peerman – Peer manager, includes a task scheduler and processes messages
  • chainman – Chain Manager, manages chain state. References ibd, snapshot, and active (which is either the ibd or the snapshot)
  • banman – Ban manager
  • args – Configuration arguments
  • chain – The actual chain. This handles blocks, and has a reference to the mempool.
  • chain_clients – clients (e.g. a wallet) that need notification when the state of the chain changes.
  • wallet_client – A special client that can create wallet addresses
  • scheduler – handles tasks that should be delayed
  • rpc_interruption_point – A callable invoked during long-running RPC calls so they can abort cleanly while the daemon is in the process of shutting down.
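A heavily simplified sketch of the shape of NodeContext is below (see node/context.h in Bitcoin Core for the real definition; the member types here are empty stand-ins, not the real classes):

```cpp
#include <functional>
#include <memory>

// Stand-in component types; the real ones live throughout the codebase.
struct AddrMan {};
struct CConnman {};
struct CTxMemPool {};
struct ChainstateManager {};

// Container of daemon components, owned via unique_ptr so copies are explicit.
struct NodeContext {
    std::unique_ptr<AddrMan> addrman;             // peer addresses
    std::unique_ptr<CConnman> connman;            // connections, bans, whitelists
    std::unique_ptr<CTxMemPool> mempool;          // unconfirmed transactions
    std::unique_ptr<ChainstateManager> chainman;  // chain state
    std::function<void()> rpc_interruption_point; // invoked during shutdown
};

// Components take the context by reference instead of reaching for globals.
bool has_mempool(const NodeContext& node) { return node.mempool != nullptr; }
```

Passing the context explicitly is what makes the decoupling visible: a function's dependencies are stated in its signature rather than hidden in global state.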

One of the largest refactoring challenges in Komodo is the large number of global variables. Pushing them into their respective objects (especially the chainman and args objects) will help get rid of many of those globals. This will help compartmentalize code and make unit tests easier.

Komodo Core and Bitcoin v0.20.0 refactoring

Note: The following are my thoughts as a developer of Komodo Core. This is not a guarantee the work will be done in this manner, or even be done at all. This is me “typing out loud”.

The Komodo Core code has diverged quite a bit from the Bitcoin codebase. Of course, Komodo is not a clone. There is quite a bit of functionality added to Komodo which requires the code to be different.

However, there are areas where Komodo could benefit from merging in the changes that Bitcoin has since made to their code base. Some of the biggest benefits are:

  • Modularization – Functionality within the code base is now more modular. Interfaces are better defined, and some of the ties between components have been eliminated.
  • Reduction in global state – Global state makes certain development tasks difficult. Modularization, testing, and general maintainability are increased when state is pushed down into the components that use them instead of exposed for application-wide modification.
  • Testability – When large processes are broken into smaller functions, testing individual circumstances (e.g. edge cases) becomes less cumbersome, and sometimes trivial.
  • Maintainability – With improvements in the points above, modifications to the code base are often more limited in scope and easier to test. This improves developer productivity and code base stability.
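The global-state point can be illustrated with a before/after sketch. The names below are hypothetical, not from the Komodo source: in the "before" style, free functions mutate a file-scope global, so tests interfere with each other; in the "after" style, the state lives in a small class and each test constructs its own instance.

```cpp
#include <cstdint>

namespace before {
    // Global state: every caller and every test shares this one variable.
    int32_t g_chain_height = 0;
    void set_height(int32_t h) { g_chain_height = h; }
}

namespace after {
    // State pushed down into the component that uses it.
    class ChainState {
    public:
        void set_height(int32_t h) { m_height = h; }
        int32_t height() const { return m_height; }
    private:
        int32_t m_height = 0; // owned by the object, not the process
    };
}
```

With the second form, two ChainState instances are fully independent, which is exactly what makes edge-case unit tests cheap to write.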

Plan of Attack

“Upgrading” Komodo to implement the changes to the Bitcoin code base sounds great. But a simple git merge will not work. The extent of the changes is too great.

My idea is to merge in these changes in smaller chunks. Looking at the NodeContext object (a basic building block of a Bitcoin application), we can divide functionality into 3 large pieces.

  • P2P – The address manager, connection manager, ban manager, and peer manager
  • Chain – The chain state, persistence, and mempool
  • Glue – Smaller components that wire things together. Examples are the fee estimator, task scheduler, config file and command line argument processing, and client (e.g. wallet) connectivity and functionality.

Building a NodeContext object will be the first step. Each piece will have access to an object of this type. This will provide the state of the system, as well as be where components update state when necessary.

The “glue” components often require resources from both “p2p” and “chain” components. Hence they should probably be upgraded last.

The task of upgrading the “chain” piece is probably smaller in scope than upgrading “p2p”. I will attempt to attack that first.

The “p2p” pieces do not seem to be too difficult. Most of the work seems to be in wrapping much of the functionality of main.cpp and bitcoind.cpp into appropriate classes. The communication between components is now better defined behind interfaces. The pimpl idiom is also in use in a few places.

Note: Bitcoin 0.22 requires C++17. For more information, click here.

Branching Strategy for Komodo Core

A bit of background…

The current branching strategy used in Komodo Core is somewhat informal. The goal is to formalize the procedures so that everyone agrees on them, and those coming after us know what to follow.

This is a living document. If the procedures below do not serve their intended purpose, they (or portions of them) should be replaced.

Why this and not that?

There are a number of branching strategies available. This Komodo Core branching strategy is heavily based on Git Flow. It was chosen because that model matches the style of the code base (long-lived branches, multiple versions deployed and supported, etc.).

How it works

There are a number of long-lived branches within the Komodo Core repository. The primary ones are:

  • master – The main production branch. This should remain stable and is where code for official releases is built.
  • dev – The development branch. This is where work is done between releases. Non-hotfix bug fixes and new features should branch off this branch.
  • test – The branch for the testnet. As the codebase stabilizes before a release, the code is merged from dev to test.
  • hotfix – After a release, a hotfix branch may need to be created to fix a critical bug. Once merged into test and master, it should also be merged into dev. These branches should not be long-lived.


Each release for testnet and production should be tagged. The versioning strategy is not currently part of this document.

Reviewing / Merging

Before a feature or fix is merged into a parent branch, it must be reviewed. Anyone can review, but approval from two repository maintainers is required.

Once approved, it is best (if possible) that the author of the feature or fix merges their branch into the parent branch. Once the merge is completed locally and all unit and CI tests pass, the code is pushed to the parent branch.

Note: It is best to not use the merge feature of the GitHub web interface. Perform the merge locally and push to the server.

Crypto Currency Exchange Connector v2

I began a personal project a long time ago to provide connectivity to various cryptocurrency exchanges. I never got around to finishing it. It did, however, stay in my mind. And while working on other projects, I gathered knowledge of better ways to implement different pieces.

Recently I began anew. The market has changed a bit, but the basics still hold true. There is a need for fast connectivity to multiple exchanges. Open-source “one size fits all” solutions that I researched did not work for me. They did not focus on speed and efficiency.

The Problems

Centralized exchanges are still the norm, although decentralized exchanges (e.g. Uniswap) are making headway. While I believe decentralized exchanges will be tough to beat in the long term, currently their centralized cousins provide the “on ramps” for the majority of casual users. Decentralized exchanges also suffer from a dependence on their underlying host (Ethereum, in the case of Uniswap). These are not impossible hurdles to overcome, but for now centralized exchanges play a large role in the industry.

Regardless of the centralized/decentralized issue, there are multiple exchanges. Those with connectivity to many of them can seek out the best deals, as well as take advantage of arbitrage opportunities. But this is a crowded field. Getting your order in before someone else becomes more important as bigger players squeeze the once juicy profits of the early adopters.

The Solution

This is the problem the Crypto Currency Exchange Connector is attempting to solve: a library that maintains connections to a variety of exchanges and provides a platform for developing latency-sensitive trading algorithms.

The library is being written in C++. It will be modular to support features such as persistence, risk management systems, and the FIX protocols. But the bare-bones system will get the information from the exchange and present it to the strategy as quickly as possible. It will also take orders generated by the strategy and pass them to the exchange as quickly as possible. A good amount of effort has been put forth to make those two things performant.

Giving strategies the information needed to make quick decisions was the primary focus. Freeing strategies from boilerplate code was a secondary goal that I feel I accomplished.
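A sketch of the kind of strategy interface described above is below. To be clear, these names are hypothetical; none of them come from the actual (closed-source) connector. The idea is that the feed handler invokes the strategy directly on every update, so the strategy author writes only decision logic, not connectivity code.

```cpp
#include <string>

// Minimal market data snapshot delivered to a strategy.
struct TopOfBook {
    double bid = 0.0;
    double ask = 0.0;
};

// Interface a strategy implements; the exchange feed handler calls it.
class Strategy {
public:
    virtual ~Strategy() = default;
    // Invoked by the feed handler on every book update.
    virtual void on_quote(const std::string& symbol, const TopOfBook& book) = 0;
};

// Example strategy: track the tightest spread seen so far.
class SpreadWatcher : public Strategy {
public:
    void on_quote(const std::string&, const TopOfBook& book) override {
        double spread = book.ask - book.bid;
        if (spread < m_best_spread) m_best_spread = spread;
    }
    double best_spread() const { return m_best_spread; }
private:
    double m_best_spread = 1e9;
};
```

A callback interface like this is a common design for latency-sensitive feeds: no queues or polling between the socket and the strategy, just a virtual call.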

Compiling in only the pieces necessary was another focus. I debated for a good bit about making each exchange a network service you can subscribe to. That makes sense for a number of reasons, but it adds latency that I did not want to pay for. And having everything compiled into one binary does not mean it is impossible to break it out later. I wrote it with the idea that off-loading exchanges to other processes or servers would be possible, but having it all-in-one is the default.

Persistence is another issue. There is a ton of data pouring out of the exchanges. Persisting it is possible. But having it in a format for research often requires dumps, massaging, reformatting, and sucking it into your backtesting tool. To alleviate some of those steps, I chose TimescaleDB. It is based on Postgres, and seems to work very well. Dividing tick data into chunks of time can be done at export time. I have yet to connect it to R or anything like that, but I am excited to try. That will come later.

Project Status

I truly do not want this project to be set aside yet again. I have a strong desire to get this across the finish line. And it is very close to functional. There are only a few pieces left to write before my first strategy can go live.

I will not be making this project open source. It is too valuable to me, and gives me an edge over my competitors. At best I will take the knowledge I’ve gained and apply it to other systems that I am asked to build.

Arduino is great, but…

Arduino and the Arduino IDE are great products. They serve as an on-ramp (and a very good one) for many makers. You can do quite a bit of tinkering, and even build commercial products without leaving the Arduino ecosystem. I give the many developers, volunteers, makers, etc. big kudos for their work.

It is just not for me. I’ve never been one that likes having development steps done for me. I need a lower level. I need to see the gears (rusty or greasy as they may be) turn. So an IDE that does everything for you is great. But at times it gets in the way. And that is when I start to look under the hood.

Let me be clear. I will probably never design my own chip. I am also not one to dig too far into the minutiae of manufacturers’ data sheets. I like a certain level of abstraction. But when something is not working as I expect, my instinct is to check my setup. And when the setup is all done for me, I quickly start grasping at straws, and often waste time.

A recent experience solidified my desire to leave the Arduino IDE behind and go it alone. Doing so was an investment of time, but it added significantly to my repertoire of skills. And I believe it is a worthwhile investment for many developers.

In my case, I was not working on an Arduino. I was working with a chip and associated hardware that had been twisted to work with the Arduino IDE. That was a great way to get started. Again, a big thank you to all that worked on making that work. But when things get “twisted” to make it work, often things get lost, dropped, or forgotten.

Fortunately, in my case, the manufacturer of the microcontroller has a well-documented SDK and gcc toolchain. And while I have yet to remove all Arduino pieces of my code (and may never do so), I can say I am now much more confident in what will happen when I tweak my linker script. I know what goes where and when it will be called. When a reboot is suddenly triggered, I am usually certain as to what area of the code is in need of attention.

If you know how to recover from accidentally “bricking” your device, I believe you will love stepping outside of the Arduino IDE and into the world of tool chains, linker scripts, master boot records, and boot loaders. Yes, it is frustrating at first, but the feeling of success at the end often makes it worth it.

That is my $0.02. As always, feel free to chime in with your opinion.

IoT and Smart Watches

There are plenty of avenues to enjoy some good old-fashioned coding. Lately, I’ve been doing some embedded work, and found a reason to explore smart watches. It seems quite a community is building around watches and fitness bands.

I purchased an inexpensive watch model that seems to have some grass-roots support around it. The P8. It was so inexpensive, I took a chance on its successor, the P9. They arrived today. The brain-dump and camera-dump will be built up here.

IoT – The marriage of software and hardware

I have always enjoyed electronics as a spectator. It continues to fascinate me. But the maturing trend of IoT captures my imagination because it blends electronics and software.

My career has mainly revolved around the major platforms: Windows, macOS, Linux. I wrote large applications that ran on large servers or powerful desktops. Performance was a concern, but the cost of development often demanded that we worry about hardware constraints only when something didn’t work.

Smartphones and other mobile devices drove developers to once again consider the hardware. Sure, there are development tools that hide some of the complexities of particular hardware. But when you get down to a board with a modest microprocessor, limited RAM, and little storage that must run on a watch battery, the idea of adding a tool to hide complexities often means the tool is too big to fit on the device, let alone your code.

Robots have been used as a STEM tool in schools for a while now. There are people in the workplace now who have a much more solid base in electronics than I do. Imagine, kids: my school had one computer in the library that had to be shared among the entire student body. Of course, I was one of the few who knew how to turn it on. But I digress…

The idea that the latest generation (and even a few before this one) are being exposed to IoT makes me happy. Learning such things while your mind is open to explore (and you have the time to do it) means more capable people with imaginations beyond the basics.

But that doesn’t mean that people from my generation can’t contribute. Just as the generation before us, we can certainly understand that we are where we are because of the shoulders we stood on. The question I ask myself is “How can I help?”

I’m no writer, and I’m no teacher. But I do have a good amount of experience, much of which only exists as anecdotes in my head. So it is time I do more thinking, learning, and writing.

My hope is that by writing more, I will explore more. I am looking toward IoT to expand my electronics knowledge, increase my coding knowledge, and learn how to best explain concepts.

These are lofty goals, but I’m hoping for the best.

Side project: Marine Weather Radar

I am researching open-source projects for marine weather radar.

Marine weather radar hardware and its protocols are proprietary. I understand why, and I’m not against it. But I would like to have my data in one place if I can. To do that, I need integration. That will affect my choice when purchasing hardware.

To research:

Hardware manufacturers of marine radar for pleasure craft. Do they provide an API? What equipment is necessary?

Open-source projects used for integration of systems in the marine environment.

What I’ve found:

OpenCPN seems to be the open-source software of choice for chart plotter navigation. It has AIS (traffic) integrated, among other features. I have also seen integration with older radomes as part of the radarpi project.

The industry standard for networking devices together is NMEA 2000, a protocol far too slow for radar data, but well suited to things like wind speed, GPS position, depth, etc.

Signal K is an open source project for integration of NMEA2000 stuff into the PC. From what I’ve seen so far, they’ve put a lot of work into it.

The Ultimate:

A PC on the boat’s WiFi network, connecting to a server (perhaps running on a Raspberry Pi) that talks to the boat’s systems.

Related links:

GnuRadar: Need to explore

Cruiser’s Forum: Post about someone wanting to do the same, but dated.

SI-TEX MDS 8R was (is?) a radome that plugged into a PC’s ethernet port. Interesting, though I am not sure whether it is still in production.

After more research, it seems radar is going the ethernet route. More of them are going headless, and I am hopeful that standardization is coming soon.

Update 16-September-2020

radar_pi has done a lot of work on this already. It seems the tight grip that manufacturers have on their hardware limits (but does not eliminate) the usefulness of writing such code.

Manufacturers do sell the antenna separately from their head units. Whether replacing their head unit with a generic one would cause things like voided warranties has yet to be seen.

To truly reverse engineer, build, and test such software would require the actual hardware. This post talks about someone working on such a thing with Furuno hardware.