“Drift correction” and the ASERT DAA

This is just a note with my personal takes on some topics that have come up in the Bitcoin Cash community’s recent difficulty adjustment algorithm (DAA) debate, especially around Bitcoin ABC’s Grasberg DAA, its “drift correction” feature, and how drift correction relates to Mark Lundeberg’s ASERT DAA, championed by Jonathan Toomim and others (like zawy and me):

  1. What is drift correction?
  2. Why would we want it?
  3. Does ASERT do it?
  4. Does ASERT reference a recent block, or the genesis block?
  5. Should the DAA target historical avg block time, or current avg block time?
  6. Is drift correction desirable in a DAA?
  7. How else could ASERT be tweaked to do drift correction?


Caveats:

  1. This is an active research area and I may be wrong about some things! I do at least have a longstanding interest in DAAs and am one of the co-inventors of ASERT, though Mark analyzed it more deeply.
  2. This document is a draft and will probably be revised.

1. What is drift correction?

Drift correction is the idea of nudging future block times to be longer or shorter than the ideal target (typically 10 minutes), to compensate for historical straying from that ideal. Eg, as of this writing BCH’s historical average block time is 9m25s, and BTC’s is 9m29s. This is for a mix of historical reasons, including the BCH EDA misadventures of 2017, but most obviously just that hashrate has risen pretty steadily (and in total, massively) since 2009: so on average, blocks have been found faster than the various DAAs expected. So a DAA with drift correction will try to make current blocks average at least a little longer than 10 minutes, to steer the historical average back towards 10 minutes.
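As a back-of-the-envelope sketch (Python, using round numbers assumed for illustration rather than exact chain data), the accumulated drift is just the height times the per-block shortfall:

```python
# Rough illustrative numbers, not exact chain data:
# ~649,887 blocks with a ~9m25s (565 s) historical average block time.
height      = 649_887
avg_seconds = 565

# Seconds by which blocks have collectively run "ahead of schedule":
accumulated_drift = height * (600 - avg_seconds)
# ~22.7 million seconds, i.e. blocks have arrived roughly 263 days early in total
```

That ~22.7-million-second figure is the debt a drift-correcting DAA would try to repay by targeting slightly-longer-than-10-minute blocks.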

2. Why would we want it?

As I understood deadalnix from yesterday’s call, his reasoning is that the main other DAA being proposed for BCH right now, ASERT, already does a form of drift correction: so if we’re going to make this magnitude of change to the DAA (past changes were essentially tweaks), we should try to get it right. In particular, if we’re going to correct drift, we should correct it relative to the genesis block: it seems ugly for our DAA to do something like “correct any drift such that the average block time since block 649887 is 10m0s, even though the average before that was 9m25s.”

These intuitions at least make sense. When we make larger changes we should try to get them right, and if we have to pick a block to correct drift relative to, the genesis block is the cleanest choice.

There are some other potential benefits: eg, drift correction would make the wall clock times of future (especially far-future) block heights more predictable, which would make chain fork times more predictable — I remember toying with drift-correcting DAAs around 2017 for this reason. But these benefits seem small compared to the downsides of introducing such a change at this stage.

3. Does ASERT do it?

No. This is a misunderstanding: ASERT is memoryless — it doesn’t care about the past. If you suddenly increase hashrate by 100x, ASERT will let you mine a bunch of blocks very quickly, increasing difficulty until each new block is taking you 10 minutes on average. Then it will happily stabilize at the new increased difficulty, making no effort to compensate for the fast blocks, or the historical avg block time < 10 min that they caused.

In particular, if we’d been using ASERT since genesis, the historical avg block time would probably still be below 10 minutes, due to the historical climb in hashrate. But it would be much closer to 10 min than it is now, since previous DAAs have been lousy at pulling block time back to 10 min.
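This can be illustrated with a deterministic sketch (Python; a toy model, not a real simulation — the 2-day half-life matches the aserti3-2d proposal, and the steady yearly hashrate doubling is an assumption). Under exponentially rising hashrate, ASERT settles at an average solvetime slightly under the 10-minute ideal, and never tries to claw back the difference:

```python
import math

T   = 600.0                      # ideal block time, seconds
TAU = 2 * 86400 / math.log(2)    # tau for a 2-day half-life, as in aserti3-2d

def asert_difficulty(d0, t0, h0, t, h):
    # Absolute ASERT: difficulty implied by anchor (d0, t0, h0) at time t, height h.
    return d0 * math.exp((T * (h - h0) - (t - t0)) / TAU)

growth = 2 ** (1 / (365 * 144))       # hashrate doubles once a year (~144 blocks/day)
difficulty, hashrate = 1.0, 1.0 / T   # units chosen so solvetime = difficulty/hashrate;
                                      # start at equilibrium (expected solvetime 600 s)
t, solvetimes = 0.0, []
for height in range(1, 50_000):
    s = difficulty / hashrate         # expected (deterministic) solvetime
    t += s
    solvetimes.append(s)
    difficulty = asert_difficulty(1.0, 0.0, 0, t, height)
    hashrate *= growth

avg = sum(solvetimes) / len(solvetimes)
# avg settles near T - TAU*ln(growth), about 596.7 s: under 10 min, but only just
```

The shortfall is small and bounded (it depends only on the growth rate and tau), which is why even a from-genesis ASERT chain would have drifted far less than the real one has.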

ASERT wasn’t designed to correct drift, or even to be memoryless. I believe the original motivation for ASERT’s exponentially-fading weighting of past blocks was just that this was a simple accurate-ish way to estimate recent hashrate, which is an accurate-ish way to estimate current hashrate — which is what most DAAs are trying to do. And exponential weights happen to imply the memoryless property.

4. Does ASERT reference a recent block, or the genesis block?

Either or neither. This question again stems from a misunderstanding, like asking “Is the temperature 95°F or 35°C?” — they’re just two different yardsticks for the same thing.

To me the simplest way to think about ASERT is that it calculates difficulty from two changing inputs — the last block’s timestamp and block height — and three fixed parameters:

  1. Ideal block time, eg 10 minutes
  2. Responsiveness (aka τ): a measure of how quickly difficulty adjusts when hashrate/block times change
  3. A difficulty anchor. To calculate the intended difficulty as of current time t, ASERT needs to know what the difficulty (or its inverse, “target”) was at some time in the past — any time! Eg, any of these will do:
    - Difficulty of the most recent block (the original “relative” formulation)
    - Difficulty as of the genesis block (Mark’s “absolute” formulation)
    - Difficulty as of some fixed block height we hardcode, eg the height of the DAA hard fork

Any of these anchors will fit into ASERT’s core logic: “Take the difficulty D₀ as of the anchor time t₀; decrease it by a small constant factor for every second that’s passed since t₀; and increase it by a larger constant factor for every block that’s been found since the anchor height h₀.” Or in math:

D = D₀ × e^((600(h - h₀) - (t - t₀))/τ)
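That formula translates directly into code (a Python sketch; the default tau here, corresponding to a 2-day half-life, is just an example value, and the function works with any valid anchor):

```python
import math

def asert_difficulty(d_anchor, t_anchor, h_anchor, t_last, h_last,
                     ideal_block_time=600, tau=249_306):
    """D = D0 * e^((600*(h - h0) - (t - t0)) / tau), for any anchor (D0, t0, h0)."""
    exponent = (ideal_block_time * (h_last - h_anchor)
                - (t_last - t_anchor)) / tau
    return d_anchor * math.exp(exponent)

# Blocks arriving exactly on schedule: difficulty is unchanged.
on_time = asert_difficulty(100.0, 0, 0, 600 * 144, 144)
# Blocks running 2 days behind schedule: difficulty halves (tau = 2 days / ln 2).
behind = asert_difficulty(100.0, 0, 0, 600 * 144 + 2 * 86400, 144)
```

The two checks show the exponent doing all the work: it is zero when real elapsed time matches the schedule, and each extra half-life of lateness cuts difficulty in half.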

In short, ASERT isn’t rooted at any specific block. (The outside temperature doesn’t reference Celsius or Fahrenheit — that’s just how we choose to describe it.) A particular implementation will usually refer to a specific block: but even then, it must also specify the difficulty as of that block.

One wonders, what’s the source of this confusion? My guess is it’s that previous DAAs calculated from a bunch of recent blocks — eg, taking the average of the block times of the last 144 blocks — whereas ASERT implementations refer to a single block height/time (and difficulty). But in fact that anchor is just one way of defining the difficulty function, just as 4y = 6x + 14 and 10y = 15(x-10) + 185 both define the same function.

The reference block does make a difference in DAAs that do drift correction, like Grasberg: nudging block times to make the average since genesis 10 minutes, will result in different difficulties than nudging to make the average since some recent hard fork 10 minutes. But ASERT is not such a DAA.

Addendum: Zawy replied with one meaningful sense in which ASERT can be dependent on a specific block: if the form of difficulty anchor you choose to use is a “real-world” block (“Calculate current difficulty relative to the actual time and difficulty of block X”). In particular, as deadalnix said, “ASERT in its absolute form using the genesis block as reference would jack up the difficulty so high due to pre-existing drift that it is a complete non-starter.” But I’d argue that requiring ASERT to be anchored by a real-world past difficulty, particularly the genesis block’s, is arbitrary. The difficulty number Satoshi launched with was basically random and has no relevance to 2020 mining: few would consider it an important part of Bitcoin’s social contract.

That social contract point is debatable. But the more concrete technical point is that the exact same ASERT behavior can be produced by specifying any of infinitely many (time, difficulty) anchor pairs: which of those identical-behaving pairs you choose should just be a matter of code convenience.

5. Should the DAA target historical avg block time, or current avg block time?

IMO, current. As far as I know practically no one cared about drift until this month: probably a few have here and there, but no one has been greatly inconvenienced by historical drift, whereas users are significantly inconvenienced by BCH’s current high-variance block times (difficulty/hashrate oscillations) — that’s the actual pressing problem to solve. Also, from a change-minimization pov, all DAAs deployed up till now (including Satoshi’s, which he had many chances to change) have aimed to keep upcoming block times at 10 minutes, ignoring historical drift.

It’s important to note that you can’t have it both ways. With an 11.5-year historical avg block time significantly below 10 minutes, correcting drift means making upcoming avg block times significantly above 10 minutes for a while.
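The magnitude is easy to estimate (rough Python arithmetic using the figures from this article; it assumes the per-block surplus stays fixed, whereas a decaying correction scheme would take longer):

```python
deficit           = 22_590_000            # accumulated drift, in seconds
corrected_avg     = 675                   # 11m15s: a Grasberg-like corrected average
surplus_per_block = corrected_avg - 600   # 75 s of drift repaid per block

blocks_needed = deficit / surplus_per_block                    # ~301,000 blocks
years_needed  = blocks_needed * corrected_avg / (365 * 86400)  # ~6.4 years
```

So under even this optimistic constant-surplus assumption, users would see noticeably slow blocks for years.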

6. Is drift correction desirable in a DAA?

IMO, no. Doing drift correction means prioritizing a 10-minute historical avg block time over a 10-minute current avg block time, and as discussed above I think that’s the wrong way round.

7. How else could ASERT be tweaked to do drift correction?

If I did want to tweak ASERT to do drift correction, without adding too much complexity and without much worsening its performance in Jonathan & Zawy’s DAA simulations, I’d probably just take a weighted average of ASERT’s target block time and the ideal drift-correction block time. Eg:

drift_weight      = 35e-7
target_block_time = ((1 - drift_weight) * asert_target_block_time
                     + drift_weight * block_time_to_correct_all_drift)

This would result in an initial average block time of about 11m15s (like Grasberg), gradually easing down to 10 minutes over the years as the historical avg block time climbed up to 10 minutes and block_time_to_correct_all_drift shrunk from its current ~22,590,000 seconds to ~0. A weighted geometric average might have advantages instead; either would work.
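Here’s how that trajectory plays out (a Python sketch; the initial deficit is the rough figure above, so the exact numbers shouldn’t be taken literally):

```python
drift_weight = 35e-7
deficit = 22_590_000.0   # block_time_to_correct_all_drift, in seconds
targets = []
for _ in range(200_000):
    target = (1 - drift_weight) * 600 + drift_weight * deficit
    targets.append(target)
    deficit -= target - 600   # each longer-than-600s block repays some drift

# Initial target ~679 s (~11m19s with these rough inputs), decaying toward
# 600 s with a half-life of ln(2)/drift_weight, roughly 198,000 blocks.
```

With a fixed drift_weight the remaining deficit decays geometrically, so the correction is front-loaded but never finishes abruptly.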

But in any case I don’t actually recommend adding drift correction to ASERT.



Cryptocurrency enthusiast with a background in software development, finance and teaching. @JaEsf on Twitter, work http://calibratedmarkets.com/.

Jacob Eliosoff
