What is the blockchain hard fork “missile crisis?”

Over the past couple of months there have been a number of discussions revolving around increasing the Bitcoin block size from its current 1 MB limit to 20 MB. One such plan is Gavin Andresen’s proposal (this is not to single him out, as there are others with similar proposals). The code change itself is trivial, as the limit can simply be switched to any arbitrary number in a couple of keystrokes (for example, see Vitalik Buterin discuss this at 14:15).

However, getting the majority of validating nodes, miners and the rest of the ecosystem on board in a timely manner is a very non-trivial matter.

Recall that, as illustrated by Organ of Corti and Dave Hudson, the average block size has increased over the past year to the point where we will likely max out at around three transactions per second with the current 1 MB limit. Since many of the investors, developers and entrepreneurs in this space would like to make Bitcoin ‘competitive’ with other payment platforms such as Visa, in their view this number eventually needs to increase by several orders of magnitude.
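As a rough back-of-the-envelope check on that figure, the short Python sketch below reproduces the roughly three-transactions-per-second ceiling. The 500-byte average transaction size is an assumption for illustration, not a measured value:

# Rough throughput estimate under a fixed block size limit.
# The 500-byte average transaction size is an illustrative assumption.

BLOCK_SIZE_LIMIT = 1_000_000   # bytes (1 MB)
BLOCK_INTERVAL = 600           # seconds (10-minute target)
AVG_TX_SIZE = 500              # bytes, assumed average transaction size

txs_per_block = BLOCK_SIZE_LIMIT / AVG_TX_SIZE
txs_per_second = txs_per_block / BLOCK_INTERVAL

print(f"~{txs_per_block:.0f} transactions per block")    # ~2000
print(f"~{txs_per_second:.1f} transactions per second")  # ~3.3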

Fundamentally there are two trade-offs in block size economics:

  • Keeping a 1 MB block size requires higher fees from end-users but results in a more decentralized network
  • With a larger, 20 MB block size, fees are (temporarily) subsidized for end-users but with fewer validating nodes on the network

A quick explanation of both:

  • Retaining a 1 MB block size ultimately results in higher transaction fees because block space is scarce and miners will only process and include transactions based on market-based prioritization (e.g., pay more to be included faster). While this would likely mean the end of certain types of transactions (such as “long chain” transactions) as well as fee-less transactions, which have disproportionately increased the size of the blockchain over the past six months relative to actual commerce, at the same time this design decision would have the effect of retaining some nominal decentralization: the growth in blockchain size would remain relatively linear and thus the blockchain could be validated by several thousand nodes, as it is done today, without (much) extra cost.

In early March 2014 there were approximately 10,000 nodes; however, over the past year that number has declined by roughly one-third. What does the distribution of the roughly 6,400 current nodes look like?

Recall that the original value proposition of the Bitcoin blockchain was its decentralized character: the more miners and validating nodes that are geographically distributed, the less prone the network is to single points of failure. Furthermore, while many people call the various artifacts that have increased the blockchain’s size “bloat,” because this is a public good and no one owns it, it is imprecise to do so (e.g., one man’s 80-byte “trash” OP_RETURN is another man’s data-storing “treasure”).

Whether consumers are sensitive to this change in fees is another matter; given elastic demand, they may simply switch over to substitute goods (e.g., rival chains and ledgers). What does this mean exactly?

  • An increase to a 20 MB block size would likely continue the same “low” fee (donation) structure practiced and promoted today, as there would purportedly be more room for non-priority transactions. The known challenge, however, is that if 20 MB blocks became “full,” this would require a corresponding increase in bandwidth and disk space, costs which would have to be borne by the validating nodes that already operate as public goods. That is to say, a blockchain that grew by 20 MB every ten minutes would fill over one terabyte a year (see the sketch below), which would create additional costs for participants and likely reduce the number of validating nodes and therefore the decentralization of the network.
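The terabyte-a-year figure is easy to reproduce; a minimal sketch, assuming every block is filled to its limit:

# Annual blockchain growth if every block were filled to the size limit.

BLOCK_INTERVAL_MIN = 10
BLOCKS_PER_YEAR = 365 * 24 * 60 // BLOCK_INTERVAL_MIN   # ~52,560 blocks

for block_size_mb in (1, 20):
    growth_gb = block_size_mb * BLOCKS_PER_YEAR / 1024
    print(f"{block_size_mb:>2} MB blocks -> ~{growth_gb:,.0f} GB per year")

# 1 MB blocks  -> roughly 51 GB per year
# 20 MB blocks -> roughly 1,027 GB per year (just over a terabyte)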

The other challenge to Andresen’s plan is that, because the prioritization of transactions would still not be shifting towards fees paid to miners, this would in turn continue the status quo in which miners largely rely on seigniorage to operate. This is an unhealthy trend as it stalls the transition from block rewards to fees, which has been the stated narrative since day one on October 31, 2008 (see section 6).

It is difficult to predict what exactly will happen, as the key actors in this space are still deciding what to spend social capital on.

Gavin Andresen, as recently as two weeks ago, stated that most of the large payment processors, exchanges and other service companies are on board with his plan (see also David Davout’s latest dialogue with Andresen). Furthermore, others in the community have (likely erroneously) found a correlation between market cap and transaction volume, yet as we know, correlation does not actually imply causation. Similarly, ‘Death and Taxes’ recently presented a narrative reinforcing Andresen’s view, yet for some reason glossed over the all-important miners’ perspective. Others, such as the ideological wing personified by Mircea Popescu, claim that they will fight this effort with an actual attack.

Irrespective of the size to which a block is increased, the change will likely create at least a temporary fork, as validating nodes need to upgrade and they are not being compensated for storage and traffic (Andresen’s plan is to “future proof” the protocol such that the 20 MB change is included in a patch this year but isn’t “turned on” until needed later on). There is at least one open question: what is the minimum number of full nodes required for the network to operate within the current trust/security model? Unlike miners, their value to the system is hard to measure.
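To make the mechanics of such a fork concrete, here is a toy model (an illustrative simplification of my own, not Bitcoin Core’s actual validation code) showing how nodes still enforcing a 1 MB limit reject a block that upgraded nodes accept, splitting the network onto separate chains until the laggards upgrade:

# Toy illustration of how a block size hard fork splits validators.
# "Old" nodes enforce a 1 MB limit, "new" nodes enforce 20 MB.

MB = 1_000_000

class Node:
    def __init__(self, name, max_block_size):
        self.name = name
        self.max_block_size = max_block_size
        self.chain = []   # sizes of blocks this node has accepted

    def receive_block(self, block_size):
        if block_size <= self.max_block_size:
            self.chain.append(block_size)
            return True
        return False      # rejected as consensus-invalid

old_node = Node("old rules (1 MB limit)", 1 * MB)
new_node = Node("new rules (20 MB limit)", 20 * MB)

# A miner running the new rules produces an 8 MB block.
for node in (old_node, new_node):
    accepted = node.receive_block(8 * MB)
    print(f"{node.name}: {'accepted' if accepted else 'rejected'} the 8 MB block")

# From here the two groups follow different chains until the old nodes
# upgrade (or the oversized block's chain is abandoned).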

While the field is young, one researcher experienced in this space is Jonathan Levin, who modeled network propagation in his master’s thesis. I reached out to him and in his view:

I think that the 20mb proposal is untenable given the current way that blocks are propagated around the Bitcoin network. The Bitcoin network, and specifically the Bitcoin miners, use a gossip network to relay blocks to each other. That means that as the size of the block increases, the time that it takes to spread around the network also increases linearly. We have seen this first in the work of Decker and Wattenhofer as well as my own work.

The problem is that the increased time that blocks take to propagate around the network increases the probability of orphan races between different mining pools. If you create blocks that are 20mb and a competing pool is creating blocks under 1mb or even empty ones, they have a higher expected return per hash. This is because you would expect your blocks to lose out to smaller blocks in an orphan race if both are found in quick succession. Now we can argue that miners will continue to create large blocks out of altruism, but if we continue to increase the size of the blocks without greater utilisation of better block relaying protocols we risk violating this equilibrium and miners resorting to nasty strategies like creating empty blocks which suit no one.
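Levin’s orphan-race argument can be sketched numerically. In the toy model below, propagation time grows linearly with block size and rival blocks arrive as a Poisson process with a ten-minute mean; the 80-seconds-per-megabyte slope is an assumption chosen for illustration in the spirit of the Decker and Wattenhofer measurements, not a figure from either study:

# Sketch: how propagation delay feeds into orphan risk.
# Assumptions (for illustration only):
#  - propagation time grows linearly with block size (80 s per MB), and
#  - rival blocks arrive as a Poisson process with a 600 s mean interval,
#    so P(rival block during propagation) ~= 1 - exp(-delay / 600).
import math

SECONDS_PER_MB = 80.0     # assumed propagation slope
BLOCK_INTERVAL = 600.0    # seconds

def orphan_race_probability(block_size_mb):
    delay = block_size_mb * SECONDS_PER_MB
    return 1.0 - math.exp(-delay / BLOCK_INTERVAL)

for size in (0, 1, 20):
    p = orphan_race_probability(size)
    print(f"{size:>2} MB block -> ~{p:.0%} chance a rival block appears "
          f"before it fully propagates")

# An empty block propagates almost instantly, while under these assumptions
# a 20 MB block is very likely to face (and often lose) an orphan race.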

I also spoke with several other professionals in this space.

For example, I spoke with Atif Nazir, co-founder of Block.io and an instructor at Blockchain University. According to him:

On the one hand, increasing block sizes, as you say, may result in lower transaction fee requirements. However, if the transaction fees actually are lowered by, say, 1000x from what they are now (0.00001 is the minimum accepted by the reference client), this will lower the cost of “institutional attacks” on the Bitcoin infrastructure, where an attacker can push 1,000 transactions for the former cost of 1. The attack will basically be “make infrastructure expensive to run for the average joe, drive them towards centralized infrastructure services that run APIs, Blockchain Explorers, etc.” It is good for business, bad for the decentralization of the network in the near term.

We’ve seen something like this occur on the Dogecoin network in the past few months, where one user or a group of individuals were pushing transactions with zero transaction fees. These transactions were accepted as valid by the Dogecoin reference clients and, as a result, caused bandwidth consumption hikes for the dorm-room nodes which populate most of the current network(s). The resulting change by the Dogecoin Core team was to add a fee of 1.0 DOGE for every transaction, which isn’t yet mandatory but is on its way there. The dorm-room nodes, however, are already on the decline in both Bitcoin and Dogecoin due to the increasing size of the blockchain and the bandwidth consumed by them.

Increasing the block sizes sounds like a good idea for the number of transactions flowing on the network, but in the near term it will drive a lot of the nodes out of the system because of CPU/bandwidth/disk IO hikes. Increasing the block sizes will undoubtedly increase infrastructure costs, driving more users towards centralized places that can afford to host API services for the blockchain. However, given this crunch on the average-joe Bitcoin nodes, this will lead to a more concentrated effort towards “pick what you need” style nodes (say, SPV). Again, in the near term, the number of “full nodes” on the network will dwindle, but as more companies come into the ecosystem, this number will inevitably rise.

Bitcoin as a whole is headed towards a network where most nodes don’t actually host the entire blockchain; increasing the block size will only accelerate this change. This will lead to more innovative solutions, and who knows, we might find a way for nodes to communicate cost-effectively rather than the current “gossip”-style protocol we use, where you inform all your peers when you hear about a new transaction. The community is very dynamic, and I think the longer term outlook for the network looks good regardless. Bitcoin is powered by nerds like you and I, and we tend to find solutions where others walk away.

Nazir raises an interesting point in terms of a hypothetical time horizon for when a transition (between the short term and the long term) could take place.

Another individual who has done a lot of modeling of incentives, mining and block sizes is Dave Hudson, a software developer who also writes at HashingIt. According to him:

Changes to the distributed consensus software within Bitcoin raise really interesting questions about the evolution of cryptocurrencies and how decentralised they really are. With each change we’re actually watching something interesting happen where the ongoing participants in the system all effectively agree to move to a new system: BTC becomes BTC’ becomes BTC”, etc. We might be calling BTC” Bitcoin but any legacy nodes running BTC’ or BTC also think they’re Bitcoin too. At some point in time something happens and the various systems start to disagree about what is or isn’t valid, and those disagreements could be very subtle. Imagine for example that BTC” introduced a subtle change that inadvertently made some of Satoshi’s coins unspendable; nobody might ever know until someone with Satoshi’s keys attempts to spend their Bitcoins. Arguably it might already have happened as the result of some random compiler bug (not a fault in the Bitcoin-core code, but a bug in the way that’s transformed into something that runs on the node CPUs).

Clearly the Bitcoin-core developers try very hard to ensure that this sort of thing doesn’t happen by accident, but in order to sustain all participants’ holdings within the system they really do have to try to ensure that every node moves from BTC to BTC’ to BTC”, etc. In order to do this they essentially have to persuade everyone to migrate to each new version within some specific time window.

Now let’s imagine for a moment that instead of miners all tending to mine through centralised infrastructure (mining pools), we really did have true decentralisation and had hundreds of thousands, or millions, of nodes that all did their own transaction selection and mining. Perhaps they’re even embedded into things that their users didn’t even realise were contributing to mining. At this scale it would probably be almost impossible to get them all to move to adopt a planned fork. We would either see the protocol totally stagnate or else we would see potentially very significant forks occurring.

In practice the system holds together in a cohesive way because, in the absence of a precise protocol spec, the core devs try to ensure that everyone uses the same consensus-critical software and runs it on the same sorts of hardware that all do things the same way, with some reasonably consistent set of capabilities.

It seems a slight irony that one of the key factors in successfully maintaining and sustaining the Bitcoin network is continual centralised action, and that things aren’t actually massively decentralised.

This last point is intriguing in that a lot of the software in this space is still relatively homogeneous, and if a network were to scale to become as distributed (or decentralized) as is hoped, while at the same time incorporating many nodes and clients, then a diverse set of developer tools could prevent attacks, or the lack thereof could perhaps even invite them (e.g., if every actor in the ecosystem uses the same client, that could create a vulnerability for the network).

In an exchange with Peter Todd, a contributor and developer on Bitcoin Core and other related protocols (such as ClearingHouse), he framed the issue this way:

At the latest O’Reilly Media conference I basically pointed out that, because this is an externality / tragedy-of-the-commons problem, we may have to see Bitcoin fail due to a blocksize increase first before the community actually groks the issue. Personally I’m inclined not to oppose a blocksize increase on these grounds – Bitcoin failing cleanly is very likely good for my interests.

In terms of “getting people on board” – to a degree you inherently can’t do this, because a blocksize increase will inherently exclude people from the system. See for example the discussion between Greg Maxwell and Gavin Andresen several weeks ago on the #bitcoin-dev IRC channel.

I spoke with Robert Sams, co-founder of a fintech startup who has previously written analysis covering the marginal costs of Bitcoin-like systems. In his view:

Levin’s point about network propagation is key: mining a larger block has a lower expected return because of the increased probability of losing out to a smaller block in an orphan race.

Now all of what you argue is a totally sound economic conjecture based on the assumption of distributed mining economics. Miners include tx until the marginal cost of tx inclusion (the opportunity cost of including a different tx when up against the block limit + the block propagation effect) equals marginal revenue (the fee).

However, for me the crucial economic force here is what happens to fees under concentrated mining. The logic changes from the marginal-cost-equals-marginal-revenue logic of the above distributed case to a more strategic, oligopolistic pricing dynamic. What I mean is this. In the distributed case, whether or not a given miner includes a given tx has no material effect on the expected confirmation time for the tx sender. But in the concentrated mining scenario it does. If some pool is 35% of the network, the decision by that pool not to include the tx will materially increase the confirmation time of that transaction. So miners can extract more of the value that tx senders place on fast confirmation times by setting their own minimum fee threshold, knowing that this threshold will over time affect the fees that tx senders include. What that optimal threshold is depends upon how much senders are willing to pay for quicker tx confirmation times. Who knows what that is, but the implication is clear: under concentrated mining, fee levels will begin to reflect more what tx senders are willing to pay rather than the cost to miners of including them.

So when you cast the blocksize issue in this concentrated mining context, it’s really not clear what will happen. My bets are that fees will go up and we won’t have to worry about blocksizes because higher fees will act as a brake on adoption.
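Sams’s 35 percent example can be made concrete with a little arithmetic. In a simplified model (my own sketch, not his) where one pool refuses to include a transaction and every other miner does, the expected wait stretches in proportion to the remaining hashrate:

# Expected confirmation time when a pool with hashrate share p excludes a
# transaction and every other miner includes it (a simplified model that
# ignores variance and the other pools' own fee policies).

BLOCK_INTERVAL_MIN = 10.0

def expected_confirmation_minutes(excluding_share):
    # Each block is mined by a "willing" pool with probability (1 - p),
    # so the expected number of blocks until inclusion is 1 / (1 - p).
    return BLOCK_INTERVAL_MIN / (1.0 - excluding_share)

for share in (0.0, 0.35, 0.50):
    minutes = expected_confirmation_minutes(share)
    print(f"{share:.0%} of hashrate excludes the tx -> "
          f"~{minutes:.1f} minutes expected confirmation")

# 0% -> 10.0 minutes, 35% -> ~15.4 minutes, 50% -> 20.0 minutes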

If block sizes are increased, we will learn a lot about the dynamics of the community, the interplay between incentives such as fees and seigniorage in on-boarding (and off-boarding) miners, as well as how price-sensitive users in this space are.

Ultimately it is the miners who decide, as they are the entities creating Sybil protection and preventing double-spend attacks (or in some cases, providing that service). Or as Raffael Danielli, a quantitative research analyst at ING, explained:

In theory, fee rewards should incentivize miners to include as many transactions as possible. In reality, however, fee rewards are a small percentage of block rewards and the risk-reward ratio simply doesn’t add up at the moment (risking an (almost) sure 25 BTC payoff to get a potential, say, 25.1 BTC). What are the rational incentives for miners to upgrade and actually fill 20mb blocks? At the moment there are none that I am aware of. If there are no incentives for miners then this is not going to happen. Period. There is no altruism when it comes to mining and anyone who bets on it is in for a rude awakening.
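Danielli’s risk-reward point reduces to a simple expected-value comparison. The sketch below uses orphan probabilities that are assumptions purely for illustration; the 25 versus 25.1 BTC figures are his:

# Is an extra 0.1 BTC in fees worth a bigger block's orphan risk?
# The orphan probabilities below are illustrative assumptions only.

BLOCK_REWARD = 25.0   # BTC block subsidy at the time of writing
EXTRA_FEES = 0.1      # additional fees from filling a 20 MB block

def expected_value(reward, orphan_probability):
    return reward * (1.0 - orphan_probability)

small_block_ev = expected_value(BLOCK_REWARD, orphan_probability=0.01)
large_block_ev = expected_value(BLOCK_REWARD + EXTRA_FEES, orphan_probability=0.02)

print(f"small (near-empty) block EV: {small_block_ev:.3f} BTC")   # 24.750
print(f"full 20 MB block EV:         {large_block_ev:.3f} BTC")   # 24.598

# With these assumed orphan rates the extra 0.1 BTC in fees does not cover
# the additional orphan risk, which is Danielli's point.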

But this crosses over into the new field of cryptoeconomics, which is a topic for another day.

[Thanks to Anton Bolotinksy for his thoughts on measuring the value of nodes within the system.]

I would add that there is a downward pressure on block size for block makers. I’ve done some research with Nadi Sarrer showing that the larger the block, the longer propagation takes. Even if a pool uses the relay network, increased latency also increases the chance of a pool losing an orphan race.

So block makers have to decide how to maximise fees while at the same time minimising block size. Some, like Discus Fish (f2pool), have tested both minimum block size (only including the coinbase tx) and maximum block size, and lately seem comfortable producing maximum-sized blocks each time. (They also seem to have a ‘pay for tx inclusion’ scheme here, but I don’t know much about it.)

I think eventually pools will aim to use a decision making algorithm to:

a) Pick a block size they think will make losing an orphan race less likely.

b) Include all available high fee density (fee/kb) transactions in the block

c) then include high fee transactions

d) any leftover space can be given to low- and zero-fee txs

With more data, this sort of process could be optimised to calculate the expected value of a block including the probability of losing orphan races. This would only lead to larger blocks when the value of the included txs outweighed the losses due to orphan races in the long term.
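A minimal sketch of steps (a) through (d), with an orphan-risk-adjusted expected-value check of the kind described above; the fee amounts, propagation slope and mempool contents are assumptions for illustration, not measured pool behaviour:

# Greedy block template construction: sort by fee density, then only add a
# tx if it still raises the block's orphan-adjusted expected value.
# All numbers here are illustrative assumptions.
import math

BLOCK_REWARD = 25.0        # BTC
BLOCK_INTERVAL = 600.0     # seconds
SECONDS_PER_MB = 2.0       # assumed propagation slope for a well-connected
                           # pool using a relay network
MAX_BLOCK_BYTES = 1_000_000

def orphan_probability(block_bytes):
    delay = (block_bytes / 1_000_000) * SECONDS_PER_MB
    return 1.0 - math.exp(-delay / BLOCK_INTERVAL)

def expected_value(total_fees, block_bytes):
    return (BLOCK_REWARD + total_fees) * (1.0 - orphan_probability(block_bytes))

def build_block(mempool):
    """mempool: list of (tx_id, size_bytes, fee_btc) tuples."""
    # Steps (b)-(d): highest fee density first; zero-fee txs sort last.
    candidates = sorted(mempool, key=lambda tx: tx[2] / tx[1], reverse=True)
    chosen, size, fees = [], 0, 0.0
    for tx_id, tx_size, tx_fee in candidates:
        if size + tx_size > MAX_BLOCK_BYTES:
            continue
        # Step (a), recast as a marginal check: include the tx only if it
        # raises the orphan-adjusted expected value of the block.
        if expected_value(fees + tx_fee, size + tx_size) > expected_value(fees, size):
            chosen.append(tx_id)
            size += tx_size
            fees += tx_fee
    return chosen, size, fees

mempool = [("a", 250, 0.0005), ("b", 500, 0.0001),
           ("c", 100_000, 0.0), ("d", 400, 0.001)]
block, size, fees = build_block(mempool)
print(f"included {block}, {size} bytes, {fees:.4f} BTC in fees")
# The large zero-fee tx "c" is excluded: it adds orphan risk but no fees.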

Of course, if all block makers had the same sized blocks, this would not be an issue. But if a block maker can win an orphan race by the expedient of having a smaller block, then they will.

Some open questions for the community: How will fewer network nodes affect orphan races? If the blocks are solved many seconds apart, I would think that fewer network nodes will mean fewer orphan races, since the time for a block to propagate to most of the network will reduce significantly. However, if the blocks are solved at around the same time, an orphan race might be more likely, since the paths taken by the propagating blocks will have less effect on the overall propagation time. Which do you think is more likely?

In summary: if block makers are rational actors and the risk of losing orphan races is a significant downward pressure on block size, I don’t think increasing the available block space will have a significant effect on actual block size. There’s a lot of room for improvement in the tx inclusion algorithms used by most pools, and if I were a block maker I would increase the fee density of blocks and include far fewer low-fee and fee-free txs.
