Distributed Adaptive Low Latency Blockchains

The inability of a database to distribute data to the querent without total synchronisation is a serious problem in a large blockchain network: it caps the rate of transactions the network can handle. In centralised distributed databases, as the number of nodes grows, and with it the variance of their latency, a law of the lowest common denominator rules.

Fundamental Principles of a Blockchain

First and foremost, a blockchain is a database. To be precise, it is a log. It records entries in their sequence in time, and the important thing to consider is that each record is a branch connected to another record.

To certify a new transaction, it must correctly correspond to the most recent record it builds on; this verified chain of custody is the defining feature that makes it a bill of exchange. The change-splitting protocol, which makes relatively arbitrary fractions possible, establishes a verifiable way of splitting the bill, and that is what makes it a functional cash asset.

The term blockchain therefore means a fully replicated, log-format database with a graph structure. Bitcoin has two primary record types: a minted coin, and an assignment protocol that allows arbitrary splitting of tokens. Really it is a graph database, and instead of tables you have leaves, branches and nodes.
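To make that graph structure concrete, here is a minimal sketch in Python of the two record types and the split rule. The names and fields are illustrative only, not Bitcoin's actual formats:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Output:
    owner: str    # who the tokens are assigned to
    amount: int   # quantity carried by this leaf of the graph

@dataclass
class Mint:
    """A minted coin: a record with no parent, the root of a branch."""
    outputs: List[Output]

@dataclass
class Assignment:
    """An assignment record: spends an output of an earlier record and
    splits it into arbitrary fractions, extending the chain of custody."""
    parent_id: str         # id/hash of the record being spent
    parent_index: int      # which output of that record is spent
    outputs: List[Output]  # the split

def valid_split(parent_output: Output, assignment: Assignment) -> bool:
    # The change-splitting rule: the new outputs must account for exactly
    # the value of the output they spend.
    return sum(o.amount for o in assignment.outputs) == parent_output.amount
```

Every assignment points back at the output it spends, so following parent_id references walks the chain of custody back to a mint.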

The Problem of Synchronisation

Most of the total data of a blockchain is not needed most of the time by most nodes, but you never know exactly who will need it, or where.

The limit on transaction rates in most blockchains is tied to the propagation rate. Part of how blockchains like Steem and Dash accelerate this is with a second stratum of high-capacity nodes that are paid to keep the database synchronised as quickly as possible. But even with this, as the number of nodes in the database grows, you eventually strike a latency limit caused by abbreviating the consensus building to a smaller network with maximal resources.

My Proposed Solution

The graph is not used uniformly by nodes at the edge of the network. Individual users have a much smaller interest in the total data than the core network does. Users have regular relationships with followers and followed, and with their subaccounts, and of course feeds have a much smaller volume but a wider distribution. So there are many mappings overlaying the data.

The solution is twofold:

Trusted caching nodes (that you are running or frequently using) keep up to date, but they also inform other nodes which nodes of the database they are most frequently interested in, based on the queries they receive.

Subscriptions to update events on database nodes can be requested by other nodes. It is a bit like reusing connections in HTTP transfers: the relevant subset of broadcasts is rapidly propagated to subscribers. In this case, when the decision of who to relay packets to first must be made, the server can run a filter that adapts over time using a cache garbage-collection strategy.

The former could be implemented so as to interface between applications and the blockchain, and use request history to inform other nodes how to prioritise their broadcast paths.
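A minimal sketch of how such an adaptive filter might look, assuming a simple decayed-counter table (the class, method names and parameters below are hypothetical, not an existing implementation): each client query bumps a score for the keys it touches, scores age the way a cache garbage collector ages entries, and the hottest keys are what the node advertises to its peers as a subscription.

```python
import time
from collections import defaultdict

class InterestFilter:
    """Tracks which database nodes (keys) local clients query most often,
    ageing the counts the way a cache garbage collector ages entries."""

    def __init__(self, half_life=600.0, max_keys=10_000):
        self.half_life = half_life        # seconds for a score to halve
        self.max_keys = max_keys
        self.scores = defaultdict(float)  # key -> decayed query count
        self.last_seen = {}               # key -> time of last query

    def _score(self, key, now):
        last = self.last_seen.get(key, now)
        return self.scores[key] * 0.5 ** ((now - last) / self.half_life)

    def record_query(self, key, now=None):
        now = time.time() if now is None else now
        self.scores[key] = self._score(key, now) + 1.0
        self.last_seen[key] = now
        if len(self.scores) > self.max_keys:
            self._collect(now)

    def _collect(self, now):
        # Garbage collection: keep only the hottest half of the keys.
        keep = sorted(self.scores, key=lambda k: self._score(k, now),
                      reverse=True)[: self.max_keys // 2]
        self.scores = defaultdict(float, {k: self.scores[k] for k in keep})
        self.last_seen = {k: self.last_seen[k] for k in keep}

    def subscription(self, top_n=100, now=None):
        """The interest set this node advertises to its peers."""
        now = time.time() if now is None else now
        return sorted(self.scores, key=lambda k: self._score(k, now),
                      reverse=True)[:top_n]
```

An application-facing node would call record_query for every client request it serves and periodically send subscription() to its peers, which is all the "inform other nodes" step above requires.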

The approach also somewhat resembles the operation of anycast routing, which forwards packets by minimal hop count to distributed instances of a server. Here, instead, the node keeps a pair of buckets: matching transactions are sent first, by priority, to the peers requesting them most frequently, and then to everyone else.
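Continuing the sketch above, the bucketing could be as simple as the following; this only illustrates the relay ordering, not a wire protocol, and the peer and account names are made up:

```python
def relay_order(tx_keys, peer_subscriptions):
    """peer_subscriptions: peer id -> set of keys that peer advertised.
    Returns peers in relay order: the bucket of interested peers first,
    ranked by how strongly their subscription matches the transaction,
    then the bucket of everyone else."""
    interested, rest = [], []
    for peer, subs in peer_subscriptions.items():
        overlap = len(subs & set(tx_keys))
        (interested if overlap else rest).append((overlap, peer))
    interested.sort(reverse=True)   # strongest match relayed first
    return [peer for _, peer in interested + rest]

# Example: a transfer touching alice and bob reaches peer-A, then peer-B,
# and only afterwards peer-C, which never asks about either account.
order = relay_order(
    {"acct:alice", "acct:bob"},
    {"peer-A": {"acct:alice", "acct:bob"},
     "peer-B": {"acct:alice"},
     "peer-C": {"acct:zed"}},
)
```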

With this kind of replication propagation, in the complex structure of a blockchain's nodes, data is propagated faster where related data is frequently requested. Instead of acting as an opaque cloud black box, the network leverages its complexity to shorten the paths and delays between nodes working on the same data.

Where now we have, say, a 4-second lag between sequential, stacked (on the same branch) transactions, direct propagation paths keep inputs and outputs as close together as possible.

Minimising Traffic

This strategy aims to minimise traffic on the network down to the shortest paths between interested parties. The rest of the blockchain still needs to converge to consensus quickly, but for fast transactions this needs to happen more directly, to allow the propagation rate to be optimised. Synchronising data to nodes whose clients are not requesting it makes no sense; synchronisation should prioritise traffic to fit utilisation.

The tricky part has to do with filtering the data quickly enough that it actually gives an advantage. With a demand-driven data replication strategy, only new, geographically separated transaction counterparties will incur a higher latency; existing connections, ranked by the frequency and recency of queries, will be kept closer to where they are asked for. This allows the network to be fully converged only ever to the degree it is needed. It will allow much lower transaction latency and notification times, and will scale more fluidly.
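As a rough illustration of "converged only to the degree it is needed", a node might push a new record eagerly only to peers whose advertised demand for the affected keys passes a threshold, leaving everyone else to the normal background synchronisation. The threshold and the shape of the advertised demand table below are assumptions for the sketch, not measured or specified values:

```python
EAGER_PUSH_THRESHOLD = 2.0   # assumed: demand needed to justify an eager push

def plan_propagation(record_keys, peer_demand):
    """peer_demand: peer id -> {key: decayed query score}, i.e. the interest
    each peer has advertised (cf. the InterestFilter sketch above, here with
    the scores attached). Returns (eager, lazy): peers the new record is
    pushed to immediately, and peers left to ordinary background sync."""
    eager, lazy = [], []
    for peer, demand in peer_demand.items():
        score = sum(demand.get(k, 0.0) for k in record_keys)
        (eager if score >= EAGER_PUSH_THRESHOLD else lazy).append(peer)
    return eager, lazy
```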


We can't stop here! This is Whale country!

Loki was born in Australia and is now wandering Amsterdam again after 9 months in Sofia, Bulgaria. IT generalist, physics theorist, futurist and cyber-agorist. Loki's life mission is to establish a secure, distributed layer atop the internet, and enable space migration, preferably while living in a beautiful mountain house somewhere with a good woman, and lots of farm animals and gardens, where he can also go hunting and camping.

I'm a thoughtocaster, a conundrummer in a band called Life Puzzler. I've flipped more lids than a monkey in a soup kitchen, of the mind. - Xavier, Renegade Angel

*

All images in the above post are either original from me, or taken from Google Image Search, filtered for the right of reuse and modification, and either hotlinked directly or altered by me.
