The test takes a long time due to a sleep (still need to work on that), but also FoldKeysFun used ++ rather than [{B, K}|Acc] to extend the list. Changing the way this accumulates gives an order-of-magnitude speed-up for these queries.
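The cost difference can be sketched with a Python analog (illustrative only - the actual fix is in the Erlang fold fun; function names here are made up):

```python
from collections import deque

def fold_via_concat(keys):
    """Analogous to Acc ++ [{B, K}]: the whole accumulator is copied
    on every step, making the fold O(n^2) overall."""
    acc = []
    for k in keys:
        acc = acc + [k]  # full copy each iteration
    return acc

def fold_via_prepend(keys):
    """Analogous to [{B, K} | Acc] followed by lists:reverse/1:
    O(1) per step (a deque stands in for Erlang's cons cell),
    with a single reverse at the end to restore ordering."""
    acc = deque()
    for k in keys:
        acc.appendleft(k)
    return list(reversed(acc))
```

Both produce the same result; only the per-step cost differs, which is where the order-of-magnitude speed-up comes from.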
ets:next is not start-inclusive - so we shouldn't use ets:next when querying the ledger cache until it has been confirmed that a key matching the StartKey is not present.
This resolves the intermittently failing unit test which highlighted the issue (and also makes that intermittent failure a permanent failure by expanding the test cases it covers).
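The off-by-one is easy to see with Python's bisect over a sorted key list standing in for the ets ordered_set (an analogy, not leveled code):

```python
import bisect

def ets_next(sorted_keys, key):
    """Mimics ets:next/2: returns the first key STRICTLY greater
    than the given key, or None at the end of the table."""
    i = bisect.bisect_right(sorted_keys, key)
    return sorted_keys[i] if i < len(sorted_keys) else None

def first_from(sorted_keys, start_key):
    """Start-inclusive scan: consider the StartKey itself before
    moving on to strictly-greater keys."""
    i = bisect.bisect_left(sorted_keys, start_key)
    return sorted_keys[i] if i < len(sorted_keys) else None

keys = [2, 4, 6]
assert ets_next(keys, 4) == 6    # a key matching the StartKey is skipped
assert first_from(keys, 4) == 4  # the inclusive behaviour the query needs
```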
The concept of the segment hash belongs to the leveled_tictac module, not the codec.
Previously the alignment of tictac and store was accidental; this change makes it explicit.
Previously there was no is_empty check, and there was a workaround using binary_bucketlist. But with many buckets this is a slow seek (calling get-next-key over and over).
Instead have a proper is_empty check.
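A rough Python model of the cost difference, with the store represented as a sorted list of (bucket, key) pairs (the layout and names are assumptions for illustration, not leveled's API):

```python
import bisect

def buckets_by_seeking(store):
    """binary_bucketlist-style: discover each bucket by seeking past
    the previous one - one 'get next key' seek per bucket, so cost
    grows with the number of buckets."""
    buckets, i = [], 0
    while i < len(store):
        b = store[i][0]
        buckets.append(b)
        # seek to the first entry beyond bucket b
        i = bisect.bisect_right(store, (b, chr(0x10FFFF)))
    return buckets

def is_empty(store):
    """Direct emptiness check: no seeking at all."""
    return len(store) == 0
```

Deriving emptiness from `buckets_by_seeking(store) == []` works, but pays one seek per bucket; the direct check is constant time.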
Previously TicTac trees were of equal width at both levels. This seemed a reasonably efficient way of scaling up key sizes (e.g. at each increment both level widths doubled, giving a 4x increase in size).
However if we're to do comparisons with fetch_root followed by fetch_branches this might not be as efficient. With a 1024 x 1024 tree, 1000 deltas require most of the tree to be exchanged, whereas with a 4096 x 256 tree 1000 deltas only require about a quarter of the tree to be sent over the network.
The whole tree, with a 4-byte hash size, is 4MB in this case - requiring over 30ms of network time on a Gbps link. There may be longer blocking of inter-node communication on LFNs (as per the tests in Meiklejohn's Partisan paper - https://arxiv.org/abs/1802.02652).
This change makes TicTac tree tests run slower. But perhaps this is a price worth paying for smoothing out the potential impact on inter-node communication?
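The sizing argument above can be checked with some quick arithmetic, assuming 4-byte hashes and deltas spread uniformly at random over the leaves (a simplifying assumption):

```python
import math

HASH_BYTES = 4

def expected_exchange_fraction(branches, width, deltas):
    """Expected fraction of leaves exchanged when each delta dirties a
    uniformly random leaf: any branch with at least one delta must be
    fetched whole (width leaves)."""
    p_branch_dirty = 1 - math.exp(-deltas / branches)
    return (branches * p_branch_dirty * width) / (branches * width)

# Whole 1024 x 1024 tree: 1024 * 1024 * 4 bytes = 4 MB,
# roughly 34 ms of transfer time on a 1 Gbps (125 MB/s) link.
whole_tree_mb = 1024 * 1024 * HASH_BYTES / 2**20
transfer_ms = 1024 * 1024 * HASH_BYTES / 125e6 * 1000

square = expected_exchange_fraction(1024, 1024, 1000)  # ~0.62 of the tree
skewed = expected_exchange_fraction(4096, 256, 1000)   # ~0.22 of the tree
```

So with the square tree 1000 deltas pull in well over half the leaves, while the 4096 x 256 shape pulls in roughly a quarter, matching the motivation above.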
This might change back again. It is easier to have the KeyStore and TreeCaches manage the rebuild process more directly. The only issue is whether this would "lock up" the vnode, should the vnode ever wait on a response - for example for a tree root (when the TreeCache is finishing the load).
The IMM iterator should not be reused, as it has already been filtered for a query. If reused for a different query, incorrect and unexpected results may occur.
This reuse had been stopped by a previous commit, and this commit cleans up the subsequently unused code.
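The hazard is the same as reusing any already-consumed, already-filtered iterator; a Python generator shows the shape of the bug (purely illustrative, not leveled code):

```python
keys = [1, 2, 3, 4, 5]

# Iterator built (and filtered) for query 1: keys greater than 3.
query1 = (k for k in keys if k > 3)
assert list(query1) == [4, 5]

# Reusing the same iterator for a second query silently returns
# nothing: it has already been consumed, and its filter was chosen
# for the first query anyway.
assert list(query1) == []
```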