Previously TicTac trees were of equal width at both levels. This seemed a reasonably efficient way of scaling up key sizes (e.g. at each increment both level widths doubled, giving a 4x increase in size).
However, if we're to do comparisons with fetch_root followed by fetch_branches, this might not be as efficient. With a 1024 x 1024 tree, 1000 deltas require the whole tree to be exchanged, whereas with a 4096 x 256 tree, 1000 deltas only require about a quarter of the tree to be sent over the network.
The whole tree, with a 4-byte hash size, is 4MB in this case - requiring around 30ms of network time on a Gbps link. There may be longer blocking of inter-node communication on LFNs (as per the tests in Meiklejohn's Partisan paper - https://arxiv.org/abs/1802.02652).
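As a rough back-of-envelope check (a sketch only, assuming deltas hash uniformly across branches):

    %% Sketch: expected fraction of branches touched by D uniform deltas
    %% across W branches is 1 - (1 - 1/W)^D.
    Touched = fun(W, D) -> 1 - math:pow(1 - 1/W, D) end.
    %% Touched(1024, 1000) -> ~0.62, so most of a 1024 x 1024 tree
    %% Touched(4096, 1000) -> ~0.22, about a quarter of a 4096 x 256 tree
    %% Either shape has 1024 * 1024 leaves x 4 bytes = 4MB in total,
    %% and 4MB (~32Mbit) is ~30ms on a Gbps link.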
This change makes TicTac tree tests run slower. But perhaps this is a price worth paying for smoothing out the potential impact on inter-node communication?
This might change back again. It is easier to make the KeyStore and TreeCaches manage the rebuild process more directly. The only issue is whether this would "lock up" the vnode, should the vnode ever wait on a response - for example for a tree root (when the TreeCache is finishing the load).
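To illustrate the concern (a hypothetical sketch - the function and message names are invented, not code from this repo): a synchronous call from the vnode could block until timeout while the TreeCache finishes a load, whereas a cast-and-reply pattern keeps the vnode responsive.

    %% Hypothetical sketch: request the root asynchronously so a busy
    %% TreeCache cannot block the vnode.
    request_root(TreeCache) ->
        gen_server:cast(TreeCache, {fetch_root, self()}).
    %% The vnode then handles a {tree_root, Root} message when it arrives,
    %% rather than waiting in a gen_server:call/3 that may time out.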
The IMM iterator should not be reused, as it has already been filtered for a query - so if reused for a different query, incorrect and unexpected results may occur.
This reuse had been stopped by a previous commit, and this commit cleans up the subsequently unused code.
Originally the ability to look up individual values when running in head_only mode had been disabled. This is a saving of about 11% at PUT time (about 3 microseconds per PUT) on a MacBook.
Not sure this saving is sufficient to justify the extra work if this is used as an AAE KeyStore with Bitcask and LWW (when we need to look up the current value before adjusting).
So this reverts that, re-adding support for HEAD requests with these keys.
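A sketch of the resulting usage (the option and function names are assumptions about the leveled_bookie API at the time of this commit, for illustration only):

    %% Sketch: start the bookie in head_only mode with lookups enabled,
    %% then HEAD an individual key.
    {ok, Bookie} = leveled_bookie:book_start([{root_path, "/tmp/leveled"},
                                              {head_only, with_lookup}]),
    {ok, Head} = leveled_bookie:book_headonly(Bookie, <<"B">>, <<"K1">>, null).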
Initial commit to add head_only mode to leveled. This allows leveled to receive batches of object changes, but where those objects exist only in the Penciller's Ledger (once they have been persisted within the Ledger).
The aim is to significantly reduce the cost of compaction. Also, the objects are not directly accessible (they can only be accessed through folds). Again this makes life easier during merging in the LSM trees (as no bloom filters have to be created).
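A sketch of the batch-change interface this implies (the ObjectSpec shape and book_mput/2 usage are assumed here for illustration, not lifted from this repo):

    %% Sketch: in head_only mode changes arrive as batches of object
    %% specifications, not as individual PUTs of whole objects.
    ObjectSpecs = [{add, <<"B">>, <<"K1">>, null, <<"V1">>},
                   {add, <<"B">>, <<"K2">>, null, <<"V2">>}],
    ok = leveled_bookie:book_mput(Bookie, ObjectSpecs).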
Previously this was done at Slot level - but Blocks were still read from disk after the Slot CRC had been checked.
Checking at Block level seems safer. It requires an extra CRC check for every fetch. However, CRC checking smaller binaries during the build process appears to be beneficial to performance.
Hopefully this will be an enabler for turning off compression at Levels 0 and 1 to improve performance (without a compensating issue of reduced CRC performance).
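For illustration, a minimal sketch of a per-Block CRC wrap and check (not the actual leveled implementation):

    %% Sketch only: prefix each Block with its CRC32, and verify on fetch.
    wrap_block(Bin) ->
        <<(erlang:crc32(Bin)):32/integer, Bin/binary>>.

    read_block(<<CRC:32/integer, Bin/binary>>) ->
        case erlang:crc32(Bin) of
            CRC -> {ok, Bin};
            _ -> corrupted   %% fail the fetch rather than return bad data
        end.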