Compression can be switched between LZ4 and zlib (native).
The setting that determines whether compression should happen on receipt is now a macro definition in leveled_codec.
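A minimal sketch of what those settings might look like - the macro names and the lz4 pack/1 call are assumptions for illustration, not the actual definitions:

```erlang
%% Sketch only - macro names are assumptions, not the real defines.
-define(COMPRESSION_METHOD, lz4).    % lz4 | native (zlib)
-define(COMPRESS_ON_RECEIPT, true).  % compress when the object is received

compress(Bin, lz4) ->
    %% assumes an lz4 NIF binding exposing pack/1
    {ok, Compressed} = lz4:pack(Bin),
    Compressed;
compress(Bin, native) ->
    zlib:compress(Bin).
```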
Use 4 keys in the bloom (which is closer to the optimal size). This should halve the false positive rate - as we can now use the larger ExtraHash rather than being constrained by the SegmentHash here.
More entropy by using the position index with the segment hash - so this should be a better filter to apply.
Could also increase the key count now, as the extra hash can be larger.
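A rough sketch of the idea - the slot widths and names here are assumed rather than taken from the actual bloom code:

```erlang
%% Sketch only - derive the 4 bloom positions from the wider ExtraHash
%% (assumed here to be 32 bits) rather than from the smaller SegmentHash.
bloom_positions(ExtraHash) when is_integer(ExtraHash) ->
    <<P1:8, P2:8, P3:8, P4:8>> = <<ExtraHash:32/integer>>,
    [P1, P2, P3, P4].
```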
As an aside - a leveled_iclerk unit test failure appeared - the range was just wrong. Don't know why this started happening.
Discovered a bug with search ranges in leveled_tree - this was uncovered by an intermittently failing 19.3 test.
Test case added and bug fixed. It was due to a failure to use the end_key passed in, which caused issues with particular manifests and full bucket ranges.
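As an illustration of the class of bug (the real leveled_tree fix may look quite different), a range lookup that ignores the passed end_key returns too wide a set of entries for full-bucket queries against some manifests:

```erlang
%% Sketch only - a correct range match must respect both the StartKey and
%% the EndKey that were passed in, not just the StartKey.
match_range(StartKey, EndKey, Entries) ->
    [E || {FirstKey, LastKey, _Ref} = E <- Entries,
          LastKey >= StartKey,   % entry ends at or after the range start
          FirstKey =< EndKey].   % and starts at or before the passed end_key
```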
Switch from magic hash to md5 - to hopefully remove the need for some of the artificial jumps required to get expected false positive ratios.
Also split the hash into two 16-bit integers. We assume that the SegmentID (from the perspective of AAE merkle/tictac trees) will always be at least 16 bits. The idea is that hashes should be used in blooms and indexes such that some advantage can be gained from just knowing the SegmentID - in particular when folding over all the keys in a bucket.
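A sketch of the split - the function name and exact bit layout are assumptions for illustration:

```erlang
%% Sketch only - take md5 of the key and expose the leading 16 bits as a
%% SegmentID (usable by AAE merkle/tictac trees), keeping a further 16 bits
%% as an extra hash for blooms and indexes.
segment_hash(Key) when is_binary(Key) ->
    <<SegmentID:16/integer, ExtraHash:16/integer, _Rest/binary>> =
        crypto:hash(md5, Key),
    {SegmentID, ExtraHash}.
```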
Performance testing has been difficult so far - I think due to “cloud”
mysteries.
Introduce a dedicated module for all the different fold types. Also simplify the list of folds by deprecating those folds that should be achievable by fold_heads/fold_objects type folds but with smarter functions.
Make sure that the fold functions also have better spec coverage, and are dialyzer checked.
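For example (names assumed, not the actual API), a bespoke key-listing fold becomes unnecessary if a fold_heads query can be given a fun that simply discards the head metadata:

```erlang
%% Sketch only - a "smarter" fold fun that turns a fold_heads query into a
%% key list by ignoring the proxy/head value.
key_list_foldfun() ->
    fun(Bucket, Key, _HeadValue, Acc) ->
        [{Bucket, Key} | Acc]
    end.
```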
When new trees are initialised they are started with 1-byte binaries at Level2 - and become full-size following a merge or add event.
The idea is that when trees are distributed before they are added to, or when over-sized trees are used, the output may be smaller on the network.
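A sketch of the trick - the slot size and tuple representation are assumed for illustration:

```erlang
%% Sketch only - start every Level2 slot as a 1-byte binary and swap in a
%% full-size zeroed binary the first time the slot is actually updated.
-define(EMPTY_SLOT, <<0:8>>).
-define(FULL_SLOT_SIZE, 1024).  % bytes per Level2 slot (assumed)

new_level2(SlotCount) ->
    list_to_tuple(lists:duplicate(SlotCount, ?EMPTY_SLOT)).

update_slot(SlotIdx, UpdateFun, Level2) ->
    Slot0 =
        case element(SlotIdx, Level2) of
            ?EMPTY_SLOT -> <<0:(?FULL_SLOT_SIZE * 8)>>;  % expand on first write
            FullSlot -> FullSlot
        end,
    setelement(SlotIdx, Level2, UpdateFun(Slot0)).
```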
As described in https://github.com/martinsumner/leveled/issues/92.
Only the first fix was made.
Just to be safe - archiving means renaming to another file with a different extension. The assumption is that renamed files can be manually reaped if necessary.
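A minimal sketch of the archiving step - the replacement extension is an assumption:

```erlang
%% Sketch only - archive by renaming with a different extension rather than
%% deleting, so the file can still be reaped manually later if required.
archive_file(Filename) ->
    ArchiveName = filename:rootname(Filename) ++ ".bak",  % extension assumed
    ok = file:rename(Filename, ArchiveName),
    {ok, ArchiveName}.
```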
Because there's no sensible way of using it if objects are mutable - you still end up with the same false positives in the tictactree.
Didn't fully roll back the change, as spec and docs were added which could be useful going forward.
This is an interim stage towards enhancing the proxy object so that it contains more helper information (other than size).
The aim is to be able to run more efficient fold_heads queries that might filter on an LMD range (so as not to have to co-ordinate the running of comparative queries). For example, if producing a tictactree to compare between two different offsets, a max LMD could be passed in so that changes beyond the time the first query was requested can be ignored.
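A sketch of how such a filter might be applied - it assumes the enhanced proxy object will expose a last-modified date via some accessor, passed in here as LMDFun since no such accessor exists yet:

```erlang
%% Sketch only - wrap an inner fold fun so that heads modified after MaxLMD
%% are skipped, letting two comparative queries run without co-ordination.
lmd_filtered_foldfun(MaxLMD, LMDFun, InnerFoldFun) ->
    fun(Bucket, Key, HeadValue, Acc) ->
        case LMDFun(HeadValue) of
            LMD when LMD =< MaxLMD ->
                InnerFoldFun(Bucket, Key, HeadValue, Acc);
            _TooRecent ->
                Acc
        end
    end.
```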
Idea being that sometimes you may wish to compare a tictac tree between leveled and something that doesn't understand erlang:phash or term_to_binary. So allow the magic_hash to be used instead - and perhaps an extract function that does base64 encoding or something similar.
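A sketch of the kind of pairing that might be exposed - crc32 stands in for the magic_hash here, and the extract format is an assumption:

```erlang
%% Sketch only - encode the bucket/key as base64 text and hash that, so a
%% non-Erlang system can reproduce the same tictac segments without
%% erlang:phash or term_to_binary.
portable_extract(Bucket, Key) when is_binary(Bucket), is_binary(Key) ->
    base64:encode(<<Bucket/binary, 0:8, Key/binary>>).

portable_hash(Bucket, Key) ->
    erlang:crc32(portable_extract(Bucket, Key)).
```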