Not sure if this scenario is unlikely or impossible, but filtering it out seems harmless.
Abstracting out the function makes testing of scoring scenarios a bit easier.
Catch up on setting specs for external functions in the iclerk.
If the waste retention period is undefined, then it should be ignored, and no waste retained (rather than retaining waste for 24 hours, as at present).
This wasn't working anyway, as the reopened reader didn't get the cdb options (which didn't have the waste path set anyway), so waste would not be retained if the file had been opened after a stop/start.
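A minimal sketch of the intended behaviour; the function and parameter names here are illustrative, not the actual leveled_iclerk/cdb API:

    maybe_retain_waste(FileName, _WastePath, undefined) ->
        %% No retention period configured: delete the file immediately,
        %% rather than defaulting to 24 hours of retention
        ok = file:delete(FileName);
    maybe_retain_waste(FileName, WastePath, RetentionSecs)
            when is_integer(RetentionSecs), RetentionSecs > 0 ->
        %% Retention configured: move the file under the waste path,
        %% to be reaped once the period has expired
        Target = filename:join(WastePath, filename:basename(FileName)),
        ok = file:rename(FileName, Target).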
Compression can be switched between LZ4 and zlib (native).
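For illustration, switching method when starting the bookie might look like the following (the option name and shape are assumed; check the leveled_bookie open options):

    %% Assumed option shape for selecting lz4 over native zlib
    {ok, Bookie} = leveled_bookie:book_start([{root_path, "/tmp/leveled"},
                                              {compression_method, lz4}]).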
The setting to determine if compression should happen on receipt is now a macro definition in leveled_codec.
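An assumed form of that macro in leveled_codec (illustrative, not the exact definition):

    %% Compile-time switch for whether objects are compressed when
    %% first received, or only at journal compaction
    -define(COMPRESS_ON_RECEIPT, true).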
Provide a function for generating segmentfilter lists, so that trees that are "too small" can be handled (see the sketch below).
Test those smaller trees, as well as false positives and cold caches.
Note that accelerating segment_list queries will not work for tree sizes smaller than small. How to flag this up?
Should smaller tree sizes just be removed from leveled_tictac?
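A sketch of the shape of such a function. The bit arithmetic is an assumption (small trees taken as 14-bit segment IDs against a 16-bit SST segment space), not the exact leveled_tictac code; the key point is returning false for trees too small to accelerate:

    generate_segmentfilter_list(_SegmentList, Size)
            when Size == xxsmall; Size == xsmall ->
        %% Tree too small to accelerate: signal a fall back to a
        %% full fold
        false;
    generate_segmentfilter_list(SegmentList, small) ->
        %% Each 14-bit tree segment expands to four candidate
        %% 16-bit SST segments
        B14 = 1 bsl 14,
        B15 = 1 bsl 15,
        lists:flatmap(fun(S) -> [S, S + B14, S + B15, S + B14 + B15] end,
                      SegmentList);
    generate_segmentfilter_list(SegmentList, _Size) ->
        %% Medium and larger trees map directly
        SegmentList.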
Initially with basic tests. If the SlotIndex has been cached, we can now use the slot index, as it is based on the Segment hash algorithm.
This looks like it should lead to an order of magnitude improvement in querying for keys/clocks by segment ID.
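Hypothetically, the effect is that a segment query need only read the slots whose cached index matches; the data shapes below are assumptions for illustration, not the real SST slot-index format:

    filter_slots_by_segment(SlotIndex, SegmentList) ->
        SegSet = ordsets:from_list(SegmentList),
        %% Keep only slots whose cached segment set intersects the
        %% query's segment list, skipping the rest without a disk read
        [Slot || {Slot, SlotSegs} <- SlotIndex,
                 ordsets:intersection(SegSet,
                                      ordsets:from_list(SlotSegs)) =/= []].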
This also required a slight tweak to the penciller keyfolder. It now caches the next answer from the SSTiter, rather than restarting the iterator. When the IMMiter has many more entries than the SSTiter (as the SSTiter is being filtered but not the IMMiter), this could otherwise lead to lots of repeated folding.
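A simplified illustration of the tweak, modelling the SSTiter as a lazy stream whose head is cached between comparisons; all names are illustrative, not the real penciller keyfolder:

    %% SSTNext is none | {Key, fun(() -> SSTNext)}: the cached head
    %% plus a thunk producing the rest of the iteration
    merge_fold(IMMKeys, none, Acc, Fun) ->
        %% SST side exhausted: fold the remaining in-memory keys
        lists:foldl(Fun, Acc, IMMKeys);
    merge_fold([], SSTNext, Acc, Fun) ->
        fold_rest(SSTNext, Acc, Fun);
    merge_fold([IMMKey | IMMRest] = IMMKeys, {SSTKey, NextFun}, Acc, Fun) ->
        case IMMKey =< SSTKey of
            true ->
                %% The cached SST head is reused untouched, so many
                %% in-memory entries can be folded in without ever
                %% restarting the SST iterator
                merge_fold(IMMRest, {SSTKey, NextFun}, Fun(IMMKey, Acc), Fun);
            false ->
                %% Consume the cached head, then pull and cache the next
                merge_fold(IMMKeys, NextFun(), Fun(SSTKey, Acc), Fun)
        end.

    fold_rest(none, Acc, _Fun) -> Acc;
    fold_rest({Key, NextFun}, Acc, Fun) ->
        fold_rest(NextFun(), Fun(Key, Acc), Fun).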
When leveled is used with Riak, buckets and keys are always binaries, so we can treat them as such.
Want to move tictac tree testing away from the leveled internal tests to a set of tests for the Riak scenario, so riak_SUITE has been created for this and other Riak-specific backend tests.
Use 4 keys in the bloom (which is closer to the optimal size). This should halve the false positive rate, as we can now use the large ExtraHash rather than being constrained by the SegmentHash here.
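For illustration, four bloom positions can be derived from a single 32-bit ExtraHash with the standard double-hashing trick; the hash width and slicing are assumptions, not the leveled bloom implementation:

    bloom_positions(ExtraHash, BloomSizeBits) ->
        %% Split the 32-bit hash into two 16-bit halves and combine
        %% them (Kirsch-Mitzenmacher double hashing) to derive k=4
        %% bit positions
        <<H1:16, H2:16>> = <<ExtraHash:32>>,
        [(H1 + I * H2) rem BloomSizeBits || I <- lists:seq(0, 3)].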