Allow tictac tree sizes to be flexible.
Tested many different sizes. Having level 1 and level 2 the same
size seemed to be consistently quicker than making either level
relatively wider.
There's an 8% performance improvement if the SegmentCount is reduced by
a quarter.
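
As a rough illustration of the sizing rule (module and function names
here are hypothetical, not the real leveled API): keeping the two
levels the same width means each level's width is the square root of
the target segment count.

    %% Hypothetical sketch: derive equal widths for level 1 and level 2
    %% of a tictac tree from a target segment count.
    -module(tictac_sizing).
    -export([level_widths/1]).

    %% Width * Width covers SegmentCount; the same width at both levels
    %% tested consistently quicker than widening either level.
    -spec level_widths(pos_integer()) -> {pos_integer(), pos_integer()}.
    level_widths(SegmentCount) ->
        Width = round(math:sqrt(SegmentCount)),
        {Width, Width}.

So level_widths(65536) gives {256, 256}.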
Naming is now confusing now that we have TicTac Trees. This query
builds a list of keys and hashes, not a tree - so the name was
misleading anyway. Now renamed hashlist_query.
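
For clarity, the result is a flat list of key/hash pairs rather than
any tree structure - something like the following sketch (the exact
tuple layout is illustrative):

    %% Illustrative shape only: a hashlist query accumulates a flat
    %% list of {Bucket, Key, Hash} tuples, not a tictac tree.
    example_hashlist() ->
        [{<<"bucket">>, <<"key1">>, erlang:phash2(<<"value1">>)},
         {<<"bucket">>, <<"key2">>, erlang:phash2(<<"value2">>)}].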
Need to allow specific settings to be passed into unit tests.
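
A sketch of what this might look like in a test helper, assuming the
store is started with an options list (the option names here are
illustrative, not necessarily the real startup options):

    %% Hypothetical test helper: merge caller-supplied options over the
    %% defaults instead of hard-coding store settings in each test.
    start_test_store(RootPath, ExtraOpts) ->
        Defaults = [{max_journalsize, 10000000},
                    {root_path, RootPath}],
        Opts = lists:ukeymerge(1,
                               lists:ukeysort(1, ExtraOpts),
                               lists:ukeysort(1, Defaults)),
        leveled_bookie:book_start(Opts).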
Also, too much journal compaction may lead to intermittent failures on
the basic_SUITE space_clear_on_delete test. I think this is because
there are fewer "deletes" to reload on startup to trigger the cascade
down and clear up?
The new code requires bucket listing to be on binary keys, not just
binary buckets. As this is only intended for use within Riak (where
all keys and buckets are binaries), this constraint seems OK.
A test needed changing to ensure it had a binary key in the bucket.
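
The constraint amounts to something like the following guard in the
bucket-listing fold (an illustrative sketch, not the actual
implementation):

    %% Sketch: a bucket-listing fold fun that assumes binary buckets
    %% and binary keys, which always holds when running under Riak.
    bucket_fold_fun() ->
        fun(Bucket, Key, Acc) when is_binary(Bucket), is_binary(Key) ->
                ordsets:add_element(Bucket, Acc)
        end.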
This at least checks that the file is present, and that the Key exists
in the index of that file. If the value is corrupt it will be removed
by compaction, and then this will fail (unless the file is never
compacted).
TODO: resolve the issue of files which are corrupt but never compacted
- a job for backup?
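
In outline the check is something like this (the per-file index lookup
is abstracted as a fun here, as the real API isn't shown):

    %% Sketch of the limited check: the file must exist and the Key
    %% must be in that file's index; the value itself is never read,
    %% so a corrupt value is only caught later, by compaction.
    key_check(FilePath, Key, IndexLookupFun) ->
        filelib:is_file(FilePath) andalso IndexLookupFun(Key).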
Fold objects which snaps within the fold was implemented incorrectly -
it took information from the LedgerCache at the point of the request,
not at the point of the fold. So the LedgerCache SQN may have been
surpassed in the Penciller by the time the fold was called.
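
The corrected pattern is to defer the snapshot into the returned
closure, so it is taken when the fold runs rather than when the fold
is requested (helper names below are hypothetical):

    %% Sketch: return a closure that snapshots at fold time, so the
    %% snapshot cannot be stale against the Penciller.
    make_object_folder(State, FoldFun, Acc0) ->
        fun() ->
            Snapshot = take_snapshot(State),   %% hypothetical helper
            run_fold(Snapshot, FoldFun, Acc0)  %% hypothetical helper
        end.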
When rolling we already know the last_key - no need to seek for it on
startup.
The time this seek takes needs to be considered with regard to
startup time. Can we do without knowing last_key?
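
i.e. carry the known last key forward when the file is rolled, rather
than seeking for it when the file is next opened - roughly (a sketch
with hypothetical state shape):

    %% Sketch: record last_key at roll time, so reopening the file
    %% never needs a seek to rediscover it.
    roll(FileState, LastKey) when is_map(FileState) ->
        FileState#{last_key => LastKey, mode => reader}.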
This allows deleted journals to be retained for a period (the
waste_retention_period). The idea is that a backup strategy can
ensure that all journals are backed up, even ones created and removed
within a backup period - so that any restore point is possible.
This is also a precursor to removing some of the PromptDelete
complexity from the Inker Clerk - all compactions can prompt deletion,
as deletion is now deferred.
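
The deferred-delete behaviour, sketched (directory layout and names
are illustrative): "deleting" a journal moves it into a waste folder,
and a reaper removes only files older than the retention period.

    %% Sketch: rename into a waste folder rather than deleting, then
    %% reap anything past the retention period.
    -module(waste_sketch).
    -include_lib("kernel/include/file.hrl").
    -export([defer_delete/2, reap_waste/2]).

    defer_delete(JournalPath, WasteDir) ->
        Target = filename:join(WasteDir, filename:basename(JournalPath)),
        file:rename(JournalPath, Target).

    reap_waste(WasteDir, RetentionSeconds) ->
        Now = erlang:system_time(second),
        {ok, Files} = file:list_dir(WasteDir),
        lists:foreach(
            fun(File) ->
                Path = filename:join(WasteDir, File),
                {ok, #file_info{mtime = MTime}} =
                    file:read_file_info(Path, [{time, posix}]),
                case Now - MTime > RetentionSeconds of
                    true -> ok = file:delete(Path);
                    false -> ok
                end
            end,
            Files).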
Leveled will now signal the need for a pause due to back-pressure, but
not actually pause itself. The hope is that in a Riak implementation
this pause can be managed by the put_fsm, and so not lock the store.
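
The intent, sketched (do_put/3 and the return shapes below are
illustrative, not the real API): the store returns a pause signal with
the put result instead of sleeping, leaving the caller - e.g. Riak's
put_fsm - to decide how to apply the back-off.

    %% Sketch: surface back-pressure to the caller rather than blocking.
    put_with_signal(Store, Key, Value) ->
        case do_put(Store, Key, Value) of
            {ok, normal} ->
                ok;
            {ok, slow_offer} ->
                %% Caller (e.g. a put_fsm) decides how to pause;
                %% the store itself does not sleep here.
                pause
        end.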