Going to abandon this branch for now. The change is becoming
excessively time consuming, and it may be that a smaller change would
achieve more of the objectives.
All this is broken - but perhaps could be picked up another day.
This is desirable to add back in going forward, but wasn't implemented
in a safe or clear way.
The way the bloom was (or was not) held on the LoopState was clumsy,
and it was persisted in multiple places without a CRC check.
The intention is to add it back in such that it is requested on-demand
by the Penciller, and the SFT worker then lifts it off disk and CRC
checks it.
So it is never on the SFT LoopState. Also it will be easier to control
the logic over which levels have the bloom in the Penciller.
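A rough sketch of that intended flow - assuming a hypothetical layout
where the bloom is stored at a known position in the SFT file with a
leading CRC32, and a file handle opened in binary mode; the function
name is illustrative only:

    fetch_bloom(Handle, BloomPos, BloomLen) ->
        %% Lift the bloom off disk only when the Penciller requests it
        {ok, <<CRC:32/integer, BloomBin/binary>>} =
            file:pread(Handle, BloomPos, BloomLen),
        %% Refuse to return a bloom which fails its CRC check
        case erlang:crc32(BloomBin) of
            CRC -> {ok, BloomBin};
            _ -> {error, crc_wonky}
        end.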
Move to using the DJ Bernstein Magic Hash consistently, and try to
make sure we only hash once for each operation (as the hash is more
expensive than phash2).
The improved lookup time for missing keys should allow for the L0 index
to be removed, and hence speed up the completion time for push_mem
operations.
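For reference, a minimal sketch of the hash in question - assuming the
standard CDB variant of the DJ Bernstein hash (start at 5381,
shift-and-add, then xor each byte), masked to 32 bits; function names
are illustrative only:

    magic_hash(Key) when is_binary(Key) ->
        hash1(5381, Key).

    hash1(H, <<>>) ->
        H;
    hash1(H, <<B:8/integer, Rest/binary>>) ->
        H1 = ((H bsl 5) + H) band 16#FFFFFFFF,
        hash1(H1 bxor B, Rest).

Hashing once and passing the result around, rather than re-hashing at
each level, is what keeps the extra cost over phash2 acceptable.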
It is expected there will be a second stage of creating a tinybloom as
part of the SFT creation process, and then adding that tinybloom to the
manifest. This will then reduce the message passing required for a GET
not in the cache or higher levels.
Plugged the new penciller memory into the Penciller, and took
advantage of the increased speed to simplify the callbacks involved.
The outcome is much simpler code.
Removed on the order of 100 lines of code by refactoring the Penciller
to no longer use ETS tables. The code is less confusing, and probably
not an awful lot slower.
The CDB file management server has distinct states, and was growing
case logic to prevent certain messages from being handled in certain
states, and to handle different messages differently. So this has now
been converted to a gen_fsm.
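A minimal sketch of the shape of this - the state names and API here
are illustrative, not the actual module's. Each distinct file state
gets its own state function, so a message which is invalid in a given
state simply has no clause there and fails fast, rather than needing
case logic to reject it:

    -module(cdb_fsm_sketch).
    -behaviour(gen_fsm).
    -export([start_link/0, put/3, get/2, roll/1]).
    -export([init/1, writer/3, reader/3, handle_event/3,
             handle_sync_event/4, handle_info/3, terminate/3,
             code_change/4]).

    start_link() -> gen_fsm:start_link(?MODULE, [], []).
    put(Pid, K, V) -> gen_fsm:sync_send_event(Pid, {put, K, V}).
    get(Pid, K) -> gen_fsm:sync_send_event(Pid, {get, K}).
    roll(Pid) -> gen_fsm:sync_send_event(Pid, roll).

    init([]) -> {ok, writer, #{store => #{}}}.

    %% Only the writer state has a clause for put - a put sent in the
    %% reader state has no matching clause
    writer({put, K, V}, _From, S = #{store := Store}) ->
        {reply, ok, writer, S#{store := Store#{K => V}}};
    writer({get, K}, _From, S = #{store := Store}) ->
        {reply, maps:get(K, Store, not_found), writer, S};
    writer(roll, _From, S) ->
        {reply, ok, reader, S}.

    reader({get, K}, _From, S = #{store := Store}) ->
        {reply, maps:get(K, Store, not_found), reader, S}.

    handle_event(_E, Name, S) -> {next_state, Name, S}.
    handle_sync_event(_E, _From, Name, S) -> {reply, ok, Name, S}.
    handle_info(_I, Name, S) -> {next_state, Name, S}.
    terminate(_R, _Name, _S) -> ok.
    code_change(_V, Name, S, _X) -> {ok, Name, S}.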
As part of resolving this, the space_clear_ondelete test has been
completed, and completing this revealed that the Penciller could not
cope with a change which emptied the ledger. So a series of changes
has been made to allow it to progress smoothly to an empty manifest.
There was a test that failed to close down a bookie, and that caused
some issues. The issues are resolved in two ways: the close down has
been tidied, and the forgotten close has been added back in.
There is some general tidying in anticipation of TTL support.
The test confirming that sft files pending delete were held open
whilst snapshots were registered was actually broken. This test has
now been fixed, as well as the logic in registering snapshots which
had mistakenly used ledger_sqn rather than manifest_sqn.
The file check is now covered by a measure in the sft_new path, which
will back up any existing file before moving.
This gets triggered by incomplete changes on shutdown.
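A minimal sketch of that measure, with illustrative names:

    backup_if_present(Filename) ->
        case filelib:is_file(Filename) of
            true ->
                %% An incomplete file left by a crash on shutdown is
                %% backed up rather than silently overwritten
                ok = file:rename(Filename, Filename ++ ".bak");
            false ->
                ok
        end.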
The Penciller had two problems in previous commits:
- If it had a push_mem soon after a L0 file had been created, the
push_mem would stall waiting for the L0 file to complete - and this
could take 100-200ms
- The penciller's clerk favoured L0 work, but was lazy about asking for
other work in-between, so often the L1 layer was bursting over capacity
and the clerk was doing nothing but merging more L0 files in (with those
merges getting more and more expensive as they had to cover more and
more files)
There are some partial resolutions to this. There is now an aggressive
timeout when checking whether the L0 file is ready on a push_mem, and
if the timeout is breached the error is caught and a 'returned'
message goes back to the Bookie. The Bookie doesn't now empty its
cache, it carries on filling it, but with some probability it will
keep trying to push_mem on future pushes. This increases jitter around
the expensive operation and splits out the L0 delay into defined
chunks.
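A minimal sketch of the timeout-and-return pattern described above,
with illustrative names:

    push_mem_with_timeout(Penciller, Cache) ->
        try
            %% Aggressive timeout on the check of L0 readiness
            gen_server:call(Penciller, {push_mem, Cache}, 100)
        catch
            exit:{timeout, _} ->
                %% L0 not ready - the Bookie keeps its cache and will
                %% try again on some future push
                returned
        end.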
The penciller's clerk is now more aggressive in asking for work. There
is also some simplification of the relationship between clerk timeouts
and penciller back-pressure.
Also resolved is an issue of inconsistency between the loader used on
startup (replaying the transaction log) and the standard push_mem
process. The loader was not correctly de-duplicating by adding first
(in order) to a tree before outputting the list from the tree.
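The de-duplication step in question, sketched minimally (using
gb_trees here for illustration): fold the replayed entries in order
into a tree, so that a later entry for the same key replaces an
earlier one, then output the sorted de-duplicated list:

    dedupe(KVList) ->
        Tree = lists:foldl(fun({K, V}, Acc) -> gb_trees:enter(K, V, Acc) end,
                           gb_trees:empty(),
                           KVList),
        gb_trees:to_list(Tree).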
Some thought will be given later as to whether non-L0 work can be safely
prioritised if the merge process still keeps getting behind.
Added basic support for 2i query. This involved some refactoring of the
test code to share functions between suites.
There is still a need for a Part 2, as no tests currently cover
removal of index entries.
This test exposed two bugs:
- Yet another set of off-by-one errors (really stupidly scanning the
Manifest from Level 1 not Level 0)
- The return of an old issue related to scanning the journal on load
whereby we fail to go back to the previous file before the current SQN
Add iterator support, used initially only for retrieving bucket
statistics.
The iterator is supported by exporting a function; when the function
is called it will take a snapshot of the ledger, run the iterator and
then close the snapshot.
This required a number of underlying changes, in particular to get key
comparison to work as "expected". The code had previously
misunderstood how comparison worked between Erlang terms, and in
particular did not account for tuples being compared first by size
(and not just by each element in order).
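For example, in the shell:

    1> {o, "Bucket", "Key"} < {o, "Bucket", "Key", null}.
    true
    2> {b, "Bucket"} < {a, "Bucket", "Key"}.
    true

The second comparison is true purely because the first tuple is
smaller, even though 'b' sorts after 'a' - element-by-element
comparison only applies between tuples of equal size.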
Reviewing code to update comments revealed a weakness in the sequence
of events between the penciller and clerk when committing a manifest
change, whereby an ill-timed crash could lead to files being deleted
without the manifest changing.
A different, and safer, pattern is now used between these two actors.
An attempt to refactor out more complex code.
The Penciller clerk and Penciller have been re-shaped so that their
relationship is much simpler, and also to make sure that they shut
down much more neatly when the clerk is busy, to avoid crashdumps in
ct tests.
The CDB now has a binary_mode - so that we don't do binary_to_term
twice... although this may have made things slower. Perhaps the
is_binary check now required on read is an overhead, or perhaps it is
some other mystery.
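A minimal sketch of the binary_mode idea, with illustrative names -
the caller chooses whether the value stays a binary:

    maybe_convert(Value, true) ->
        Value;                    %% binary_mode - hand back the raw binary
    maybe_convert(Value, false) ->
        binary_to_term(Value).    %% otherwise convert, exactly once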
There is now a more efficient fetching of the size on pcl_load as
well.
Two aspects of pushing to the penciller have been refactored:
1 - Allow the penciller to respond before the ETS table has been
updated, to unlock the Bookie sooner (see the sketch after this list).
2 - Change the way the copy of the memtable is stored to work more
effectively with snapshots without locking the Penciller any further
on a snapshot or push request.
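A minimal sketch of point 1, with illustrative names (update_memtable
is a hypothetical helper): reply via gen_server:reply/2 before doing
the slower work, then return noreply:

    handle_call({push_mem, DumpList}, From, State = #{memtable := Mem}) ->
        %% Unlock the Bookie immediately...
        gen_server:reply(From, ok),
        %% ...then perform the slower update afterwards
        {noreply, State#{memtable := update_memtable(DumpList, Mem)}}.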
CDB did many "bitty" reads/writes when scanning or writing hash tables
- these have been changed to bulk reads and writes to speed things up.
CDB also has added capabilities to fetch positions and get keys by
position, to help with the iclerk role.
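A minimal sketch of the bulk read, assuming a file handle opened in
binary mode: rather than one pread per slot, gather the position/length
pairs and issue a single file:pread/2 over the whole list, which
returns the binaries in order:

    bulk_read(Handle, PosLenList) ->
        %% PosLenList = [{Position, Length}]
        {ok, Binaries} = file:pread(Handle, PosLenList),
        Binaries.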