During EQC testing it was found that snapshots are still usable even
if the bookie process crashes. This change has snapshots monitor the
bookie and close when the bookie process dies.
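A minimal sketch of the pattern, assuming a gen_server-based snapshot that holds the bookie's pid (names here are illustrative, not the actual leveled code):

    -record(state, {bookie :: pid()}).

    %% On snapshot start, monitor the bookie that spawned it
    init([BookiePid]) ->
        erlang:monitor(process, BookiePid),
        {ok, #state{bookie = BookiePid}}.

    %% If the bookie dies, stop the snapshot rather than leave it usable
    handle_info({'DOWN', _Ref, process, Pid, _Reason},
                State = #state{bookie = Pid}) ->
        {stop, normal, State}.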
Initial commit to add head_only mode to leveled. This allows leveled to receive batches of object changes, but where those objects exist only in the Penciller's Ledger (once they have been persisted within the Ledger).
The aim is to significantly reduce the cost of compaction. Also, the objects are not directly accessible (they can only be accessed through folds). Again this makes life easier during merging in the LSM tree (as no bloom filters have to be created).
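As a rough sketch of how this mode might be driven (the option and function names here are assumptions for illustration, not confirmed API):

    %% Open the store in head_only mode and push a batch of object
    %% specifications; values live only in the Ledger and are read
    %% back via folds rather than direct GETs
    {ok, Bookie} = leveled_bookie:book_start([{head_only, with_lookup},
                                              {root_path, "/tmp/ho"}]),
    ok = leveled_bookie:book_mput(Bookie,
                                  [{add, <<"Bucket">>, <<"Key">>, null, <<"Value">>}]).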
Previously the tinybloom was used within the SST file as an extra check to remove false fetches.
However the SST already has a low FPR check in the slot_index. If the new bloom is used (which is no longer per slot, but per SST), it can be shared with the penciller, which could then use it and avoid the message pass.
The message pass may be blocked by a 2i query or a slot fetch request for a merge. So this should make performance within the Penciller snappier.
This is as a result of taking sst_timings within a volume test - where there was an average of +100 microsecs for each level that was dropped down. Given the bloom/slot checks were < 20 microsecs, there seems to be some further delay.
The bloom is a binary of > 64 bytes - so passing it around should not require a copy.
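A sketch of the intended check, assuming the per-SST bloom is handed to the Penciller when the file is opened (module and function names assumed):

    %% Consult the shared per-SST bloom in the Penciller before the
    %% message pass; a miss avoids queueing behind a 2i query or a
    %% merge slot fetch in the SST process
    check_then_fetch(Key, Hash, SSTPid, Bloom) ->
        case leveled_ebloom:check_hash(Hash, Bloom) of
            false -> not_present;
            true  -> leveled_sst:sst_get(SSTPid, Key)
        end.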
Compression can be switched between LZ4 and zlib (native).
The setting to determine if compression should happen on receipt is now a macro definition in leveled_codec.
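Illustratively, the macro takes a form like this (the exact name in leveled_codec is an assumption):

    %% Compile-time switch: compress values when first received, or
    %% defer compression until compaction/merge
    -define(COMPRESS_ON_RECEIPT, true).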
With basic ct test.
Doesn't currently prove expiry of index. Doesn't prove ability to find
segments.
Assumes that either "all" buckets or a special list of buckets require
indexing this way. Will lead to unexpected results if the same bucket
name is used across different Tags.
The format of the index has been chosen so that hopefully standard index
features can be used (e.g. return_terms).
When running a load of mainly 2i queries, there is a huge cost in the
previous snapshot code. The time taken to create a clone of the
Penciller (duplicating all the LoopState) varied between 1 and 200ms
depending on the size of the LoopState.
For 2i queries, most of that LoopState was then being thrown away after
running the query against the levelzero_cache. This was taking < 1ms on
average. It would be better to avoid the o(100)ms of CPU burning and
block for o(1)ms - so the order of events has been changed to filter
first so only the small part of the LoopState actually required is
copied to the clone.
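A sketch of the filter-first approach, with hypothetical field names:

    %% Build the clone from only the fields a 2i query against the
    %% levelzero_cache needs, instead of duplicating the whole
    %% Penciller LoopState
    filter_loopstate_for_query(LS = #state{}) ->
        #state{levelzero_cache = LS#state.levelzero_cache,
               manifest = LS#state.manifest,
               is_snapshot = true}.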
Under assumption that it generates less GC noise (based on
micro-benchmark in leveled_tree eunit testing).
Note to confirm, needed to swap around the test order, and this showed
fewer collections in each position for skpl - and a < 10% performance hit.
Going to abandon this branch for now. The change is becoming
excessively time consuming, and it is not clear that a smaller change
might not achieve more of the objectives.
All this is broken - but perhaps could get picked up another day.
This is desirable to add back in going forward, but wasn't implemented
in a safe or clear way.
The way the bloom was or was not on the LoopState was clumsy, and it got
persisted in multiple places without a CRC check.
The intention is to implement it back in whereby it is requested on-demand
by the Penciller, and then the SFT worker lifts it off disk and CRC checks it.
So it is never on the SFT LoopState. Also it will be easier to control
the logic over which levels have the bloom in the Penciller.
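Roughly the intended shape (the field names and CRC layout here are assumptions):

    %% The SFT worker serves the bloom on demand, lifting it from
    %% disk and CRC-checking it, so it never sits on the LoopState
    handle_call(fetch_bloom, _From, State) ->
        {ok, <<CRC:32/integer, BloomBin/binary>>} =
            file:pread(State#state.handle,
                       State#state.bloom_pos,
                       State#state.bloom_len),
        Reply = case erlang:crc32(BloomBin) of
                    CRC -> {ok, BloomBin};
                    _ -> {error, crc_wonky}
                end,
        {reply, Reply, State}.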
This allows for deleted journals to be retained for a period (the
waste_retention_period). The idea being that a backup strategy can
ensure that all journals are backed up, even ones created and removed
within a backup period - so that any restore point is possible.
This is also a pre-cursor to removing some of the PromptDelete
complexity from the Inker Clerk - all compactions can prompt deletion as
deletion is now deferred.
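For example, assuming the option is passed at startup in seconds like other leveled options:

    %% Retain waste journals for 24 hours, so a daily backup job can
    %% capture files created and compacted away between backups
    {ok, Bookie} =
        leveled_bookie:book_start([{root_path, "/var/db/leveled"},
                                   {waste_retention_period, 86400}]).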
There were issues with how the Penciller behaves under heavy write
pressure - most particularly where there are a large number of keys per
update (i.e. 2i heavy objects). Most immediately the attempt to check
whether the L0 file was ready slowed down the process of producing the
L0 file - so back-pressure created more back-pressure.
Going forward want to alter this most significantly, as the work
queue can also build up unsustainably. There needs to be some pausing
prompted by the bookie on 'returned', and the use of 'returned' when the
work queue exceeds a threshold.
Changes the startup options to a proplist to make it easier to get
environment variables as defaults.
Tried application:start - and was completely baffled as to how to get
this to work.
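The proplist pattern being aimed for, as a sketch:

    %% Read a startup option: explicit proplist entry first, then the
    %% application environment, then a hard-coded default
    get_opt(Key, Opts, Default) ->
        case proplists:get_value(Key, Opts) of
            undefined -> application:get_env(leveled, Key, Default);
            Value -> Value
        end.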
There was some unpredictable performance in tests that was related to
the amount of time it took the sft gen_server to accept a cast which
passed the levelzero_cache.
The response time looked to be broadly proportional to the size of the
cache - so it appeared to be an issue with passing the large object to
the process queue.
To avoid this, the penciller now instructs the SFT gen_server to
callback to the server for each tree in the cache in turn as it is
building the list from the cache. Each of these requests should be
relatively short, and the processing in-between should space out the
requests so the Penciller is not blocked from answering queries when
prompting a L0 write.
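Roughly, the fetch on the SFT side becomes a loop of small calls (function names assumed):

    %% Pull the cache one tree at a time: each message is small, and
    %% the Penciller can answer queries in the gaps between calls
    fetch_levelzero(PclPid, CacheSize) ->
        lists:map(
            fun(SlotN) ->
                leveled_penciller:pcl_fetchlevelzero(PclPid, SlotN)
            end,
            lists:seq(1, CacheSize)).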
Further work on variable reload strategies with some unit test coverage.
Also work on potentially supporting no_hash on PUT to journal files for
objects which will never be directly fetched.
There was a test that failed to close down a bookie and that caused some
issues. The issues are doubly resolved: the close down was tidied as
well as the forgotten close being added back in.
There is some general tidying around in anticipation of TTL support.
The penciller had the concept of a manifest_lock - but it wasn't clear
what the purpose of it was.
The updating of the manifest has now been reworked to reduce the code and
make the process cleaner and more obvious. Now the committed manifest
only covers non-L0 levels. A clerk can work concurrently on a manifest
change whilst the Penciller is accepting a new L0 file.
On startup the manifest is opened as well as any L0 file. There is a
possible race condition with killing process where there may be a L0
file which is merged but undeleted - and this is believed to be inert.
There is some outstanding work still. Currently the whole store is
paused if a push_mem is received by the Penciller, and the writing of a
L0 sft file has not been completed. The creation of a L0 file appears
to take about 300ms, so if the ledger_cache fills in this period a pause
will occur (perhaps due to objects with lots of index entries). It
would be preferable to pause more elegantly in this situation. Perhaps
there should be a harsh timeout on the call to check the SFT complete,
and catching it should cause a refused response. The next PUT will then
wait, but any queued GETs can progress.
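A sketch of that harsh-timeout idea (illustrative only, not implemented here):

    %% Ask whether the L0 SFT write is complete, but treat a slow
    %% answer as a refusal rather than blocking the whole store
    l0_complete(SFTPid) ->
        try gen_server:call(SFTPid, background_complete, 100)
        catch exit:{timeout, _} -> refused
        end.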
The object tag "o" which was taken from eleveldb has been an extended to
allow for specific functions to be triggered for different object types,
in particular when extracting metadata for stroing in the Ledger.
There is now a riak tag (o_rkv@v1), and in theory other tags can be
added and used, as long as their is an appropriate set of functions in
the leveled_codec.
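The dispatch in leveled_codec then looks something like this sketch (riak_extract_metadata is a hypothetical helper name):

    %% Pick metadata-extraction logic from the object tag, so a new
    %% tag only needs a new clause here
    extract_metadata(Obj, Size, 'o_rkv@v1') ->
        riak_extract_metadata(Obj, Size);
    extract_metadata(Obj, Size, o) ->
        {erlang:phash2(term_to_binary(Obj)), Size}.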
An attempt to refactor out more complex code.
The Penciller clerk and Penciller have been re-shaped so that their
relationship is much simpler, and also to make sure that they shut down
much more neatly when the clerk is busy, to avoid crashdumps in ct tests.
The CDB now has a binary_mode - so that we don't do binary_to_term twice
... although this may have made things slower ??!!? Perhaps the
is_binary check now required on read is an overhead. Perhaps it is some
other mystery.
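For illustration, the read-side switch amounts to something like this (field name assumed):

    %% In binary_mode the stored value is already the binary the
    %% caller wants, so skip the term round-trip on read
    read_value(Bin, #state{binary_mode = true}) -> Bin;
    read_value(Bin, _State) -> binary_to_term(Bin).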
There is now a more efficient fetching of the size on pcl_load as
well.
This exposed another off-by-one error on startup.
This commit also includes an unsafe change to reply early from a rolling
CDB file (with lots of objects, writing the hash table can take too
long). This is bad, but will be resolved through a refactor of the
manifest writing: essentially we deferred writing of the manifest
update which was an unnecessary performance optimisation. If instead we
wait on this, the process is made substantially simpler, and it is safer
to perform the roll of the complete CDB journal asynchronously. If the
manifest update takes too long, an append-only log may be used instead.
Add some initial system tests. This highlighted issues:
- That files deleted by compaction would be left orphaned and not closed,
and would not in fact delete (now deleted by closure only)
- There was an issue on startup where the first few keys in each journal
would not be re-loaded into the ledger
Largely untested work at this stage to allow for the Inker to request
the Inker's clerk to perform a single round of compaction based on the
best run of files it can find.
Some initial work to get snapshots going.
Changes required, as need to snapshot through the Bookie to ensure that
there is no race between extracting the Bookie's in-memory view and the
Penciller's view if a push_to_mem has occurred in between.
A lot is still outstanding, especially around Inker snapshots and
handling timeouts.
Two aspects of pushing to the penciller have been refactored:
1 - Allow the penciller to respond before the ETS table has been updated
to unlock the Bookie sooner.
2 - Change the way the copy of the memtable is stored to work more
effectively with snapshots without locking the Penciller any further on
a snapshot or push request.
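A sketch of the first aspect, using the standard gen_server early-reply pattern (do_push_to_mem is a hypothetical helper):

    %% Reply to the Bookie before the expensive table update, so the
    %% push does not block the caller while the memtable is written
    handle_call({push_mem, DumpList}, From, State) ->
        gen_server:reply(From, ok),
        NewState = do_push_to_mem(DumpList, State),
        {noreply, NewState}.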