Test added for the "retain" recovery strategy. This strategy makes sure
a full history of index changes is made so that if the Ledger is wiped
out, the Ledger can be fully rebuilt from the Journal.
This exposed two journal compaction problems
- The BestRun selected did not have its source files correctly sorted
before compaction
- The compaction process incorrectly dealt with the KeyDelta object
left after a compaction - i.e. compacting the same key twice caused that
key's history to be lost.
These issues have now been corrected.
The unit tests for the Penciller couldn't cope with the returned status,
and so would intermittently fail (after tightening the timeout on sft
check_ready).
The CDB file management server has distinct states, and was growing case
logic to prevent certain messages from being handled in certain states,
and to handle different messages differently. So this has now been
converted to a gen_fsm.
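As a rough illustration of the shape this takes (the state names and
messages below are simplified assumptions, not the actual leveled_cdb
protocol), a writer state can refuse reads and a reader state can refuse
writes without any case logic:

    %% Sketch only: writer/reader states and messages are illustrative.
    -module(cdb_fsm_sketch).
    -behaviour(gen_fsm).
    -export([start_link/0, init/1, writer/3, reader/3]).
    %% (handle_event/3, handle_sync_event/4, handle_info/3,
    %%  terminate/3 and code_change/4 omitted for brevity)

    start_link() ->
        gen_fsm:start_link(?MODULE, [], []).

    init([]) ->
        {ok, writer, #{}}.

    %% In the writer state puts are accepted and gets refused - no case
    %% logic needed, the state function itself is the guard.
    writer({put, K, V}, _From, Data) ->
        {reply, ok, writer, Data#{K => V}};
    writer({get, _K}, _From, Data) ->
        {reply, {error, still_writing}, writer, Data};
    writer(roll, _From, Data) ->
        {reply, ok, reader, Data}.

    %% Once rolled to the reader state the file is immutable.
    reader({get, K}, _From, Data) ->
        {reply, maps:get(K, Data, not_present), reader, Data};
    reader({put, _K, _V}, _From, Data) ->
        {reply, {error, read_only}, reader, Data}.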
As part of resolving this, the space_clear_ondelete test has been
completed, and completing this revealed that the Penciller could not
cope with a change which emptied the ledger. So a series of changes has
been made to allow it to progress smoothly to an empty manifest.
The no_hash option in CDB files became too hard to manage, in particular
the need to scan the whole file to find the last_key rather than cheat
and use the index. It has been removed for now.
The writing to the journal during journal compaction has now been
enhanced by a mput option on the CDB file write - so it can write each
batch as one pwrite operation.
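A minimal sketch of the batching idea (the <<Len:32, Bin>> framing below
is a simplified stand-in, not the real CDB record format):

    %% Sketch: write a whole batch of already-serialised key/value
    %% binaries with a single pwrite call.
    mput_sketch(Handle, BasePos, KVList) ->
        Batch = [<<(byte_size(K)):32/integer, K/binary,
                   (byte_size(V)):32/integer, V/binary>>
                    || {K, V} <- KVList],
        ok = file:pwrite(Handle, BasePos, Batch),
        {ok, BasePos + iolist_size(Batch)}.  % next free position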
Further progress towards the tidying up of basement tombstones in the
Ledger, with support added for key-listing to help with testing (and as
a potentially required feature).
The test is incomplete, but committing at this stage as the last commit
broke some tests (within the test code).
There are some outstanding questions about the handling of tombstones in
the Journal during compaction. There exists a condition whereby values
could return if a recent journal is compacted and tombstones are removed
(as the objects they deleted are no longer present), but older journals
have not been compacted. Then on stop/start, if the Ledger is wiped, the
removal of the keys will be forgotten but the original PUTs will still
remain. The safest thing may be to have a rule that tombstones are never
deleted from the Inker's Journal - and accept the build-up of garbage.
Or there
could be an addition to the compaction process that checks back through
all the inker files to check that the Key of a tombstone is not present
in the past, before it is removed in the compaction.
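The second option might be sketched roughly as below, where
key_present_in/2 is a hypothetical helper that scans a single journal
file for the key:

    %% Hypothetical sketch: only reap a tombstone if the key is absent
    %% from every journal file older than the tombstone itself.
    %% key_present_in/2 is an assumed helper, not an existing function.
    safe_to_reap(Key, TombstoneSQN, JournalManifest) ->
        OlderFiles = [F || {SQN, F} <- JournalManifest,
                           SQN < TombstoneSQN],
        not lists:any(fun(File) -> key_present_in(Key, File) end,
                      OlderFiles).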
There was a test that failed to close down a bookie and that caused some
issues. The issues were doubly resolved: the close down was tidied, and
the forgotten close was added back in.
There is some general tidying in anticipation of TTL support.
Prepare SFT files for handling tombstones correctly (without expiry
dates).
Also some work, as it can be seen from tests that some SFT files are not
being cleared out correctly. A pause has been added before trying to
clear out the files, to experiment with the possibility that there is a
timing issue.
The test confirming that deleted sft files were held open whilst
snapshots were registered was actually broken. This test has now been
fixed, as has the logic in registering snapshots, which had mistakenly
used ledger_sqn rather than manifest_sqn.
Recent fixes have been made to problems associated with rapidly changing
objects, especially on re-opening of the bookie. A test of rotating
objects, from both an index query and a fetch perspective, has been
added to better detect such issues in the future.
The penciller had the concept of a manifest_lock - but it wasn't clear
what the purpose of it was.
The updating of the manifest has been reworked to reduce the code and
make the process cleaner and more obvious. Now the committed manifest
only covers non-L0 levels. A clerk can work concurrently on a manifest
change whilst the Penciller is accepting a new L0 file.
On startup the manifest is opened as well as any L0 file. There is a
possible race condition when killing processes, where there may be a L0
file which is merged but undeleted - and this is believed to be inert.
There is some outstanding work still. Currently the whole store is
paused if a push_mem is received by the Penciller whilst the writing of
a L0 sft file has not yet been completed. The creation of a L0 file appears
to take about 300ms, so if the ledger_cache fills in this period a pause
will occur (perhaps due to objects with lots of index entries). It
would be preferable to pause more elegantly in this situation. Perhaps
there should be a harsh timeout on the call to check the SFT complete,
and catching it should cause a refused response. The next PUT will then
wait, but any queued GETs can progress.
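That suggestion might look something like this (the 100ms timeout value
is an illustrative assumption):

    %% Sketch of the suggested harsh timeout on the check that the L0
    %% SFT file is complete; a timeout is caught and turned into a
    %% refused response rather than blocking the whole store.
    check_sft_complete(SFTPid) ->
        try gen_server:call(SFTPid, check_ready, 100) of
            ok -> ok
        catch
            exit:{timeout, _} -> refused  % caller should retry the push
        end.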
To try and improve performance index entries had been removed from the
Ledger Cache, and a shadow list of the LedgerCache (in SQN order) was
kept to avoid gb_trees:to_list on push_mem.
This did not go well. The issue was that ets does not deal with
duplicate keys in the list when inserting (it will only insert one, but
it is not clear which one).
This has been reverted back out.
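A small demonstration of the behaviour that forced the revert:

    %% With a set table, a list containing duplicate keys results in
    %% only one object being inserted - and the documentation does not
    %% define which one.
    demo() ->
        T = ets:new(t, [set, private]),
        true = ets:insert(T, [{k, 1}, {k, 2}]),
        ets:lookup(T, k).  % one object only: [{k,1}] or [{k,2}]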
The ETS parameters have been changed to [set, private]. It is not used
as an iterator, and is no longer passed out of the process (the
memtable_copy is sent instead). This also avoids the tab2list function
being called.
The 2i work now has tests for removals as well as regex etc.
Some initial refactoring work has also been tried - to try and take some
tasks off the critical path of push_mem. The primary change has been to
avoid putting index keys into the gb_tree, and building the KeyChanges
list in parallel to the gb_tree (now known as ObjectTree) within the
Ledger Cache.
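Roughly, the split looks like the sketch below (the names are
approximations of the description above, not the exact code):

    %% Sketch: object keys/metadata go into the ObjectTree, whilst
    %% index key changes are accumulated in a plain KeyChanges list
    %% rather than being inserted into the tree.
    add_to_cache({ObjKey, ObjMeta}, IdxChanges, {ObjectTree, KeyChanges}) ->
        {gb_trees:enter(ObjKey, ObjMeta, ObjectTree),
         IdxChanges ++ KeyChanges}.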
Some initial experiments were done on changing the ETS table in the
Penciller now that it will no longer be used for iterating - but that
has been reverted for now.
Added basic support for 2i query. This involved some refactoring of the
test code to share functions between suites.
There is still a need for a Part 2 as no tests currently cover removal of
index entries.
Some additional tests following previous refactoring for abstraction,
primarily to make manifest printing safer and to prove co-existence of
Riak and non-Riak objects.
The object tag "o" which was taken from eleveldb has been extended to
allow for specific functions to be triggered for different object types,
in particular when extracting metadata for storing in the Ledger.
There is now a riak tag (o_rkv@v1), and in theory other tags can be
added and used, as long as there is an appropriate set of functions in
the leveled_codec.
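Illustratively, the dispatch looks something like the sketch below (the
helper names are assumptions, not the actual leveled_codec functions):

    %% Sketch of per-tag metadata extraction for the Ledger.
    extract_metadata(o, Obj, Size) ->
        {erlang:phash2(Obj), Size};           % generic object handling
    extract_metadata(o_rkv@v1, RiakObj, Size) ->
        %% Riak-specific extraction (e.g. vclock-based metadata)
        {riak_metadata_sketch(RiakObj), Size}.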
This test exposed two bugs:
- Yet another set of off-by-one errors (really stupidly scanning the
Manifest from Level 1 not Level 0)
- The return of an old issue related to scanning the journal on load
whereby we fail to go back to the previous file before the current SQN
Add iterator support, used initially only for retrieving bucket
statistics.
The iterator is supported by exporting a function; when the function
is called it will take a snapshot of the ledger, run the iterator and
then close the snapshot.
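In outline the exported function behaves like the sketch below
(pcl_snapshot/1, run_bucket_fold/2 and pcl_close/1 are illustrative
names, not the actual API):

    %% Sketch: the returned fun snapshots the ledger, runs the fold,
    %% and always closes the snapshot afterwards.
    bucket_stats_fun(Penciller, Bucket) ->
        fun() ->
            {ok, Snapshot} = pcl_snapshot(Penciller),
            try
                run_bucket_fold(Snapshot, Bucket)
            after
                ok = pcl_close(Snapshot)
            end
        end.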
This required a number of underlying changes, in particular to get key
comparison to work as "expected". The code had previously misunderstood
how comparison worked between Erlang terms, and in particular did not
account for tuple length being compared first by size of the tuple (and
not just by each element in order).
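For example, in the shell:

    %% Tuples compare first by size, then element by element:
    1> {b, b} < {a, a, a}.
    true
    2> {a, 1} < {a, 2}.    % equal size: compared element by element
    true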
An attempt to refactor out more complex code.
The Penciller clerk and Penciller have been re-shaped so that their
relationship is much simpler, and also to make sure that they shut down
much more neatly when the clerk is busy to avoid crashdumps in ct tests.
The CDB now has a binary_mode - so that we don't do binary_to_term twice
... although this may have made things slower ??!!? Perhaps the
is_binary check now required on read is an overhead. Perhaps it is some
other mystery.
There is now more efficient fetching of the size on pcl_load as well.
This exposed another off-by-one error on startup.
This commit also includes an unsafe change to reply early from a rolling
CDB file (with lots of objects, writing the hash table can take too
long). This is bad, but will be resolved through a refactor of the
manifest writing: essentially we deferred writing of the manifest
update which was an unnecessary performance optimisation. If instead we
wait on this, the process is made substantially simpler, and it is safer
to perform the roll of the complete CDB journal asynchronously. If the
manifest update takes too long, an append-only log may be used instead.
Add some initial system tests. This highlighted issues:
- That files deleted by compaction would be left orphaned and not
closed, and would not in fact be deleted (now deleted on closure only)
- There was an issue on startup whereby the first few keys in each
journal would not be re-loaded into the ledger