When leveled is used with Riak, buckets and keys are always binaries, so we can treat them as such.
Want to move tictac tree testing away from the leveled internal tests to a set of tests for the Riak scenario, so riak_SUITE has been created for this and other Riak-specific backend tests.
This required a switch to change the sync strategy based on a rebar parameter.
However, tests could be slow on a MacBook with OTP16 and sync, so timeouts have been added in the unit tests, and the ct test sync_strategy has been changed to not sync on OTP16.
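A minimal sketch of the compile-time switch, assuming a rebar-provided macro (the macro and option names here are illustrative, not the actual leveled flags):

    %% If rebar passes {d, no_sync} in erl_opts (e.g. for OTP16 test runs),
    %% default to not syncing on write; otherwise default to sync.
    -ifdef(no_sync).
    -define(DEFAULT_SYNC_STRATEGY, none).
    -else.
    -define(DEFAULT_SYNC_STRATEGY, sync).
    -endif.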
Need to know {Bucket, Key}, not just Key, if all buckets are being covered by nrt aae. So shoehorning this in - it will also allow for proper use of FilterFun when filtering by partition.
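A hedged sketch of the kind of filter this enables (names are illustrative, not the actual leveled API): with {Bucket, Key} available, a partition FilterFun can hash the full object name rather than the Key alone.

    %% Illustrative only: build a FilterFun that keeps entries belonging to
    %% a given partition by hashing on {Bucket, Key}.
    make_partition_filter(TargetPartition, RingSize) ->
        fun({Bucket, Key}) ->
            erlang:phash2({Bucket, Key}, RingSize) == TargetPartition
        end.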
This allows for deleted journals to be retained for a period (the waste_retention_period). The idea is that a backup strategy can ensure that all journals are backed up, even ones created and removed within a backup period - so that any restore point is possible.
This is also a precursor to removing some of the PromptDelete complexity from the Inker Clerk - all compactions can prompt deletion, as deletion is now deferred.
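A sketch of the intent, assuming the bookie is started with an options list (the start API shape is an assumption; the option name is the one introduced here):

    %% Keep removed journal files in the waste folder for 24 hours, so a
    %% backup taken at any point in that window can still capture them.
    Opts = [{root_path, "/var/db/leveled"},
            {waste_retention_period, 86400}],   % seconds
    {ok, Bookie} = leveled_bookie:book_start(Opts).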
Leveled will now signal the need for a pause due to back-pressure, but not actually pause itself. The hope is that in a Riak implementation this pause can be managed by the put_fsm, and so not lock the store.
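A hedged sketch of how a caller such as the put_fsm might honour the signal itself (the return value and arity shown are assumptions about the shape of the API, not the confirmed implementation):

    %% If the store signals back-pressure, the caller chooses how to pause;
    %% leveled itself carries on serving other requests.
    case leveled_bookie:book_put(Bookie, Bucket, Key, Object, IndexSpecs) of
        ok ->
            ok;
        pause ->
            timer:sleep(10),    % back off briefly before the next PUT
            ok
    end.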
Clean the API of Riak-specific methods, and also resolve a timing issue in the simple_server unit test. Previously this would end up with missing data (and a lower sequence number after start) because the penciller_clerk timeout was relatively large in the context of this test. Now the timeout has been reduced, the L0 slot is cleared by the time of the close. To make sure, an extra sleep has been added as a precaution to avoid any intermittent issues.
Busted the hashtable in a Journal file, and demonstrated it can be fixed
by changing the extension name (no need to recover from backup if only
the hashtable is bust)
Changes the startup options to a proplist to make it easier to use environment variables as defaults.
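A sketch of the proplist approach (the helper name is illustrative): an option falls back to an application environment variable, which in turn can carry a hard default.

    %% Look up a startup option, falling back to the application env.
    get_opt(Key, Opts) ->
        case proplists:get_value(Key, Opts) of
            undefined -> application:get_env(leveled, Key, undefined);
            Value -> Value
        end.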
Tried application:start - and completely baffled as to how to get this
to work.
Test added for the "retain" recovery strategy. This strategy makes sure a full history of index changes is kept, so that if the Ledger is wiped out, the Ledger can be fully rebuilt from the Journal.
This exposed two journal compaction problems:
- The BestRun selected did not have the source files correctly sorted in
order before compaction
- The compaction process incorrectly dealt with the KeyDelta object
left after a compaction - i.e. compacting the same key twice caused that
key's history to be lost.
These issues have now been corrected.
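A hedged sketch of starting a store with the retain strategy for the Riak object tag (the option and tag names are assumptions based on how leveled later exposes reload strategies, not confirmed against this commit):

    %% With "retain", compaction keeps KeyDeltas for compacted values, so a
    %% wiped Ledger can still be rebuilt with its index changes intact.
    Opts = [{root_path, "/var/db/leveled"},
            {reload_strategy, [{o_rkv, retain}]}],
    {ok, Bookie} = leveled_bookie:book_start(Opts).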
The CDB file management server has distinct states, and was growing case logic to prevent certain messages from being handled in certain states, and to handle different messages differently. So this has now been converted to a gen_fsm.
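For context, a minimal sketch (not the real leveled_cdb module) of the shape this takes: each file state becomes a gen_fsm state callback, so messages that are invalid in a given state are simply not matched there, replacing the growing case logic.

    -module(cdb_fsm_sketch).
    -behaviour(gen_fsm).
    %% Other required gen_fsm callbacks (handle_event/3 etc.) elided.
    -export([init/1, writer/2, rolling/2, reader/2, delete_pending/2]).

    init([FileName]) -> {ok, writer, FileName}.

    writer({put, _Key, _Value}, State) -> {next_state, writer, State};
    writer(roll, State)                -> {next_state, rolling, State}.

    rolling(complete, State)           -> {next_state, reader, State}.

    reader({get, _Key}, State)         -> {next_state, reader, State};
    reader(delete, State)              -> {next_state, delete_pending, State}.

    delete_pending(destroy, State)     -> {stop, normal, State}.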
As part of resolving this state handling, the space_clear_ondelete test has been completed, and completing this revealed that the Penciller could not cope with a change which emptied the ledger. So a series of changes has been made to allow it to progress smoothly to an empty manifest.
Further progress towards the tidying up of basement tombstones in the
Ledger, with support added for key-listing to help with testing (and as
a potentially required feature).
The test is incomplete, but committing at this stage as the last commit
broke some tests (within the test code).
There are some outstanding questions about the handling of tombstones in the Journal during compaction. There exists a condition whereby deleted values could return: a recent journal is compacted and its tombstones removed (as they are no longer present), but older journals holding the original PUTs have not been compacted. Now on stop/start - if the Ledger is wiped, the removal of the keys will be forgotten but the original PUTs would still remain.
The safest thing may be to have a rule that tombstones are never deleted from the Inker's Journal - and accept the build-up of garbage. Or there could be an addition to the compaction process that checks back through all the Inker files to confirm that the Key of a tombstone is not present in the past, before the tombstone is removed in the compaction.
There was a test that failed to close down a bookie and that caused some issues. The issue has been doubly resolved: the close-down was tidied, and the forgotten close was added back in.
There is some general tidying around in anticipation of TTL support.
Prepare SFT files for handling tombstones correctly (without expiry
dates).
Also some work, as it can be seen from tests that some SFT files are not being cleared out correctly. Pausing before trying to clear out the files, to experiment and trial the possibility that there is a timing issue.
Recent fixes have been made to problems associated with rapidly changing objects, especially on re-opening of the bookie. A test of rotating objects, from both an index query and a fetch perspective, has been added to better detect such issues in the future.
To try and improve performance index entries had been removed from the
Ledger Cache, and a shadow list of the LedgerCache (in SQN order) was
kept to avoid gb_trees:to_list on push_mem.
This did not go well. The issue was that ets does not deal with
duplicate keys in the list when inserting (it will only insert one, but
it is not clear which one).
This has been reverted back out.
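The ets behaviour in question can be demonstrated directly (a small illustrative snippet, not code from the store):

    %% Inserting a list with duplicate keys into a set table keeps only one
    %% of the objects, and the documentation does not define which.
    T = ets:new(sketch, [set, private]),
    true = ets:insert(T, [{k, v1}, {k, v2}]),
    [{k, _OneOfThem}] = ets:lookup(T, k).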
The ETS parameters have been changed to [set, private]. It is not used
as an iterator, and is no longer passed out of the process (the
memtable_copy is sent instead). This also avoids the tab2list function
being called.
Added basic support for 2i query. This involved some refactoring of the test code to share functions between suites.
There is still a need for a Part 2, as no tests currently cover removal of index entries.
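A hedged sketch of the PUT-side shapes involved (the index-spec tuples and book_put arity are assumptions, not confirmed against the code at this commit); the removal case is the untested Part 2:

    %% Add an index entry alongside an object, then remove it on a later PUT.
    AddSpec    = [{add, <<"idx1_bin">>, <<"20161018">>}],
    RemoveSpec = [{remove, <<"idx1_bin">>, <<"20161018">>}],
    leveled_bookie:book_put(Bookie, Bucket, Key, Obj1, AddSpec),
    leveled_bookie:book_put(Bookie, Bucket, Key, Obj2, RemoveSpec).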