Confirm it results in many more files if the slot count is reduced. Has to handle the fact that the Level 0 file has unlimited slots regardless of the number of slots configured.
A test that will cause leveled to crash due to a low cache size being set - but protect against this (as well as the general scenario of the cache being full).
There could be a potential case where an L0 file is present (post pending) without the work backlog being set. In this case we want to roll level zero to memory, but not accept the cache update if the L0 cache is already full.
This will not lead to immediate run-time changes in SST or CDB logs. These log settings will only take effect once new files are written.
To completely change the log level, a restart of the store with new startup options is necessary.
More obvious how to extend the code as it is all in one module.
Also add a new field to the standard object metadata tuple that may in the future hold other object metadata based on user-defined functions.
Both log_level and forced_logs. Allows for the log_level to be changed at startup and at runtime. Also allows for a list of forced logs, so if the log_level is set above info, individual info logs can be forced to be seen (such as the stats logs).
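A minimal sketch of how these settings might be passed as startup options (the exact option names and the log reference codes below are assumptions for illustration, not a confirmed API):

    %% Option names and log codes here are illustrative assumptions.
    StartOpts = [{root_path, "/tmp/leveled_test"},
                 {log_level, warn},
                 {forced_logs, ["B0015", "B0016"]}],  %% force selected info (stats) logs
    {ok, Bookie} = leveled_bookie:book_start(StartOpts)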
As the fold functions have been added to get_runner in an ad hoc way,
naturally, given the ongoing development of levelEd to support Riak,
it was difficult for a new user (in this case Quviq) to see what folds
are supported, with what arguments, and with what expectations.
This PR is for discussion. It is one of many ways to group, spec, and
document the fold functions.
A test is also added for coverage of range queries.
The test takes a long time due to a sleep (still need to work on that), but also FoldKeysFun used ++ rather than [{B, K}|Acc] to extend the list. There is an order-of-magnitude speed-up for these queries from changing the way this accumulates.
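For illustration only (not the actual leveled code), the difference between the two accumulation patterns - appending with ++ copies the accumulator for every element, whereas prepending with cons is constant time:

    %% Slow: Acc is copied for each key, so the fold is quadratic overall.
    SlowFoldKeysFun = fun(B, K, Acc) -> Acc ++ [{B, K}] end,

    %% Fast: prepend in constant time; reverse once at the end if order matters.
    FastFoldKeysFun = fun(B, K, Acc) -> [{B, K} | Acc] end,
    %% Final = lists:reverse(AccOut)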
Compression can be switched between LZ4 and zlib (native).
The setting to determine if compression should happen on receipt is now a macro definition in leveled_codec.
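A sketch of what this might look like (the macro and option names are assumptions for illustration, not confirmed definitions):

    %% In leveled_codec - illustrative macro name, an assumption:
    -define(COMPRESS_ON_RECEIPT, true).

    %% At startup - the compression_method option name is an assumption:
    StartOpts = [{root_path, "/tmp/leveled_test"},
                 {compression_method, lz4}],   %% or native for zlib
    {ok, Bookie} = leveled_bookie:book_start(StartOpts)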
Discovered a bug with search ranges in leveled_tree - this was uncovered by an intermittently failing 19.3 test.
Test case added and bug fixed. It was due to a failure to use the end_key passed, causing issues with particular manifests and full bucket ranges.
Introduce a dedicated module for all the different fold types. Also simplify the list of folders by deprecating those folds that should be achievable by fold_heads/fold_objects type folds but with smarter functions.
Makes sure that the fold functions also have better spec coverage, and are dialyzer checked.
As described in https://github.com/martinsumner/leveled/issues/92
Only the first fix was made.
Just to be safe - archiving means renaming to another file with a different extension. The assumption is that renamed files can be manually reaped if necessary.
Obviously got totally messed up and confused when testing previous
commits.
Multiple tests were failing for a change which got merged in as the
tests were not reflecting the required API.
Need to allow specific settings to be passed into unit tests.
Also, too much journal compaction may lead to intermittent failures on
the basic_SUITE space_clear_on_delete test. I think this is because
there are fewer “deletes” to reload in on startup to trigger the cascade
down and clear up?
The fold objects query which snaps in the fold was implemented incorrectly - it
took information from the LedgerCache at the point of the request, not
at the point of the fold. So the LedgerCache SQN may have been
surpassed in the Penciller by the time the fold was called.
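Conceptually the fix is to take the snapshot inside the fold closure, so the LedgerCache is read when the fold actually runs rather than when it is requested - a simplified sketch with hypothetical helper names:

    %% take_snapshot/1 and fold_over_snapshot/3 are hypothetical helpers.
    make_object_folder(Bookie, FoldFun, Acc0) ->
        fun() ->
            %% Snapshot taken here, at fold time, so the LedgerCache SQN
            %% cannot have been surpassed before the snapshot is captured.
            {ok, Snapshot} = take_snapshot(Bookie),
            fold_over_snapshot(Snapshot, FoldFun, Acc0)
        end.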
When rolling we already know the last_key - no need to seek for it on
startup.
The time it takes for this seek needs to be considered with regards to
startup time. Can we do without knowing the last_key?
This allows for deleted journals to be retained for a period (the
waste_retention_period). The idea being that a backup strategy can
ensure that all journals are backed up, even ones created and removed
from within a backup period - so that any restore point is possible.
This is also a pre-cursor to removing some of the PromptDelete
complexity from the Inker Clerk - all compactions can prompt deletion as
deletion is now deferred.
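A sketch of how the retention period might be set at startup (the option form and the unit are assumptions for illustration):

    %% Keep deleted journal files for 24 hours before they may be reaped,
    %% so a daily backup job can copy them first.
    StartOpts = [{root_path, "/data/leveled"},
                 {waste_retention_period, 86400}],   %% assumed to be seconds
    {ok, Bookie} = leveled_bookie:book_start(StartOpts)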
Leveled will now signal the need for a pause due to back-pressure, but
not actually pause itself. The hope is that in a riak implementation
this pause can be managed by the put_fsm, and so not lock the store.
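As an illustration of how a caller such as a put_fsm might honour the signal - assuming the put returns an atom like pause instead of ok when back-pressure applies:

    %% Illustrative only - the pause return and its handling are assumptions.
    do_put(Bookie, Bucket, Key, Object, IndexSpecs) ->
        case leveled_bookie:book_put(Bookie, Bucket, Key, Object, IndexSpecs) of
            ok ->
                ok;
            pause ->
                %% The store signals back-pressure but does not block itself;
                %% the caller decides how (and whether) to slow down.
                timer:sleep(10),
                ok
        end.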
Clean the API of Riak-specific methods, and also resolve a timing issue in
the simple_server unit test. Previously this would end up with missing data
(and a lower sequence number after start) because of the penciller_clerk
timeout being relatively large in the context of this test. Now the
timeout has been reduced, the L0 slot is cleared by the time of the
close. To make sure, an extra sleep has been added as a precaution to
avoid any intermittent issues.
Added a test of journal compaction with a registered snapshot and it
showed that the deleting of files did not correctly check the list of
registered snapshots. Corrected.