* Add performance/profiling test
Add test to perf_SUITE to do performance tests and also profile different activities in leveled.
This can then be used to highlight functions with unexpectedly high execution times, and prove the impact of changes.
Switch between riak_ctperf and riak_fullperf to change from standard test (with profile option) to full-scale performance test
* Change shape of default perfTest
* Change fullPerf
Change the fullPerf test to run more tests, but with fewer keys.
Given that an RS of 512 is being pushed in Riak, 2M objects is still a 300M+ object cluster, and 10M maps to over 1B - so these are still reasonable sizes to test.
A profilePerf test is also added to generate all the profiles based on 2M objects.
* Extend test
Queries were previously all returning a large number of index entries - changes made to make the number of entries per query more realistic. An update process is also added to show the difference between loading and rotating keys.
* Relabel as AAE fold
* Test v5
Test mini-queries - where generally a small number of entries are returned
* Default to ctperf
* Mas i410 looptoclose (#420)
* Stop waiting full SHUTDOWN_PAUSE
If there is a snapshot outstanding at shutdown time, there was a wait of SHUTDOWN_PAUSE to give the snapshot time to close down.
This causes an issue in kv_index_tictactree when a rebuild completes while an exchange is in flight - the aae_controller becomes blocked for the full shutdown pause, whilst it waits for the replaced key store to be closed.
This change loops within the shutdown pause, so that if the snapshot supporting the exchange is closed, the paused bookie can close more quickly (unblocking the controller).
Without this fix, there are intermittent issues in kv_index_tictactree's mockvnode_SUITE tests.
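A minimal sketch of the looped wait (function name and slice size are illustrative, not the actual leveled implementation):

```erlang
-define(SLICE_MS, 100).

%% Poll in short slices rather than sleeping for the full
%% SHUTDOWN_PAUSE; return as soon as all snapshots are released.
wait_for_release(_HasSnapshotsFun, Remaining) when Remaining =< 0 ->
    timeout;
wait_for_release(HasSnapshotsFun, Remaining) ->
    case HasSnapshotsFun() of
        false ->
            ok;  % snapshot closed - the bookie can now close
        true ->
            timer:sleep(?SLICE_MS),
            wait_for_release(HasSnapshotsFun, Remaining - ?SLICE_MS)
    end.
```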
* Address test reliability
Be a bit clearer with waiting round seconds. The test was intermittently failing on QR4 previously (but QR5, 1s later, was always OK).
* Update iterator_SUITE.erl
* Refine test assertion
At Stage C there might be 0 files left, in which case equality with the Stage D result is OK.
* Allow snapshots to be reused in queries
Allow for a full bookie snapshot to be re-used for multiple queries, not just KV fetches.
* Reduce log noise
The internal dummy tag is expected, so should not prompt a log on reload.
* Snapshot should have same status as active db
with respect to head_only and head_lookup
* Allow logging to be specified on snapshots
* Shut down snapshot bookie if primary goes down
Inker and Penciller will already shut down based on `erlang:monitor/2`
* Review feedback
Formatting and code readability fixes
* Add compression controls (#417)
* Add compression controls
Add configuration options to allow for a compression algorithm of `none` to disable compression altogether. Also an option to change the point in the LSM tree when compression is applied.
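For illustration, disabling compression at startup might look like the following (`compression_method` is an existing leveled startup option; the `none` value is what this change adds):

```erlang
%% Start a bookie with compression disabled (path illustrative).
StartOpts = [{root_path, "/tmp/leveled_data"},
             {compression_method, none}],
{ok, Bookie} = leveled_bookie:book_start(StartOpts).
```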
* Handle configurable defaults consistently
Move them into leveled.hrl. This forces double-definitions to be resolved.
There are some other constants in leveled_bookie that are relevant outside of leveled_bookie. These are all now in the non-configurable startup defaults section.
* Clarify referred-to default is OTP not leveled
* Update leveled_bookie.erl
Handle xref issue with eunit include
* Use backwards compatible term_to_binary
So that where we have hashed term_to_binary output in OTP 25 or earlier, that hash will be matched in OTP 26.
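A sketch of the approach, assuming the fix pins the encoding version (OTP 26 changed the default `minor_version`, so unpinned output no longer hashes identically):

```erlang
%% Pin minor_version so term_to_binary output - and hence any hash
%% taken of it - matches what OTP 25 and earlier produced by default.
t2b(Term) ->
    term_to_binary(Term, [{minor_version, 1}]).
```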
* Test reliability
If all keys are put in order, the max_slots may not be used, as the driver at L0 is the penciller cache size, and merge to new files (managed by the parameter) only occurs when there are overlapping files in the level below.
* Close in stages - waiting for releases
Have a consistent approach to closing the inker and the penciller - so that the close can be interrupted by releasing of snapshots. Then any unreleased snapshots are closed before shutdown - with a 10s pause to give queries a short opportunity to finish.
This should address some issues, primarily seen (but very rarely) in test, whereby post-rebuild destruction of parallel AAE keystores causes the crashing of aae_folds.
The primary benefit is to ensure that an attempt to release a snapshot that has in fact already finished does not cause a crash of the database on normal stop. This was primarily an issue when shutdown is delayed by an ongoing journal compaction job.
* Boost default test budget for EQC
* Update test to use correct type
* Update following review
Avoid filtering out exited PIDs when closing snapshots by catching the exit exception when the Pid is down
* Mas i370 patch d (#383)
* Refactor penciller memory
In high-volume tests on large key-count clusters, significant variation in the P0031 time has been seen:
TimeBucket PatchA
a.0ms_to_1ms 18554
b.1ms_to_2ms 51778
c.2ms_to_3ms 696
d.3ms_to_5ms 220
e.5ms_to_8ms 59
f.8ms_to_13ms 40
g.13ms_to_21ms 364
h.21ms_to_34ms 277
i.34ms_to_55ms 34
j.55ms_to_89ms 17
k.89ms_to_144ms 21
l.144ms_to_233ms 31
m.233ms_to_377ms 45
n.377ms_to_610ms 52
o.610ms_to_987ms 59
p.987ms_to_1597ms 55
q.1597ms_to_2684ms 54
r.2684ms_to_4281ms 29
s.4281ms_to_6965ms 7
t.6965ms_to_11246ms 1
It is unclear why this varies so much. The time to add to the cache appears to be minimal (but perhaps there is an issue with timing points in the code), whereas the time to add to the index is much more significant and variable. There is also variable time when the memory is rolled (although the actual activity here appears to be minimal).
The refactoring here is two-fold:
- tidy and simplify by keeping LoopState managed within handle_call, and add more helpful dialyzer specs;
- change the update to the index to be a simple extension of a list, rather than any conversion.
In unit tests, this alternative version of the pmem index is orders of magnitude faster to add to - and the same order of magnitude to check. The anticipation is that it may also be more efficient in terms of memory changes.
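A simplified illustration of the second change (not the actual leveled_pmem code): each push extends a list, deferring any conversion cost:

```erlang
%% Each push simply prepends the new cache's index; no per-push
%% conversion of the accumulated index structure is performed.
add_to_index(NewCacheIndex, IndexList) ->
    [NewCacheIndex | IndexList].
```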
* Compress SST index
Reduces the size of the leveled_sst index with two changes:
1 - Where there is a common prefix of tuple elements (e.g. Bucket) across the whole leveled_sst file - only the non-common part is indexed, and a function is used to compare.
2 - There is less "indexing" of the index, i.e. only 1 in 16 keys is passed into the gb_trees part instead of 1 in 4
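A rough sketch of the second change (illustrative only, not the leveled_sst implementation): insert only every 16th key into the gb_trees index, accepting a short scan to cover the gap on lookup:

```erlang
%% Build a sparse index over a sorted key list, keeping 1 in 16 keys.
sparse_index(SortedKeys) ->
    {_Count, Tree} =
        lists:foldl(
            fun(K, {I, T}) when I rem 16 == 0 ->
                    {I + 1, gb_trees:insert(K, I, T)};
               (_K, {I, T}) ->
                    {I + 1, T}
            end,
            {0, gb_trees:empty()},
            SortedKeys),
    Tree.
```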
* Immediate hibernate
Reasons for delay in hibernate were not clear.
Straight after creation the process will not be in receipt of messages (must wait for the manifest to be updated), so better to hibernate now. This also means the log PC023 provides more accurate information.
* Refactor BIC
This patch avoids the following:
- repeated replacement of the same element in the BIC (via get_kvrange), by checking presence via GET before using SET
- Stops re-reading of all elements to discover high modified date
There also appears to have been a bug whereby a missing HMD (high modified date) for the file was required in order to add to the cache. However, now the cache may be erased without erasing the HMD. This means that the cache could never be rebuilt.
* Use correct size in test results
erts_debug:flat_size/1 returns size in words (i.e. 8 bytes on a 64-bit CPU), not bytes
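For example, a helper to report bytes (function name illustrative):

```erlang
%% erts_debug:flat_size/1 returns words; multiply by the VM word size
%% (8 on a 64-bit system) to get bytes.
flat_size_in_bytes(Term) ->
    erts_debug:flat_size(Term) * erlang:system_info(wordsize).
```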
* Don't change summary record
As it is persisted as part of the file write, any change to the summary record cannot be rolled back
* Clerk to prompt L0 write
Simplifies the logic: the clerk's request for penciller work now prompts L0 writes as well as manifest changes.
The advantage now is that if the penciller memory is full, and PUT load stops, the clerk should still be able to prompt persistence. The penciller can therefore make use of dead time this way.
* Add push on journal compact
If there has been a backlog, followed by a quiet period - there may be a large ledger cache left unpushed. Journal compaction events are about once per hour, so the performance overhead of a false push should be minimal, with the advantage of clearing any backlog before load starts again.
This is only relevant to Riak users with very off/full batch-type workloads.
* Extend tests
To more consistently trigger all overload scenarios
* Fix range keys smaller than prefix
Can't make end key an empty binary in this case, as it may be bigger than any keys within the range, but will appear to be smaller.
Unit tests and ct tests added to expose the potential issue
* Tidy-up
- Remove penciller logs which are no longer called
- Get pclerk to only wait MIN_TIMEOUT after doing work, in case there is a backlog
- Remove update_levelzero_cache function as it is unique to handle_call of push_mem, and simple enough to be inline
- Align testutil slow offer with the standard slow offer used
* Tidy-up
Remove pre-otp20 references.
Reinstate the check that the starting pid is still active; this was added to tidy up shutdown.
Resolve failure to run on otp20 due to `-if` statement
* Tidy up
Using null rather than {null, Key} is potentially clearer, as it is not a concern what the Key is in this case, and it removes a comparison step from the leveled_codec:endkey_passed/2 function.
There were issues with coverage in eunit tests as leveled_pclerk shut down. This prompted a general tidy of leveled_pclerk (removing the passing of LoopState into internal functions, and adding dialyzer specs).
* Remove R16 relic
* Further testing another issue
The StartKey must always be less than or equal to the prefix when the first N characters are stripped, but this is not true of the EndKey (for the query) which does not have to be between the FirstKey and the LastKey.
If the EndKey of the query does not match, it must be greater than the Prefix (as otherwise it would not have been greater than the FirstKey) - so it is set to null.
* Fix unit test
Unit test had a typo - and result interpretation had a misunderstanding.
* Code and spec tidy
Also look to cover, with tests, the situation where the FirstKey is the same as the Prefix.
This is, in theory, not an issue as it is the EndKey for each sublist which is indexed in leveled_tree. However, guard against it mapping to null here, just in case there are dangers lurking (note that tests will still pass without the `M > N` guard in place).
* Hibernate on BIC complete
There are three situations when the BIC becomes complete:
- In a file created as part of a merge, the BIC is learned in the merge
- After startup, files below L1 learn the block cache through reads that happen to read the block; eventually the whole cache will be read, unless...
- Either before/after the cache is complete, it can get wiped by a timeout after a get_sqn request (e.g. as prompted by a journal compaction) ... it will then be re-filled off the back of get/get-range requests.
In all these situations we want to hibernate after the BIC is full - to reflect the fact that the LoopState should now be relatively stable, so it is a good point to GC and rationalise the location of data.
Previously only the first case was covered. Now all three are covered through the bic_complete message.
* Test all index keys have same term
This works functionally, but is not optimised (the term is replicated in the index)
* Summaries with same index term
If the summary index entries all have the same index term - only the object keys need to be indexed
* Simplify case statements
We either match the pattern `<<Prefix:N/binary, Suffix/binary>>` or the answer should be null
* OK for M == N
If M == N for the first key, it will have a suffix of <<>>. This will match (as expected) a query Start Key of the same size, and be smaller than any query Start Key that has the same prefix.
If the query Start Key does not match the prefix - it will be null - as it must be smaller than the Prefix (as otherwise the query Start Key would be bigger than the Last Key).
The constraint of M > N was introduced before the *_prefix_filter functions were checking the prefix, to avoid issues. Now the prefix is being checked, then M == N is ok.
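A minimal sketch of the filter logic described above (function name hypothetical):

```erlang
%% Map a key to its suffix relative to the file's common prefix. A
%% matching key with M == N yields <<>>, which compares smaller than
%% any non-empty suffix; a non-matching key maps to null.
strip_to_suffix(Prefix, Key) ->
    N = byte_size(Prefix),
    case Key of
        <<Prefix:N/binary, Suffix/binary>> -> Suffix;
        _ -> null
    end.
```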
* Simplify
Correct the test to use a binary field in the range.
To avoid further issue, only apply filter when everything is a binary() type.
* Add test for head_only mode
When leveled is used as a tictacaae key store (in parallel mode), the keys will be head_only entries. Double-check they are handled as expected, like object keys.
* Revert previous change - must support typed buckets
Add assertion to confirm worthwhile optimisation
* Add support for configurable cache multiple (#375)
* Mas i370 patch e (#385)
Improvement to monitoring for efficiency and improved readability of logs and stats.
As part of this, where possible, tried to avoid updating loop state on READ messages in leveled processes (as was the case when tracking stats within each process).
No performance benefits found with change, but improved stats has helped discover other potential gains.
Potentially reduce the overheads of scoring each file on every run.
The change also alters the default thresholds for compaction to favour longer runs (which will tend towards greater storage efficiency).
Confirm it results in many more files if the slot count is reduced. Has to handle the fact that the Level 0 file has unlimited slots regardless of the number of slots configured.
Add a test that will cause leveled to crash due to a low cache size being set - but protect against this (as well as the general scenario of the cache being full).
There could be a potential case where an L0 file is present (post pending) without the work backlog being set. In this case we want to roll the level zero to memory, but don't accept the cache update if the L0 cache is already full.
Will not lead to immediate run time changes in SST or CDB logs. These log settings will only change once the new files are re-written.
To completely change the log level - a restart of the store is necessary with new startup options.
More obvious how to extend the code as it is all in one module.
Also add a new field to the standard object metadata tuple that may in future hold other object metadata based on user-defined functions.
Both log level and forced_logs. Allows for log_level to be changed at startup and at runtime. Also allow for a list of forced logs, so if log_level is set above info, individual info logs can be forced to be seen (such as the stats logs).
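An illustrative startup using these options (the specific log references here are placeholders):

```erlang
%% Raise the log level, but force individual info logs (e.g. stats
%% logs) to still be emitted.
StartOpts = [{root_path, "/tmp/leveled_data"},
             {log_level, warn},
             {forced_logs, [b0015, b0016]}],
{ok, Bookie} = leveled_bookie:book_start(StartOpts).
```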
As the fold functions have been added to get_runner in an ad hoc way, given the ongoing development of leveled to support Riak, it was difficult for a new user (in this case Quviq) to see what folds are supported, with what arguments, and with what expectations.
This PR is for discussion. It is one of many ways to group, spec, and document the fold functions.
A test is also added for coverage of range queries.
Test takes a long time due to sleep (still need to work on that), but also the FoldKeysFun uses ++ rather than [{B, K}|Acc] to extend the list. Changing the way this accumulates gives an order-of-magnitude speed-up for these queries.
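The accumulator change in question, for illustration:

```erlang
%% Appending with ++ walks the whole accumulator for every key (O(n)
%% per key); consing is O(1), with one lists:reverse/1 at the end if
%% the original ordering is needed.
SlowFoldKeysFun = fun(B, K, Acc) -> Acc ++ [{B, K}] end,
FastFoldKeysFun = fun(B, K, Acc) -> [{B, K} | Acc] end.
```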
Compression can be switched between LZ4 and zlib (native).
The setting to determine if compression should happen on receipt is now a macro definition in leveled_codec.
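For illustration, the receipt-time setting is a compile-time macro along these lines (the macro name here is a hedged guess, not the confirmed definition):

```erlang
%% In leveled_codec: compress on receipt of the object, rather than
%% deferring compression until compaction.
-define(COMPRESS_ON_RECEIPT, true).
```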
Discovered a bug with search ranges in leveled_tree - this was uncovered by an intermittently failing 19.3 test.
Test case added and bug fixed. It was due to a failure to use the end_key passed, causing issues with particular manifests and full bucket ranges.
Introduce a dedicated module for all the different fold types. Also simplify the list of folders by deprecating those folds that should be achievable by fold_heads/fold_objects type folds with smarter functions.
Makes sure that the fold functions also have better spec coverage, and are dialyzer checked.
As described in https://github.com/martinsumner/leveled/issues/92
Only the first fix was made.
Just to be safe - archiving means renaming to another file with a different extension. The assumption is that renamed files can be manually reaped if necessary.
Obviously got totally messed up and confused when testing previous commits.
Multiple tests were failing for a change which got merged in, as the tests were not reflecting the required API.
Need to allow specific settings to be passed into unit tests.
Also, too much journal compaction may lead to intermittent failures on the basic_SUITE space_clear_on_delete test. I think this is because there are fewer “deletes” to reload in on startup to trigger the cascade down and clear up?
The fold objects which snaps in the fold was implemented incorrectly - it took information from the LedgerCache at the point of the request, not at the point of the fold. So the LedgerCache SQN may have been surpassed in the Penciller by the time the fold was called.
When rolling we already know the last_key - no need to seek for it on startup.
The time it takes for this seek needs to be considered with regards to startup time. Can we do without knowing lastkey?