* Add performance/profiling test
Add test to perf_SUITE to do performance tests and also profile different activities in leveled.
This can then be used to highlight functions with unexpectedly high execution times, and prove the impact of changes.
Switch between riak_ctperf and riak_fullperf to change from standard test (with profile option) to full-scale performance test
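The switch is a one-line change in the suite's `all/0` function - a sketch only, the actual selection mechanism in perf_SUITE may differ:

```erlang
%% In perf_SUITE, select which test set ct runs (illustrative sketch).
all() -> [riak_ctperf].      % standard test, supports the profile option
%% all() -> [riak_fullperf]. % full-scale performance test
```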
* Change shape of default perfTest
* Change fullPerf
Change the fullPerf test to run more tests, but with fewer keys.
Given that a ring size of 512 is being pushed in Riak, 2M objects still implies a 300M+ object cluster, and 10M implies more than 1B - so these are still reasonable sizes to test.
A profilePerf test was also added to generate all the profiles based on 2M objects.
* Extend test
Queries were previously all returning a large number of index entries - changes were made to make the number of entries per query more realistic. Also an update process was added to show the difference between loading and rotating keys.
* Relabel as AAE fold
* Test v5
Test mini-queries - where generally a small number of entries are returned
* Default to ctperf
* Mas i410 looptoclose (#420)
* Stop waiting full SHUTDOWN_PAUSE
Previously, if there was a snapshot outstanding at shutdown time, there was a wait of the full SHUTDOWN_PAUSE to give the snapshot time to close down.
This causes an issue in kv_index_tictactree when a rebuild completes while an exchange is in flight - the aae_controller will become blocked for the full shutdown pause, whilst it waits for the replaced key store to be closed.
This change is to loop within the shutdown pause, so that if the snapshot supporting the exchange is closed, the paused bookie can close more quickly (unblocking the controller).
Without this fix, there are intermittent issues in kv_index_tictactree's mockvnode_SUITE tests.
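A minimal sketch of the looped wait - the helper `snapshot_count/1` and the `?SHUTDOWN_LOOPS` macro are hypothetical names, not the leveled implementation:

```erlang
%% Loop in slices of the pause, closing early if the snapshot is released.
wait_for_snapshots(_Bookie, 0) ->
    ok;  % full pause has elapsed - close anyway
wait_for_snapshots(Bookie, LoopsLeft) ->
    case snapshot_count(Bookie) of  % hypothetical helper
        0 -> ok;  % snapshot released - close early, unblocking callers
        _ ->
            timer:sleep(?SHUTDOWN_PAUSE div ?SHUTDOWN_LOOPS),
            wait_for_snapshots(Bookie, LoopsLeft - 1)
    end.
```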
* Address test reliability
Be a bit clearer with waiting round seconds. Was intermittently failing on QR4 previously (but QR5, 1s later, was always OK).
* Update iterator_SUITE.erl
* Refine test assertion
At Stage C there might be 0 files left, in which case equality with Stage D result is ok.
* Allow snapshots to be reused in queries
Allow for a full bookie snapshot to be re-used for multiple queries, not just KV fetches.
* Reduce log noise
The internal dummy tag is expected so should not prompt a log on reload
* Snapshot should have same status as active db
with respect to head_only and head_lookup
* Allow logging to be specified on snapshots
* Shutdown snapshot bookie if primary goes down
Inker and Penciller already will shut down based on `erlang:monitor/2`
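The equivalent shape for the snapshot bookie can be sketched with a monitor on the primary process, stopping on the 'DOWN' message - an illustrative gen_server fragment, not the actual leveled code:

```erlang
-record(state, {monitor_ref :: reference()}).

init([PrimaryPid]) ->
    Ref = erlang:monitor(process, PrimaryPid),
    {ok, #state{monitor_ref = Ref}}.

handle_info({'DOWN', Ref, process, _Pid, _Reason},
            State = #state{monitor_ref = Ref}) ->
    {stop, normal, State}.  % primary is gone - stop the snapshot too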
* Review feedback
Formatting and code readability fixes
* Add compression controls (#417)
* Add compression controls
Add configuration options to allow for a compression algorithm of `none` to disable compression altogether. Also an option to change the point in the LSM tree when compression is applied.
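A hedged example of passing such options at startup - `leveled_bookie:book_start/1` and `compression_method` are existing leveled options, but the level-based option name below is an assumption; check the current documentation:

```erlang
%% Illustrative only - option names and defaults may differ.
{ok, Bookie} =
    leveled_bookie:book_start(
        [{root_path, "/tmp/leveled"},
         {compression_method, none},  % `none` disables compression
         {compression_level, 3}]).    % assumed name: LSM level from which to compress
```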
* Handle configurable defaults consistently
Move them into leveled.hrl. This forces double-definitions to be resolved.
There are some other constants in leveled_bookie that are relevant outside of leveled_bookie. These are all now in the non-configurable startup defaults section.
* Clarify referred-to default is OTP not leveled
* Update leveled_bookie.erl
Handle xref issue with eunit include
* Close in stages - waiting for releases
Have a consistent approach to closing the inker and the penciller - so that the close can be interrupted by releasing of snapshots. Then any unreleased snapshots are closed before shutdown - with a 10s pause to give queries a short opportunity to finish.
This should address some issues, primarily seen (but very rarely) in test, whereby post-rebuild destruction of parallel AAE keystores causes the crashing of aae_folds.
The primary benefit is to ensure that an attempt to release a snapshot that has in fact already finished does not cause a crash of the database on normal stop. This was primarily an issue when shutdown is delayed by an ongoing journal compaction job.
* Boost default test budget for EQC
* Update test to use correct type
* Update following review
Avoid filtering out exited PIDs when closing snapshots by catching the exit exception when the Pid is down
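The shape of that fix, sketched with a hypothetical helper (not the actual leveled code):

```erlang
close_snapshot(SnapPid) ->
    try
        ok = gen_server:call(SnapPid, close, 10000)
    catch
        exit:_ ->  % Pid already down - treat as closed
            ok
    end.
```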
* refactor leveled_sst from gen_fsm to gen_statem
* format_status/2 takes State and State Data
This function is deprecated, but is put in for backward compatibility.
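For reference, a minimal sketch of the gen_statem shape used after the refactor - state names here are illustrative, not the real leveled_sst states:

```erlang
-module(statem_sketch).
-behaviour(gen_statem).
-export([start_link/0, init/1, callback_mode/0, format_status/2]).
-export([starting/3, reader/3]).

start_link() ->
    gen_statem:start_link(?MODULE, [], []).

callback_mode() ->
    state_functions.  % one callback function per named state

init([]) ->
    {ok, starting, #{}}.

starting({call, From}, open, StateData) ->
    {next_state, reader, StateData, [{reply, From, ok}]}.

reader({call, From}, {get, _Key}, _StateData) ->
    {keep_state_and_data, [{reply, From, not_found}]}.

%% Deprecated in favour of format_status/1; retained for compatibility.
format_status(_Opt, [_PDict, State, StateData]) ->
    {State, StateData}.
```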
* refactor leveled_cdb from gen_fsm to gen_statem
* disable irrelevant warning ignorer
* Remove unnecessary code paths
Only support messages, especially info messages, where they are possible.
* Mas i1820 offlinedeserialisation cbo (#403)
* Log report GC Info by manifest level
* Hibernate on range query
If Block Index Cache is not full, and we're not yielding
* Spawn to deserialise blocks offline
Hypothesis is that the growth in the heap necessary due to continual binary_to_term calls to deserialise blocks is wasting memory - so do this memory-intensive task in a short-lived process.
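The pattern, sketched (illustrative, not the actual leveled_sst code):

```erlang
%% Deserialise in a short-lived linked process, so the heap grown by
%% binary_to_term/1 is reclaimed as soon as the process exits.
deserialise_offline(BlockBin) ->
    Parent = self(),
    Worker = spawn_link(
        fun() -> Parent ! {block, self(), binary_to_term(BlockBin)} end),
    receive
        {block, Worker, Term} -> Term
    end.
```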
* Start with hibernate_after option
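`hibernate_after` is a standard OTP start option - the process hibernates if idle for the given number of milliseconds. For example (the timeout value here is illustrative):

```erlang
gen_statem:start_link(?MODULE, Opts, [{hibernate_after, 5000}]).
```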
* Always build BIC
Testing indicates that the BIC itself is not a primary memory issue - the primary issue is due to a lack of garbage collection and a growing heap.
This change enhances the patch to offline serialisation so that:
- get_sqn & get_kv are standardised to build the BIC, and hibernate when it is built.
- the offline Pid is linked to crash this process on failure (as would happen now).
* Standardise spawning for check_block/3
Now deserialise in both parts of the code.
* Only spawn for check_block if cache not full
* Update following review
* Standardise formatting
Make test more reliable. Show no new compaction after third compaction.
* Update comments
---------
Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
* Protect penciller from empty ledger cache updates
which may occur when loading the ledger from the journal, after the ledger has been cleared.
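The guard can be sketched as follows - the function shape is illustrative and the real ledger cache is a richer structure than a plain list:

```erlang
push_ledgercache(_Penciller, Cache) when Cache == [] ->
    ok;  % nothing to push - protect the penciller from empty updates
push_ledgercache(Penciller, Cache) ->
    leveled_penciller:pcl_pushmem(Penciller, Cache).
```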
* Score caching and randomisation
The test allkeydelta_journal_multicompact can occasionally fail when a compaction doesn't happen, but then does happen on the next loop. Suspect this is a result of score caching, randomisation of key grabs for scoring, plus jitter on size boundaries.
Modified the test for predictability.
Plus formatting changes
* Avoid small batches
Avoid small batches due to large SQN gaps
* Rationalise tests
Two tests overlapped with the new, much broader, replace_everything/1 test. Ported over any remaining checks of interest and dropped the two tests.
* Mas i370 patch d (#383)
* Refactor penciller memory
In high-volume tests on large key-count clusters, some significant variation in the P0031 time has been seen:
TimeBucket              PatchA
a.0ms_to_1ms            18554
b.1ms_to_2ms            51778
c.2ms_to_3ms            696
d.3ms_to_5ms            220
e.5ms_to_8ms            59
f.8ms_to_13ms           40
g.13ms_to_21ms          364
h.21ms_to_34ms          277
i.34ms_to_55ms          34
j.55ms_to_89ms          17
k.89ms_to_144ms         21
l.144ms_to_233ms        31
m.233ms_to_377ms        45
n.377ms_to_610ms        52
o.610ms_to_987ms        59
p.987ms_to_1597ms       55
q.1597ms_to_2684ms      54
r.2684ms_to_4281ms      29
s.4281ms_to_6965ms      7
t.6965ms_to_11246ms     1
It is unclear why this varies so much. The time to add to the cache appears to be minimal (but perhaps there is an issue with timing points in the code), whereas the time to add to the index is much more significant and variable. There is also variable time when the memory is rolled (although the actual activity here appears to be minimal).
The refactoring here is two-fold:
- tidy and simplify by keeping LoopState managed within handle_call, and add more helpful dialyzer specs;
- change the update to the index to be a simple extension of a list, rather than any conversion.
This alternative version of the pmem index is, in unit tests, orders of magnitude faster to add to - and is the same order of magnitude to check. The anticipation is that it may also be more efficient in terms of memory changes.
* Compress SST index
Reduces the size of the leveled_sst index with two changes:
1 - Where there is a common prefix of tuple elements (e.g. Bucket) across the whole leveled_sst file - only the non-common part is indexed, and a function is used to compare.
2 - There is less "indexing" of the index i.e. only 1 in 16 keys are passed into the gb_trees part instead of 1 in 4
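A sketch of the second change, assuming a simple list of keys (the real structure indexes per-slot keys rather than a flat list):

```erlang
%% Insert only every 16th key into the gb_trees index; lookups find the
%% nearest indexed key and scan forward from there (illustrative).
build_sparse_index(Keys) ->
    {Index, _N} =
        lists:foldl(
            fun(Key, {Tree, N}) ->
                case N rem 16 of
                    0 -> {gb_trees:insert(Key, N, Tree), N + 1};
                    _ -> {Tree, N + 1}
                end
            end,
            {gb_trees:empty(), 0},
            Keys),
    Index.
```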
* Immediate hibernate
Reasons for delay in hibernate were not clear.
Straight after creation the process will not be in receipt of messages (must wait for the manifest to be updated), so better to hibernate now. This also means the log PC023 provides more accurate information.
* Refactor BIC
This patch avoids the following:
- repeated replacement of the same element in the BIC (via get_kvrange), by checking presence via GET before using SET
- Stops re-reading of all elements to discover high modified date
Also there appears to have been a bug whereby a missing HMD (high modified date) for the file was required in order to add to the cache. However, now the cache may be erased without erasing the HMD - meaning the cache could never be rebuilt.
* Use correct size in test results
erts_debug:flat_size/1 returns the size in words (i.e. 8 bytes on a 64-bit CPU), not bytes
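To report bytes, the result should be scaled by the emulator word size:

```erlang
flat_size_in_bytes(Term) ->
    erts_debug:flat_size(Term) * erlang:system_info(wordsize).
```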
* Don't change summary record
As it is persisted as part of the file write, any change to the summary record cannot be rolled back
* Clerk to prompt L0 write
Simplifies the logic: the clerk's request for work from the penciller now prompts L0 writes as well as Manifest changes.
The advantage now is that if the penciller memory is full, and PUT load stops, the clerk should still be able to prompt persistence. The penciller can therefore make use of dead time this way.
* Add push on journal compact
If there has been a backlog, followed by a quiet period - there may be a large ledger cache left unpushed. Journal compaction events are about once per hour, so the performance overhead of a false push should be minimal, with the advantage of clearing any backlog before load starts again.
This is only relevant to riak users with very on/off batch-type workloads.
* Extend tests
To more consistently trigger all overload scenarios
* Fix range keys smaller than prefix
Can't make end key an empty binary in this case, as it may be bigger than any keys within the range, but will appear to be smaller.
Unit tests and ct tests added to expose the potential issue
* Tidy-up
- Remove penciller logs which are no longer called
- Get pclerk to only wait MIN_TIMEOUT after doing work, in case there is a backlog
- Remove update_levelzero_cache function as it is unique to handle_call of push_mem, and simple enough to be inline
- Align testutil slow offer with the standard slow offer used
* Tidy-up
Remove pre-otp20 references.
Reinstate the check that the starting pid is still active; this was added to tidy up shutdown.
Resolve failure to run on otp20 due to `-if` statement
* Tidy up
Using null rather than {null, Key} is potentially clearer, as it is not a concern what the Key is in this case, and it removes a comparison step from the leveled_codec:endkey_passed/2 function.
There were issues with coverage in eunit tests as the leveled_pclerk shut down. This prompted a general tidy of leveled_pclerk (remove passing of LoopState into internal functions, and add dialyzer specs).
* Remove R16 relic
* Further testing another issue
The StartKey must always be less than or equal to the prefix when the first N characters are stripped, but this is not true of the EndKey (for the query), which does not have to be between the FirstKey and the LastKey.
If the EndKey for the query does not match the prefix, it must be greater than the Prefix (as otherwise it would not have been greater than the FirstKey) - so set it to null.
* Fix unit test
Unit test had a typo - and result interpretation had a misunderstanding.
* Code and spec tidy
Also look to the cover the situation when the FirstKey is the same as the Prefix with tests.
This is, in theory, not an issue as it is the EndKey for each sublist which is indexed in leveled_tree. However, guard against it mapping to null here, just in case there are dangers lurking (note that tests will still pass without the `M > N` guard in place).
* Hibernate on BIC complete
There are three situations when the BIC becomes complete:
- In a file created as part of a merge, the BIC is learned in the merge
- After startup, files below L1 learn the block cache through reads that happen to read the block; eventually the whole cache will be read, unless...
- Either before/after the cache is complete, it can get wiped by a timeout after a get_sqn request (e.g. as prompted by a journal compaction) ... it will then be re-filled off the back of get/get-range requests.
In all these situations we want to hibernate after the BIC is full - to reflect the fact that the LoopState should now be relatively stable, so it is a good point to GC and rationalise the location of data.
Previously only the first case was covered. Now all three are covered through the bic_complete message.
* Test all index keys have same term
This works functionally, but is not optimised (the term is replicated in the index)
* Summaries with same index term
If the summary index entries all have the same index term - only the object keys need to be indexed
* Simplify case statements
We either match the pattern of <<Prefix:N, Suffix>> or the answer should be null
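The shape of that case statement, sketched (variable names are illustrative):

```erlang
strip_prefix(Prefix, Key) ->
    N = byte_size(Prefix),
    case Key of
        <<P:N/binary, Suffix/binary>> when P =:= Prefix ->
            Suffix;
        _ ->
            null  % key does not share the common prefix
    end.
```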
* OK for M == N
If M == N for the first key, it will have a suffix of <<>>. This will match (as expected) a query Start Key of the same size, and be smaller than any query Start Key that has the same prefix.
If the query Start Key does not match the prefix - it will be null - as it must be smaller than the Prefix (as otherwise the query Start Key would be bigger than the Last Key).
The constraint of M > N was introduced before the *_prefix_filter functions were checking the prefix, to avoid issues. Now the prefix is being checked, then M == N is ok.
* Simplify
Correct the test to use a binary field in the range.
To avoid further issues, only apply the filter when everything is a binary() type.
* Add test for head_only mode
When leveled is used as a tictacaae key store (in parallel mode), the keys will be head_only entries. Double check they are handled as expected like object keys
* Revert previous change - must support typed buckets
Add assertion to confirm worthwhile optimisation
* Add support for configurable cache multiple (#375)
* Mas i370 patch e (#385)
Improvement to monitoring for efficiency and improved readability of logs and stats.
As part of this, where possible, tried to avoid updating loop state on READ messages in leveled processes (as was the case when tracking stats within each process).
No performance benefits found with change, but improved stats has helped discover other potential gains.
* Query don't copy
Queries the manifest to avoid copying the whole manifest when taking a snapshot of a penciller to run a query.
Change the logging of fold setup in the Bookie to record the actual snapshot time (rather than the uninteresting and fast-returning time of the function which will request the snapshot).
A little tidy to avoid duplicating the ?MAX_LEVELS macro.
* Clarify log is of snapshot time not fold time
* Updates after review
* Allow confirmed deletions to complete when manifest is not lockable
Previously if there was ongoing work (i.e. the clerk had control over the manifest), the penciller could not confirm deletions. Now it may confirm, and defer the required manifest update to a later date (prompted by another delete confirmation request).
* Refactor to update manifest on return of manifest from clerk
Rather than waiting on next delete confirmation request
* Update src/leveled_pmanifest.erl
Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
* Missing commit
Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
Previously delete_confirmation was blocked on work_ongoing.
However, if the penciller has a work backlog, work_ongoing may be a recurring problem ... and some files may remain undeleted long after their use - lifetimes for L0 files in particular have been seen to rise from 10-15s to 5m+.
Letting L0 files linger can have a significant impact on memory. In put-heavy tests (e.g. when testing riak-admin transfers) the memory footprint of a riak node has been observed peaking more than 80% above normal levels, when compared to using this patch.
This PR allows for deletes to be confirmed even when there is work ongoing, by postponing the updating of the manifest until the manifest is next returned from the clerk.
Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
* Don't use fetch_cache below the page_cache level
* Don't time fetches due to SQN checks
SQN checks are all background processes
* Hibernate on SQN check
SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction. Use this to trigger hibernation where SST files stay quiet after the compaction check.
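Sketched as a gen_statem state-function clause that appends the hibernate action to the reply (sqn_present/2 is a hypothetical helper):

```erlang
reader({call, From}, {check_sqn, SQN}, StateData) ->
    Reply = sqn_present(SQN, StateData),  % hypothetical helper
    {keep_state_and_data, [{reply, From, Reply}, hibernate]}.
```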
* Add catch for hibernate timeout
* Scale cache_size with level
Based on volume testing. Relatively speaking, there is far higher value to be gained from caches at higher levels (lower numbered levels). The caches at lower levels are proportionally much less efficient - so cache more at higher levels, where there is value, and less at lower levels, where there is more cost relative to value.
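The idea, as an illustrative-only scaling function - the real multipliers were chosen by volume testing, not this formula:

```erlang
cache_size_for_level(Level, BaseSize) when Level =< 1 ->
    BaseSize;                              % full cache near the top
cache_size_for_level(Level, BaseSize) ->
    max(1, BaseSize div (Level * Level)).  % shrink caches deeper down
```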
* OTP 24 fix to cherry-pick
* Make minimal change to previous setup
Making significant change appears to not have had the expected positive improvement - so a more minimal change is proposed.
The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but no cache still has a downside.
* Introduce new types
* Mas i370 d30 sstmemory (#374)
* More memory management
Clear blockindex_cache on timeout, and manually GC on pclerk after work.
* Add further garbage collection prompt
After fetching level zero, significant change in references in the penciller memory, so prompt a garbage_collect() at this point.
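The prompt itself is just a call to garbage_collect/0 at the point the references are dropped - a sketch with hypothetical function names:

```erlang
handle_fetch_levelzero(Reply, State) ->
    UpdatedState = clear_levelzero_refs(Reply, State),  % hypothetical
    garbage_collect(),  % many refs just dropped - reclaim heap promptly
    {noreply, UpdatedState}.
```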
* Double size of L4 files
And double max efficient size of leveled_ebloom
* Revert penciller shape
But expand file size at L3
* More concise version
Following code review
* OTP 24 dialyzer fix
Bindings intended to match - so don't use underscore
* Allow eqc tests to work from `rebar3 as eqc shell`
Then `eqc:quickcheck(leveled_statemeqc:prop_db()).`
Plus markdown tidy
Resolve issue with OTP 22 performance https://github.com/martinsumner/leveled/issues/326 - by changing references to loop state.
The test perf_SUITE proves the issue.
OTP 22, without fixes:
Fold pre-close 41209 ms post-close 688 ms
OTP 22, with fixes:
Fold pre-close 401 ms post-close 317 ms
It might be necessary to have a low penciller cache size. However, currently the upper bound of that cache size can be very high, even when a low cache size is set. This is due to the coin tossing done to prevent co-ordination of L0 persistence across parallel instances of leveled.
The aim here is to reduce that upper bound, so that any environment having problems due to lack of memory or https://github.com/martinsumner/leveled/issues/326 can more strictly enforce a lower maximum in the penciller cache size.
Not within the fold fun of the leveled_runner - this should avoid constantly having to re-merge and filter the penciller memory when running list_buckets and hitting inactive keys.
Change the penciller check so that it returns current/replaced/missing not just true/false.
Reduce unnecessary penciller checks for non-standard keys that will always be retained - and remove redundant code.
Expand tests of retain and recover to make sure that compaction on delete is well covered.
Also move the SQN number along during initial loads - to stop an aggressive loop to find the starting SQN for every file.
This is now done via an async message-passing loop between the penciller and the new SST file. This way, when the penciller shuts down, it can call close on a L0 file that is awaiting a fetch - rather than be trapped in deadlock.
The deadlock otherwise occurs if a penciller is sent a close immediately after it has prompted a new level zero.
Make sure there is no change pending regardless of why maybe_roll_memory has been called.
Also, check that the manifest SQN has been incremented before accepting the change.
Conflict here would lead to data loss in the penciller, so extra safety is important.
The Journal snapshot is not a true snapshot, in that the active file in the snapshot can still be taking appends. So when folding over the snapshot, it is necessary to check that the SQN is <= the JournalSQN at the point the snapshot was taken.
Normally consistency of the snapshot is managed as the operation depends on the penciller, and the penciller *is* a snapshot. Not in this case, as the penciller will return true on a SQN check if the pcl SQN is behind the Journal. So the Journal folder has been given an additional check to stop at the JournalSQN.
This is perhaps a fault in the pcl SQN check, which should only return true on an exact match? I'm nervous about changing this though, so we have a less pure fix for now.
A test that will cause leveled to crash due to a low cache size being set - but protect against this (as well as the general scenario of the cache being full).
There could be a potential case where a L0 file is present (post pending) without work backlog being set. In this case we want to roll the level zero to memory, but not accept the cache update if the L0 cache is already full.
Will not lead to immediate run time changes in SST or CDB logs. These log settings will only change once the new files are re-written.
To completely change the log level - a restart of the store is necessary with new startup options.
This was previously not an issue as leveled_codec:segment_hash/1 would handle anything that could be hashed. This now has to be a tuple, and one with a first element - so corrupted tuples are failing.
Add a guard checking for a corrupted tuple, but we only need this when doing journal compaction.
Change user_defined keys to be `retain` as a tag strategy
This allows for all fold functions to throw an exception to exit out of a fold with all dependencies still closed down as expected.
This was previously available for key folds, which was necessary for the folds to work in Riak (as max_results in index queries depends on exiting the fold with an exception). This change now adds a ct test, and adds support for head folds, object folds (key order) and object folds (sqn order).
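The pattern, sketched - the runner is assumed to catch the throw after releasing its snapshot:

```erlang
%% A fold fun that exits early once enough results are accumulated.
FoldFun =
    fun(_B, _K, Acc) when length(Acc) >= 100 ->
            throw({stop_fold, Acc});  % exit the fold early
       (B, K, Acc) ->
            [{B, K} | Acc]
    end.
```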