Previously delete_confirmation was blocked on work_ongoing.
However, if the penciller has a work backlog, work_ongoing may be a recurring problem ... and some files may remain undeleted long after their use - lifetimes for L0 files in particular have been seen to rise from 10-15s to 5m+.
Letting L0 files linger can have a significant impact on memory. In put-heavy tests (e.g. when testing riak-admin transfers) the memory footprint of a riak node has been observed peaking more than 80% above the levels seen with this patch.
This PR allows for deletes to be confirmed even when there is work ongoing, by postponing the updating of the manifest until the manifest is next returned from the clerk.
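For illustration, a minimal sketch of the deferral - all message, function and field names here are hypothetical stand-ins for the leveled_penciller internals:

```erlang
%% Sketch only - names are illustrative, not the actual implementation.
%% The delete is confirmed to the file process immediately; only the
%% manifest update is deferred while the clerk holds the manifest.
handle_cast({confirm_delete, Filename, FilePid}, State) ->
    leveled_sst:sst_deleteconfirmed(FilePid),
    case State#state.work_ongoing of
        false ->
            %% No work in flight - update the manifest now
            {noreply, remove_from_manifest(Filename, State)};
        true ->
            %% Defer the manifest change until the clerk next
            %% returns the manifest
            Pending = [Filename | State#state.pending_removals],
            {noreply, State#state{pending_removals = Pending}}
    end.
```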
Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
* Don't use fetch_cache below the page_cache level
* Don't time fetches due to SQN checks
SQN checks are all background processes
* Hibernate on SQN check
SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction. Use this to trigger hibernation, so that SST files stay quiet after the compaction check.
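As a sketch of the mechanism (standard OTP behaviour, though the callback shape here is illustrative rather than the actual leveled_sst code), a process can request hibernation in its callback return:

```erlang
%% Sketch: reply to the background SQN check, then hibernate - the
%% check is typically the last touch before a quiet period, so the
%% process compacts its heap rather than idling with a large one.
handle_call({check_sqn, Key, SQN}, _From, State) ->
    Reply = sqn_present(Key, SQN, State),  % illustrative helper
    {reply, Reply, State, hibernate}.
```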
* Add catch for hibernate timeout
* Scale cache_size with level
Based on volume testing. Relatively speaking, there is far higher value to be gained from caches at higher levels (lower-numbered levels). The caches at lower levels are proportionally much less efficient, so cache more at higher levels, where there is value, and less at lower levels, where there is more cost relative to value.
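A sketch of the shape of such scaling (the numbers are illustrative, not the committed values):

```erlang
%% Sketch: larger caches at higher levels (lower level numbers),
%% shrinking towards the bottom of the tree where the hit-rate no
%% longer justifies the memory cost.
cache_size_by_level(Level) when Level =< 1 -> 64;
cache_size_by_level(2) -> 32;
cache_size_by_level(3) -> 16;
cache_size_by_level(_DeeperLevel) -> 4.
```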
* OTP 24 fix to cherry-pick
* Make minimal change to previous setup
Making a significant change appears not to have had the expected positive improvement - so a more minimal change is proposed.
The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but no cache still has a downside.
* Introduce new types
* Mas i370 d30 sstmemory (#374)
* More memory management
Clear blockindex_cache on timeout, and manually GC on pclerk after work.
* Add further garbage collection prompt
After fetching level zero, there is a significant change in the references held in the penciller memory, so prompt a garbage_collect() at this point.
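A sketch of the kind of prompt added - the hook point and helpers are illustrative, but garbage_collect/0 is the standard BIF:

```erlang
%% Sketch: after the L0 fetch the process drops most references into
%% the fetched data, so an explicit collection reclaims the heap
%% immediately rather than waiting for the next natural GC.
fetch_levelzero(Source, State) ->
    L0Data = do_fetch_levelzero(Source),        % illustrative helper
    UpdState = merge_levelzero(L0Data, State),  % illustrative helper
    garbage_collect(),                          % manual GC prompt
    UpdState.
```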
* Amend defaults
3.0.9 testing has used different defaults for cache_size and penciller_cache_size. There has been no noticeable drop in the performance of Riak using these defaults, so adopting here.
The new defaults have a lower memory requirement per vnode, which is useful as recent changes and performance test results are changing the standard recommendations for ring_size.
It is now preferred to choose larger ring_sizes by default (e.g. 256 for 6-12 nodes, 512 for 12-20 nodes, 1024 for 20+).
By choosing larger ring sizes, the benefits of scaling up a cluster will continue to make a difference even as the node count goes beyond the recommended "correct" setting, e.g. it might be reasonable (depending on hardware choices) to grow a ring_size=512 cluster to beyond 30 nodes, while still being relatively efficient and performant at 8 nodes.
* Comment correction
* Resolve hash-of-0 confusion (#362)
* Resolve hash-of-0 confusion
Ensure that a hash of 0 is not confused with an empty index entry (where both hash and position are 0). For a genuine entry, the position must always be non-zero.
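A sketch of the disambiguation (field widths are illustrative):

```erlang
%% Sketch: both fields zero means "unused slot"; a genuine entry
%% always has a non-zero position, so a real hash of 0 can still be
%% told apart from an empty index entry.
parse_index_entry(<<0:24/integer, 0:8/integer>>) ->
    empty;
parse_index_entry(<<Hash:24/integer, Pos:8/integer>>) when Pos > 0 ->
    {entry, Hash, Pos}.  % valid even when Hash == 0
```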
* Use little endian format directly
... instead of extracting in big-endian format and then flipping (or flipping before writing).
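In Erlang bit syntax this is a single-step read; a sketch with an illustrative field width:

```erlang
%% Sketch: read the on-disk value in little-endian form directly,
%% rather than extracting big-endian and then byte-swapping.
read_hash(<<Hash:32/little-unsigned-integer, Rest/binary>>) ->
    {Hash, Rest}.
```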
* OTP 24 dialyzer fix in test
This may be due to a situation whereby a second confirm_delete has been sent before the sst_deleteconfirmed has been received from the first request.
This will now cause inert action rather than crashing.
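A sketch of the inert handling (names as in the earlier sketch, still hypothetical):

```erlang
%% Sketch: a repeat confirm_delete for a file whose removal is
%% already pending is ignored, rather than crashing on a failed match.
handle_cast({confirm_delete, Filename, FilePid}, State) ->
    case lists:member(Filename, State#state.pending_removals) of
        true ->
            {noreply, State};  % duplicate request - inert action
        false ->
            leveled_sst:sst_deleteconfirmed(FilePid),
            Pending = [Filename | State#state.pending_removals],
            {noreply, State#state{pending_removals = Pending}}
    end.
```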
* Double size of L4 files
And double max efficient size of leveled_ebloom
* Revert penciller shape
But expand file size at L3
* More concise version
Following code review
* OTP 24 dialyzer fix
Bindings intended to match - so don't use underscore
* Allow eqc tests to work from `rebar3 as eqc shell`
Then `eqc:quickcheck(leveled_statemeqc:prop_db()).`
Plus markdown tidy
Currently a sync call is made from cdb to inker when confirming deletion (checking no snapshots have an outstanding requirement to access the file), whereas an async call is made from sst to penciller to achieve the same thing.
Due to the potential of timeouts and crashes - the cdb/inker delete confirmation process is now based on async message passing, making it consistent with the sst/penciller delete confirmation process.
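A sketch of the async shape (function and message names illustrative):

```erlang
%% Sketch: the cdb file casts its request, and the inker replies with
%% a plain message once no snapshot still requires the file - there is
%% no blocking call, so no call timeout to hit.
request_delete(Inker, FileName) ->
    gen_server:cast(Inker, {confirm_delete, FileName, self()}).

%% In the inker:
handle_cast({confirm_delete, FileName, CDBPid}, State) ->
    case snapshots_using(FileName, State) of  % illustrative helper
        [] -> CDBPid ! delete_confirmed;      % async confirmation
        _  -> ok                              % cdb will ask again later
    end,
    {noreply, State}.
```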
* Add EQC profile
Don't run EQC tests unless specifically requested, e.g. `./rebar3 as eqc eunit --module=leveled_eqc`
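A hedged sketch of the rebar.config shape that supports this (the exact profile contents in the repo may differ):

```erlang
%% Sketch: EQC-only test code is gated behind an 'eqc' profile, so
%% plain `rebar3 eunit` skips it and `rebar3 as eqc eunit` runs it.
{profiles,
 [{eqc,
   [{erl_opts, [{d, 'EQC'}]}  % illustrative macro gate
   ]}]}.
```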
* Remove eqc_cover parse_transform
Causes a compilation issue with the assertException in leveled_runner
* Allow EQC test to compile
EQC only works on OTP 22 for now, but other tests should still work on OTP 22 and OTP 24
* Add more complex statem based eqc test
* Add check for eqc profile
* Address OTP24 warnings, ct and eunit paths
* Reorg to add OTP 24 support
* Update VOLUME.md
* Correct broken refs
* Update README.md
* CI on all main branches
Co-authored-by: Ulf Wiger <ulf@wiger.net>
Resolve issue with OTP 22 performance https://github.com/martinsumner/leveled/issues/326 - by changing references to the loop state.
The test perf_SUITE proves the issue.
OTP 22, without fixes:
Fold pre-close 41209 ms, post-close 688 ms
OTP 22, with fixes:
Fold pre-close 401 ms, post-close 317 ms
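A sketch of the kind of change (names illustrative): build closures over only the fields they need, rather than over the whole loop state:

```erlang
%% Sketch: a fun that captures State keeps every field (including
%% large caches) reachable for the lifetime of the fold on OTP 22;
%% copying out the one needed field avoids that.
make_folder(State) ->
    Manifest = State#state.manifest,  % capture just this field
    fun(Acc) -> fold_manifest(Manifest, Acc) end.  % illustrative helper
```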
It might be necessary to have a low penciller cache size; however, currently the upper bound of that cache size can be very high, even when a low cache size is set. This is due to the coin-tossing done to prevent co-ordination of L0 persistence across parallel instances of leveled.
The aim here is to reduce that upper bound, so that any environment having problems due to lack of memory or https://github.com/martinsumner/leveled/issues/326 can more strictly enforce a lower maximum on the penciller cache size.
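A sketch of a bounded coin-toss (the bound and trigger shape are illustrative):

```erlang
%% Sketch: random headroom still de-coordinates L0 persistence across
%% parallel instances, but is now bounded relative to the configured
%% maximum, so a low cache size can be more strictly enforced.
maybe_roll_memory(CacheSize, MaxCacheSize) ->
    Jitter = rand:uniform(max(1, MaxCacheSize div 10)),
    CacheSize > MaxCacheSize + Jitter.
```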
Not within the fold fun of the leveled_runner.
This should avoid constantly having to re-merge and filter the penciller memory when running list_buckets and hitting inactive keys
In production scale testing, placing the check_modified call on get_kvrange not get_slots made the performance difference.
It should help in get_slots as well, but it has not been possible to reliably get coverage in tests for this. So for now, this will be left off until a proper test can be constructed which demonstrates any benefits.
When scanning over a leveled store with a helper (e.g. segment filter and last modified date range), applying the filter will speed up the query when the block index cache is available to get_slots.
Previously, if it was not available, leveled_sst did not promote the cache after it had accessed the underlying blocks.
Now the code does this; and once the cache has all been added, it extracts the largest last-modified date, so that sst files older than the passed-in date can be immediately dismissed.
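A sketch of the promotion step (helpers and state fields are illustrative):

```erlang
%% Sketch: cache the block index read from disk; once every slot is
%% cached, record the file's highest last-modified date so later
%% modified-range queries can dismiss the whole file up front.
update_blockindex_cache(Slot, BlockIndex, State) ->
    Cache = array:set(Slot, BlockIndex, State#state.blockindex_cache),
    case cache_complete(Cache) of                % illustrative helper
        true ->
            HighLMD = max_last_modified(Cache),  % illustrative helper
            State#state{blockindex_cache = Cache,
                        high_modified_date = HighLMD};
        false ->
            State#state{blockindex_cache = Cache}
    end.
```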
Use the same function to decide for both scoring and compaction - and avoid the situation where something is scored for compaction, but doesn't change (which was the case previously with tombstones that were still in the ledger).
Move the average towards the current score if not scoring each run. Score from more keys to get a better score (as the overheads of scoring are now better managed by setting score_onein rather than by reducing the sample size).
Potentially reduce the overheads of scoring each file on every run.
The change also alters the default thresholds for compaction to favour longer runs (which will tend towards greater storage efficiency).
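A sketch of moving the cached score towards the latest actual score (the weighting is illustrative, not the committed value):

```erlang
%% Sketch: when a file is only re-scored one run in N (score_onein),
%% blend the cached score with the latest actual score rather than
%% replacing or freezing it.
update_score(CachedScore, LatestScore) ->
    0.7 * CachedScore + 0.3 * LatestScore.
```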