The SQN check in the penciller is used for journal (all-object) folds, but mainly for journal compaction. Use this check as a trigger for hibernation, so that SST files which stay quiet after the compaction check can hibernate.
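As a rough sketch of the intent (module, state and function names here are illustrative assumptions, not the actual leveled_sst code), a gen_statem-based file process can request hibernation via its transition actions once the check has been answered:

```erlang
-module(sst_hibernate_sketch).
-behaviour(gen_statem).
-export([start_link/0, init/1, callback_mode/0, handle_event/4]).

%% Illustrative only: a quiet SST file process hibernates after
%% answering the compaction-related SQN check, reclaiming heap memory
%% until the next message arrives.
start_link() -> gen_statem:start_link(?MODULE, [], []).
init([]) -> {ok, reader, #{}}.
callback_mode() -> handle_event_function.

handle_event({call, From}, {sqn_check, SQN}, reader, Data) ->
    %% the reply value here stands in for the real check result
    Reply = {sqn_checked, SQN},
    {keep_state, Data, [{reply, From, Reply}, hibernate]}.
```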
* Amend defaults
Testing of 3.0.9 has used different defaults for cache_size and penciller_cache_size. There has been no noticeable drop in the performance of Riak with these defaults, so they are adopted here.
The new defaults have a lower memory requirement per vnode, which is useful as recent changes and performance test results are changing the standard recommendations for ring_size.
It is now preferred to choose larger ring_sizes by default (e.g. 256 for 6-12 nodes, 512 for 12-20 nodes, 1024 for 20+).
By choosing larger ring sizes, the benefits of scaling up a cluster will continue to be realised even as the node count goes beyond the recommended "correct" setting. For example, it might be reasonable (depending on hardware choices) to grow a ring_size=512 cluster beyond 30 nodes, while still being relatively efficient and performant at 8 nodes.
* Comment correction
* Resolve confusion over a hash of 0 (#362)
Ensure that a hash of 0 is not confused with an empty index entry (where both hash and position are 0). For a genuine entry, the position must always be non-zero.
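A minimal sketch of the rule (assuming a CDB-style index where each slot holds a hash and a file position; this is not the leveled_cdb code itself):

```erlang
-module(cdb_slot_sketch).
-export([is_empty_slot/2]).

%% An empty slot has both hash and position set to 0; a genuine entry
%% always has a non-zero position, so the position is the reliable
%% test even when the stored hash is legitimately 0.
is_empty_slot(_Hash, 0) ->
    true;
is_empty_slot(_Hash, Position) when is_integer(Position), Position > 0 ->
    false.
```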
* Use little endian format directly
... instead of extracting in big-endian format and then flipping (or flipping before writing)
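For illustration (a sketch, not the actual leveled_cdb read path), Erlang's bit syntax can decode little-endian in a single match:

```erlang
-module(endian_sketch).
-export([read_entry/1]).

%% Read a 32-bit hash and position as little-endian directly, rather
%% than reading big-endian and byte-swapping afterwards.
read_entry(<<Hash:32/little-unsigned-integer,
             Position:32/little-unsigned-integer,
             Rest/binary>>) ->
    {Hash, Position, Rest}.
```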
* OTP 24 dialyzer fix in test
This may be due to a situation whereby a second confirm_delete has been sent before the sst_deleteconfirmed response to the first request has been received.
This will now cause an inert action rather than a crash.
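A sketch of the pattern (the record and function names are hypothetical, not the actual leveled_sst callbacks):

```erlang
-module(confirm_sketch).
-export([handle_confirm/2]).

-record(state, {deleted = false :: boolean()}).

%% A duplicate confirmation for a file already marked as deleted is
%% absorbed as a no-op rather than crashing the process.
handle_confirm(sst_deleteconfirmed, #state{deleted = true} = State) ->
    {noreply, State};
handle_confirm(sst_deleteconfirmed, State) ->
    {stop, normal, State#state{deleted = true}}.
```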
* Double size of L4 files
And double the maximum efficient size of leveled_ebloom
* Revert penciller shape
But expand file size at L3
* More concise version
Following code review
* OTP 24 dialyzer fix
Bindings intended to match - so don't use underscore
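For illustration (a made-up example of the pattern, not the test in question): an underscore-prefixed variable still binds, so re-using it as a match is a deliberate assertion and should not carry the prefix.

```erlang
-module(underscore_sketch).
-export([check/2]).

%% The second occurrence of Key is intended to assert equality; with
%% an underscore prefix (_Key) tooling treats the variable as
%% ignorable, so the binding is named without the prefix.
check(Obj, Store) ->
    Key = element(1, Obj),
    {Key, Value} = Store,   %% deliberate match against the bound Key
    Value.
```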
* Allow eqc tests to work from `rebar3 as eqc shell`
Then `eqc:quickcheck(leveled_statemeqc:prop_db()).`
Plus markdown tidy
Currently a sync call is made from cdb to inker when confirming deletion (checking that no snapshots have an outstanding requirement to access the file), whereas an async call is made from sst to penciller to achieve the same thing.
Due to the potential for timeouts and crashes, the cdb/inker delete confirmation process is now based on async message passing, making it consistent with the sst/penciller delete confirmation process.
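Sketch of the shape of the change (the message formats and function names are illustrative assumptions, not the exact leveled_inker API):

```erlang
-module(async_confirm_sketch).
-export([request_confirm/2, handle_reply/1]).

%% Previously the cdb process blocked on a synchronous call such as:
%%   ok = gen_server:call(Inker, {confirm_delete, FileName}, infinity)
%% Now it casts the request and carries on; the inker sends a
%% confirmation message back once no snapshot still needs the file.
request_confirm(Inker, FileName) ->
    gen_server:cast(Inker, {confirm_delete, FileName, self()}).

handle_reply({delete_confirmed, FileName}) ->
    {proceed_with_delete, FileName}.
```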
* Add EQC profile
Don't run EQC tests unless specifically requested, e.g. `./rebar3 as eqc eunit --module=leveled_eqc`
* Remove eqc_cover parse_transform
Causes a compilation issue with the assertException in leveled_runner
* Allow EQC test to compile
EQC only works on OTP 22 for now, but other tests should still work on OTP 22 and OTP 24
* Add more complex statem based eqc test
* Add check for eqc profile
* Address OTP24 warnings, ct and eunit paths
* Reorg to add OTP 24 support
* Update VOLUME.md
* Correct broken refs
* Update README.md
* CI on all main branches
Co-authored-by: Ulf Wiger <ulf@wiger.net>
Resolve an issue with OTP 22 performance (https://github.com/martinsumner/leveled/issues/326) by changing references to the loop state.
The perf_SUITE test demonstrates the issue.
OTP 22, without fixes:
Fold pre-close 41209 ms, post-close 688 ms
OTP 22, with fixes:
Fold pre-close 401 ms, post-close 317 ms
It might be necessary to have a low penciller cache size; however, currently the upper bound of that cache size can be very high, even when a low cache size is set. This is due to the coin tossing done to prevent co-ordination of L0 persistence across parallel instances of leveled.
The aim here is to reduce that upper bound, so that any environment having problems due to lack of memory, or to https://github.com/martinsumner/leveled/issues/326, can more strictly enforce a lower maximum on the penciller cache size.
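The idea, as an illustrative sketch (the ceiling multiple and names are assumptions, not the actual leveled_penciller logic):

```erlang
-module(l0_push_sketch).
-export([maybe_push/2]).

%% A coin toss stops parallel vnodes persisting their L0 caches in
%% lock-step, but the cache is now given a hard ceiling (assumed here
%% to be twice the configured size) rather than an effectively
%% unbounded maximum.
maybe_push(CurrentSize, CacheSize) when CurrentSize >= CacheSize * 2 ->
    push;                          %% ceiling reached - always push
maybe_push(CurrentSize, CacheSize) when CurrentSize >= CacheSize ->
    case rand:uniform(2) of        %% coin toss above the soft limit
        1 -> push;
        2 -> wait
    end;
maybe_push(_CurrentSize, _CacheSize) ->
    wait.
```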
Not within the fold fun of the leveled_runner.
This should avoid constantly having to re-merge and filter the penciller memory when running list_buckets and hitting inactive keys.
In production-scale testing, placing the check_modified call on get_kvrange rather than get_slots made the performance difference.
It should help in get_slots as well, but it has not been possible to reliably get test coverage for this. So for now, it is left off until a proper test can be constructed which demonstrates the benefits.
When scanning over a leveled store with a helper (e.g. a segment filter and last-modified date range), applying the filter will speed up the query when the block index cache is available to get_slots.
If it is not available, the leveled_sst previously did not promote the cache after it had accessed the underlying blocks.
Now the code does this, and once the cache has been fully populated it also extracts the largest last-modified date, so that sst files older than the passed-in date can be dismissed immediately.
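Sketched as a simple guard (names assumed): with the block index cache complete, the file's maximum last-modified date is known up front, so a query carrying a low last-modified bound can skip the whole file without touching any blocks.

```erlang
-module(lmd_sketch).
-export([dismissable/2]).

%% True if every object in the file was last modified before the
%% query's low date, so the whole file can be skipped.
dismissable(MaxLMD, LowLastModDate)
        when is_integer(MaxLMD), is_integer(LowLastModDate) ->
    MaxLMD < LowLastModDate.
```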
Use the same function to decide for both scoring and compaction, and avoid the situation where something is scored for compaction but doesn't change (which was previously the case with tombstones that were still in the ledger).
Move the average towards the current score when not scoring on each run. Score from more keys to get a better score (as the overheads of scoring are now better managed by setting score_onein rather than by reducing the sample size).
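As a sketch of the arithmetic (the weighting is an assumption for illustration, not the value used in leveled_iclerk):

```erlang
-module(score_sketch).
-export([blend/2]).

%% When a file is only re-scored one-in-N runs, drift the retained
%% score towards the most recently computed score rather than holding
%% it static, so stale scores decay over time.
blend(CachedScore, LatestScore) ->
    (CachedScore * 3 + LatestScore) / 4.
```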
Potentially reduce the overheads of scoring each file on every run.
The change also alters the default thresholds for compaction to favour longer runs (which will tend towards greater storage efficiency).