Commit graph

1601 commits

Author SHA1 Message Date
Martin Sumner
314a4a9b07 Update timings - not replace (#392)
So that the sample period will not be reset on every update
2023-01-26 21:47:42 +00:00
Martin Sumner
a01c74f268 Mas i389 rebuildledger (#390)
* Protect penciller from empty ledger cache updates

which may occur when loading the ledger from the journal, after the ledger has been cleared.

* Score caching and randomisation

The test allkeydelta_journal_multicompact can occasionally fail when a compaction doesn't happen on one loop, but then does on the next.  Suspect this is a result of score caching, randomisation of key grabs for scoring, plus jitter on size boundaries.

Modified test for predictability.

Plus formatting changes

* Avoid small batches

Avoid small batches due to large SQN gaps

* Rationalise tests

Two tests overlap with the new, much broader, replace_everything/1 test.  Ported over any remaining checks of interest and dropped the two tests.
2023-01-18 11:45:10 +00:00
Martin Sumner
a033e280e6
Develop 3.1 d30update (#386)
* Mas i370 patch d (#383)

* Refactor penciller memory

In high-volume tests on large key-count clusters, some significant variation in the P0031 time has been seen:

TimeBucket	PatchA
a.0ms_to_1ms	18554
b.1ms_to_2ms	51778
c.2ms_to_3ms	696
d.3ms_to_5ms	220
e.5ms_to_8ms	59
f.8ms_to_13ms	40
g.13ms_to_21ms	364
h.21ms_to_34ms	277
i.34ms_to_55ms	34
j.55ms_to_89ms	17
k.89ms_to_144ms	21
l.144ms_to_233ms	31
m.233ms_to_377ms	45
n.377ms_to_610ms	52
o.610ms_to_987ms	59
p.987ms_to_1597ms	55
q.1597ms_to_2684ms	54
r.2684ms_to_4281ms	29
s.4281ms_to_6965ms	7
t.6965ms_to_11246ms	1

It is unclear why this varies so much.  The time to add to the cache appears to be minimal (but perhaps there is an issue with timing points in the code), whereas the time to add to the index is much more significant and variable.  There is also variable time when the memory is rolled (although the actual activity here appears to be minimal).

The refactoring here is two-fold:

- tidy and simplify by keeping LoopState managed within handle_call, and add more helpful dialyzer specs;

- change the update to the index to be a simple extension of a list, rather than any conversion.

This alternative version of the pmem index is, in unit tests, orders of magnitude faster to add to - and the same order of magnitude to check.  The anticipation is that it may be more efficient in terms of memory changes.

* Compress SST index

Reduces the size of the leveled_sst index with two changes:

1 - Where there is a common prefix of tuple elements (e.g. Bucket) across the whole leveled_sst file - only the non-common part is indexed, and a function is used to compare.

2 - There is less "indexing" of the index, i.e. only 1 in 16 keys are passed into the gb_trees part instead of 1 in 4

* Immediate hibernate

Reasons for delay in hibernate were not clear.

Straight after creation the process will not be in receipt of messages (must wait for the manifest to be updated), so better to hibernate now.  This also means the log PC023 provides more accurate information.

* Refactor BIC

This patch avoids the following:

- repeated replacement of the same element in the BIC (via get_kvrange), by checking presence via GET before using SET

- Stops re-reading of all elements to discover high modified date

Also there appears to have been a bug whereby the HMD for the file had to be missing in order to add to the cache.  However, now the cache may be erased without erasing the HMD.  This means that the cache can never be rebuilt

* Use correct size in test results

erts_debug:flat_size/1 returns size in words (i.e. 8 bytes on a 64-bit CPU), not bytes

* Don't change summary record

As it is persisted as part of the file write, any change to the summary record cannot be rolled back

* Clerk to prompt L0 write

Simplifies the logic: the clerk's request for work from the penciller now prompts L0 writes as well as Manifest changes.

The advantage now is that if the penciller memory is full, and PUT load stops, the clerk should still be able to prompt persistence.  The penciller can therefore make use of dead time this way

* Add push on journal compact

If there has been a backlog, followed by a quiet period - there may be a large ledger cache left unpushed.  Journal compaction events are about once per hour, so the performance overhead of a false push should be minimal, with the advantage of clearing any backlog before load starts again.

This is only relevant to riak users with very off/full batch type workloads.

* Extend tests

To more consistently trigger all overload scenarios

* Fix range keys smaller than prefix

Can't make the end key an empty binary in this case, as it may be bigger than any keys within the range, but will appear to be smaller.

Unit tests and ct tests added to expose the potential issue

* Tidy-up

- Remove penciller logs which are no longer called
- Get pclerk to only wait MIN_TIMEOUT after doing work, in case there is a backlog
- Remove update_levelzero_cache function as it is unique to handle_call of push_mem, and simple enough to be inline
- Align testutil slow offer with the standard slow offer used

* Tidy-up

Remove pre-otp20 references.

Reinstate the check that the starting pid is still active; this was added to tidy up shutdown.

Resolve failure to run on otp20 due to `-if` statement

* Tidy up

Using null rather than {null, Key} is potentially clearer as it is not a concern what the Key is in this case, and removes a comparison step from the leveled_codec:endkey_passed/2 function.

There were issues with coverage in eunit tests as the leveled_pclerk shut down.  This prompted a general tidy of leveled_pclerk (remove passing of LoopState into internal functions, and add dialyzer specs).

* Remove R16 relic

* Further testing another issue

The StartKey must always be less than or equal to the prefix when the first N characters are stripped,  but this is not true of the EndKey (for the query) which does not have to be between the FirstKey and the LastKey.

If the EndKey query does not match, it must be greater than the Prefix (as otherwise it would not have been greater than the FirstKey) - so set to null.

* Fix unit test

Unit test had a typo - and result interpretation had a misunderstanding.

* Code and spec tidy

Also look to the cover the situation when the FirstKey is the same as the Prefix with tests.

This is, in theory, not an issue as it is the EndKey for each sublist which is indexed in leveled_tree.  However, guard against it mapping to null here, just in case there are dangers lurking (note that tests will still pass without the `M > N` guard in place).

* Hibernate on BIC complete

There are three situations when the BIC becomes complete:

- In a file created as part of a merge, the BIC is learned in the merge
- After startup, files below L1 learn the block cache through reads that happen to read the block; eventually the whole cache will be read, unless...
- Either before/after the cache is complete, it can get wiped by a timeout after a get_sqn request (e.g. as prompted by a journal compaction) ... it will then be re-filled off the back of get/get-range requests.

In all these situations we want to hibernate after the BIC is full - to reflect the fact that the LoopState should now be relatively stable, so it is a good point to GC and rationalise the location of data.

Previously only the first case was covered.  Now all three are covered through the bic_complete message.

* Test all index keys have same term

This works functionally, but is not optimised (the term is replicated in the index)

* Summaries with same index term

If the summary index entries all have the same index term - only the object keys need to be indexed

* Simplify case statements

We either match the pattern of <<Prefix:N, Suffix>> or the answer should be null

* OK for M == N

If M = N for the first key, it will have a suffix of <<>>.  This will match (as expected) a query Start Key of the same size, and be smaller than any query Start Key that has the same prefix.

If the query Start Key does not match the prefix - it will be null - as it must be smaller than the Prefix (as otherwise the query Start Key would be bigger than the Last Key).

The constraint of M > N was introduced before the *_prefix_filter functions were checking the prefix, to avoid issues.  Now the prefix is being checked, so M == N is ok.

* Simplify

Correct the test to use a binary field in the range.

To avoid further issue, only apply filter when everything is a binary() type.

* Add test for head_only mode

When leveled is used as a tictacaae key store (in parallel mode), the keys will be head_only entries.  Double-check they are handled as expected, like object keys

* Revert previous change - must support typed buckets

Add assertion to confirm worthwhile optimisation

* Add support for configurable cache multiple (#375)

* Mas i370 patch e (#385)

Improvement to monitoring for efficiency and improved readability of logs and stats.

As part of this, where possible, tried to avoid updating loop state on READ messages in leveled processes (as was the case when tracking stats within each process).

No performance benefits found with change, but improved stats has helped discover other potential gains.
2022-12-18 20:18:03 +00:00
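The words-vs-bytes point in the "Use correct size in test results" note above can be sketched as a small helper. This is a hypothetical module for illustration, not part of the leveled codebase:

```erlang
%% Hypothetical helper: report a term's flat size in bytes.
%% erts_debug:flat_size/1 returns machine words, so multiply by the
%% emulator word size (8 bytes on a 64-bit CPU, 4 on 32-bit).
-module(size_report).
-export([flat_size_bytes/1]).

-spec flat_size_bytes(term()) -> non_neg_integer().
flat_size_bytes(Term) ->
    erts_debug:flat_size(Term) * erlang:system_info(wordsize).
```

So a term measured at 1,000 words occupies 8,000 bytes on a 64-bit emulator - an 8x difference that would skew any test asserting on byte sizes.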
Martin Sumner
d09f5c778b Query don't copy (#380)
* Query don't copy

Queries the manifest to avoid copying the whole manifest when taking a snapshot of a penciller to run a query.

Change the logging of fold setup in the Bookie to record the actual snapshot time (rather than the uninteresting and fast-returning call to the function which will request the snapshot).

A little tidy to avoid duplicating the ?MAX_LEVELS macro.

* Clarify log is of snapshot time not fold time

* Updates after review
2022-10-11 13:59:43 +01:00
Martin Sumner
28d3701f6e Mas i370 deletepending (#378)
* Allow confirmed deletions to complete when manifest is not lockable

Previously if there was ongoing work (i.e. the clerk had control over the manifest), the penciller could not confirm deletions.  Now it may confirm, and defer the required manifest update to a later date (prompted by another delete confirmation request).

* Refactor to update manifest on return of manifest

Rather than waiting on next delete confirmation request

* Update src/leveled_pmanifest.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* Missing commit

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2022-05-24 10:05:16 +01:00
Martin Sumner
234e0066e8 Mas i370 deletepending (#377)
Previously delete_confirmation was blocked on work_ongoing.

However, if the penciller has a work backlog, work_ongoing may be a recurring problem ... and some files may remain undeleted long after their use - lifetimes for L0 files in particular have been seen to rise from 10-15s to 5m +.

Letting L0 files linger can have a significant impact on memory. In put-heavy tests (e.g. when testing riak-admin transfers) the memory footprint of a riak node has been observed peaking more than 80% above normal levels, when compared to using this patch.

This PR allows for deletes to be confirmed even when there is work ongoing, by postponing the updating of the manifest until the manifest is next returned from the clerk.

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2022-05-24 10:04:55 +01:00
Martin Sumner
f8485210ed
Mas i370 d31 sstmemory (#373)
* Don't use fetch_cache below the page_cache level

* Don't time fetches due to SQN checks

SQN checks are all background processes

* Hibernate on SQN check

SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.

* Add catch for hibernate timeout

* Scale cache_size with level

Based on volume testing.  Relatively speaking, far higher value is to be gained from caches at higher levels (lower numbered levels).  The caches at lower levels are proportionally much less efficient, so cache more at higher levels, where there is value, and less at lower levels where there is more cost relative to value.

* OTP 24 fix to cherry-pick

* Make minimal change to previous setup

Making a significant change appears not to have had the expected positive improvement - so a more minimal change is proposed.

The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but no cache still has a downside.

* Introduce new types

* Mas i370 d30 sstmemory (#374)


* Don't time fetches due to SQN checks

SQN checks are all background processes

* Hibernate on SQN check

SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.

* Add catch for hibernate timeout

* Scale cache_size with level

Based on volume testing.  Relatively speaking, far higher value is to be gained from caches at higher levels (lower numbered levels).  The caches at lower levels are proportionally much less efficient, so cache more at higher levels, where there is value, and less at lower levels where there is more cost relative to value.

* Make minimal change to previous setup

Making a significant change appears not to have had the expected positive improvement - so a more minimal change is proposed.

The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but no cache still has a downside.

* Introduce new types

* More memory management

Clear blockindex_cache on timeout, and manually GC on pclerk after work.

* Add further garbage collection prompt

After fetching level zero, there is a significant change in references in the penciller memory, so prompt a garbage_collect() at this point.
2022-04-23 13:38:20 +01:00
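The memory-management steps in the last two notes (clear the block index cache, collect, then quiesce) can be sketched as below. This is a hedged sketch with a hypothetical module and record, not the actual leveled_sst or leveled_pclerk code:

```erlang
%% Hypothetical sketch: on an idle timeout, drop a large cached structure
%% from the loop state, call garbage_collect/0 so the dropped references
%% are reclaimed now rather than at the next natural collection, and then
%% hibernate to compact the process heap further.
-module(cache_gc_sketch).
-export([new_state/1, handle_info/2]).

-record(state, {blockindex_cache = none :: none | term()}).

new_state(Cache) ->
    #state{blockindex_cache = Cache}.

handle_info(timeout, State) ->
    %% Replace the cache reference so the old structure becomes garbage.
    UpdState = State#state{blockindex_cache = none},
    %% Prompt an immediate collection of that garbage.
    garbage_collect(),
    {noreply, UpdState, hibernate}.
```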
Martin Sumner
75edb7293d Revert "Don't use fetch_cache below the page_cache level"
This reverts commit 656900e9ec.
2022-03-11 11:07:01 +00:00
Martin Sumner
5eae8e441f Revert "Don't time fetches due to SQN checks"
This reverts commit fb490b9af7.
2022-03-11 11:06:58 +00:00
Martin Sumner
2e0b20a071 Revert "Hibernate on SQN check"
This reverts commit eedd09a23d.
2022-03-11 11:06:51 +00:00
Martin Sumner
eedd09a23d Hibernate on SQN check
SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.
2022-03-11 08:49:56 +00:00
Martin Sumner
fb490b9af7 Don't time fetches due to SQN checks
SQN checks are all background processes
2022-03-11 08:49:48 +00:00
Martin Sumner
656900e9ec Don't use fetch_cache below the page_cache level 2022-03-11 08:49:29 +00:00
Martin Sumner
79e0af27f6 otp25 support 2022-02-17 15:22:50 +00:00
Martin Sumner
8c4de27789 Fix spec for book_hotbackup/1
Returned function requires a backup path
2021-11-10 10:52:19 +00:00
Martin Sumner
82cc76c8c6 Tidy up some undefined functions in dialyzer specs (#368) 2021-11-05 09:16:34 +00:00
Martin Sumner
f5ba2fc1b9 Amend defaults (#367)
* Amend defaults

3.0.9 testing has used different defaults for cache_size and penciller_cache_size.  There has been no noticeable drop in the performance of Riak using these defaults, so adopting here.

The new defaults have a lower memory requirement per vnode, which is useful as recent changes and performance test results are changing the standard recommendations for ring_size.

It is now preferred to choose larger ring_sizes by default (e.g. 256 for 6 - 12 nodes, 512 for 12 - 20 nodes, 1024 for 20 +).

By choosing larger ring sizes, the benefits of scaling up a cluster will continue to make a difference even as the node count goes beyond the recommended "correct" setting.  e.g. It might be reasonable (depending on hardware choices) to grow a ring_size=512 cluster to beyond 30 nodes, yet still be relatively efficient and performant at 8 nodes.

* Comment correction
2021-11-01 19:30:20 +00:00
Martin Sumner
27cf6bef55
Mas i361 d31 (#365)
* Resolve hash 0 of confusion (#362)

* Resolve hash 0 of confusion

Ensure that a hash of 0 is not confused with an empty index entry (where both hash and position are 0).  For a genuine entry position must always be non-zero.

* Use little endian format directly

... instead of extracting in big_endian format then flipping (or flipping before writing)

* OTP 24 dialyzer fix in test
2021-10-27 14:04:08 +01:00
Martin Sumner
43ec9a4eab
Mas i363 d31 (#364)
* Reduce size of state on terminate (#363)

Otherwise large volume of keys will be written on failure of the process

* Add format_status to leveled_sst
2021-10-27 13:42:53 +01:00
Martin Sumner
beb8ba14a5
Handle a Filename not being present in pending_deletes (#357)
This may be due to a situation whereby a second confirm_delete has been sent before the sst_deleteconfirmed has been received from the first request.

This will now cause inert action rather than crashing.
2021-10-18 11:06:52 +01:00
Martin Sumner
70ebb62a61 Search the loader's mock cache .. (#354)
... in the correct direction - otherwise frequently updated objects may not be indexed correctly on reload.
2021-10-04 13:34:29 +01:00
Martin Sumner
4ec8d3e25c Commit selective_sync 2021-10-01 01:51:59 +01:00
Martin Sumner
4ec9d14019 Expose 8-arity put
Required for selective sync
2021-09-30 22:10:19 +01:00
Martin Sumner
a0e9ac737c
Mas i340 doublel3 d31 (#347)
* Double size of L4 files

And double max efficient size of leveled_ebloom

* Revert penciller shape

But expand file size at L3

* More concise version

Following code review

* OTP 24 dialyzer fix

Bindings intended to match - so don't use underscore

* Allow eqc tests to work from `rebar3 as eqc shell`

Then `eqc:quickcheck(leveled_statemeqc:prop_db()).`

Plus markdown tidy
2021-08-23 17:18:45 +01:00
Martin Sumner
507bf63e22
Address issue with comment (#349)
https://github.com/martinsumner/leveled/issues/269
2021-08-10 13:11:09 +01:00
Martin Sumner
ec52b16cfd
Make deletion confirmation an async message (#344)
Currently a sync call is made from cdb to inker when confirming deletion (checking no snapshots have an outstanding requirement to access the file), whereas an async call is made from sst to penciller to achieve the same thing.

Due to the potential of timeouts and crashes - the cdb/inker delete confirmation process is now based on async message passing, making it consistent with the sst/penciller delete confirmation process.
2021-07-27 17:06:07 +01:00
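The sync-to-async switch described above can be sketched as follows; the module, function, and message names are hypothetical stand-ins, not the actual leveled cdb/inker API:

```erlang
%% Hypothetical sketch of the two delete-confirmation styles.
-module(confirm_sketch).
-export([confirm_delete_sync/2, confirm_delete_async/2]).

%% Before: a blocking call - if the inker is busy or down, the caller is
%% stuck until the gen_server call times out (or crashes with it).
confirm_delete_sync(Inker, FileName) ->
    gen_server:call(Inker, {confirm_delete, FileName}).

%% After: fire-and-forget, carrying the caller's pid so the inker can
%% reply with its own async message once no snapshot needs the file.
confirm_delete_async(Inker, FileName) ->
    gen_server:cast(Inker, {confirm_delete, FileName, self()}).
```

gen_server:cast/2 always returns ok immediately, which is what removes the timeout/crash coupling between the two processes.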
Martin Sumner
2cb87f3a6e
Develop 3.1 eqc (#339)
* Add EQC profile

Don't run EQC tests, unless specifically requested e.g. `./rebar3 as eqc eunit --module=leveled_eqc`

* Remove eqc_cover parse_transform

Causes a compilation issue with the assertException in leveled_runner

* Allow EQC test to compile

EQC only works on OTP 22 for now, but other tests should still work on OTP 22 and OTP 24

* Add more complex statem based eqc test

* Add check for eqc profile
2021-05-26 17:40:09 +01:00
Martin Sumner
46ebc6f8ff
Update CI Badge (#337)
* Update README.md

* Update README.md

* Update README.md

* Update README.md
2021-05-25 14:26:47 +01:00
Martin Sumner
ed0301e2cf
Mas i335 otp24 (#336)
* Address OTP24 warnings, ct and eunit paths

* Reorg to add OTP 24 support

* Update VOLUME.md

* Correct broken refs

* Update README.md

* CI on all main branches

Co-authored-by: Ulf Wiger <ulf@wiger.net>
2021-05-25 13:41:20 +01:00
Martin Sumner
2c53c0a85a
Merge pull request #332 from martinsumner/mas-i331-gha
Switch to GHA
2021-03-05 15:10:24 +00:00
Martin Sumner
f726f81bde Switch to GHA 2021-03-05 14:47:42 +00:00
Martin Sumner
489d89fb16
Merge pull request #330 from martinsumner/mas-i329-lz4dep
Add lz4 dep
2021-03-03 13:54:08 +00:00
Martin Sumner
d8be8d4ace Add lz4 dep 2021-03-02 14:21:13 +00:00
Martin Sumner
71ba64ea53
Merge pull request #328 from martinsumner/mas-i326-loopstateref
Change references to loop state records
2021-01-11 13:26:14 +00:00
Martin Sumner
9157de680e Change references to loop state records
Resolve issue with OTP 22 performance https://github.com/martinsumner/leveled/issues/326 - by changing references to loop state.

The test perf_SUITE proves the issue.

OTP 22, without fixes:

Fold pre-close 41209 ms post-close 688 ms

OTP 22, with fixes:

Fold pre-close 401 ms post-close 317 ms
2021-01-11 10:39:34 +00:00
Martin Sumner
142efbcee6
Merge pull request #327 from martinsumner/mas-i326-controlpcache
Allow lower penciller cache sizes to be enforced
2020-12-22 12:34:53 +00:00
Martin Sumner
b8d71023a8 Allow lower penciller cache sizes to be enforced
It might be necessary to have a low penciller cache size.  However, currently the upper bound of that cache size can be very high, even when a low cache size is set.  This is due to the coin tossing done to prevent co-ordination of L0 persistence across parallel instances of leveled.

The aim here is to reduce that upper bound, so that any environment having problems due to lack of memory or https://github.com/martinsumner/leveled/issues/326 can more strictly enforce a lower maximum in the penciller cache size
2020-12-22 12:34:01 +00:00
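The coin tossing and the tighter upper bound can be illustrated with a sketch. The constants and function names here are hypothetical; the real leveled_penciller logic differs:

```erlang
%% Hypothetical sketch: decide whether to roll (persist) the L0 memory.
%% Below the configured size: never roll.  Between the configured size
%% and a capped maximum: roll on a 1-in-4 coin toss, so parallel leveled
%% instances de-synchronise their flushes.  At the capped maximum: always
%% roll - this cap is the upper bound the commit above tightens.
-module(l0_roll_sketch).
-export([maybe_roll/2]).

-define(JITTER_FRACTION, 0.25).
-define(COIN_SIDES, 4).

-spec maybe_roll(non_neg_integer(), pos_integer()) -> boolean().
maybe_roll(CurrentSize, ConfiguredSize) ->
    MaxSize = ConfiguredSize + round(ConfiguredSize * ?JITTER_FRACTION),
    CurrentSize >= MaxSize orelse
        (CurrentSize >= ConfiguredSize andalso
            rand:uniform(?COIN_SIDES) =:= 1).
```

With a 0.25 jitter fraction, a configured size of 1000 gives a hard maximum of 1250, so memory overshoot is bounded even when every coin toss says "wait".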
Martin Sumner
182395ee00 Schema clarifications 2020-12-09 09:18:38 +00:00
Martin Sumner
186e3868a9
Merge pull request #325 from martinsumner/mas-i321-checkactive
Mas i321 checkactive
2020-12-08 16:33:05 +00:00
Martin Sumner
3b305e0adb
Merge pull request #320 from martinsumner/mas-i319-cachescores
Allow for caching of compaction scores
2020-12-07 00:00:40 +00:00
Martin Sumner
eeeb7498c0 Review feedback
Also add types to try and help avoid further confusion
2020-12-04 19:40:28 +00:00
Martin Sumner
7bf67563ef
Update src/leveled_iclerk.erl
Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2020-12-04 14:31:47 +00:00
Martin Sumner
108527a8d9 is_active check within fold
Not within the fold fun of the leveled_runner.

This should avoid constantly having to re-merge and filter the penciller memory when running list_buckets and hitting inactive keys
2020-12-04 12:49:17 +00:00
Martin Sumner
f3f574de02 Switch to checking on get_kvrange
In production scale testing, placing the check_modified call on get_kvrange not get_slots made the performance difference.

It should help in get_slots as well, but unable to reliably get coverage in tests with this.  So for now, will leave off until a proper test can be constructed which demonstrates any benefits.
2020-12-03 13:37:22 +00:00
Martin Sumner
a210aa6846 Promote cache when scanning
When scanning over a leveled store with a helper (e.g. segment filter and last modified date range), applying the filter will speed up the query when the block index cache is available to get_slots.

If it is not available, previously the leveled_sst did not then promote the cache after it had accessed the underlying blocks.

Now the code does this, and also when the cache has all been added, it extracts the largest last modified date so that sst files older than the passed in date can be immediately dismissed
2020-12-02 13:29:50 +00:00
Martin Sumner
80e6920d6c Standardise retention decision
Use the same function to decide for both scoring and compaction - and avoid the situation where something is scored for compaction, but doesn't change (which was the case previously with tombstones that were still in the ledger).
2020-11-29 15:43:29 +00:00
Martin Sumner
00823584ec Improve the quality of score
Move the average towards the current score if not scoring each run.  Score from more keys to get a better score (as overheads of scoring are now better sorted by setting score_onein rather than by reducing the sample size).
2020-11-27 20:03:44 +00:00
Martin Sumner
bcc331da10 Set max limit of 24 hours on cached score 2020-11-27 13:56:47 +00:00
Martin Sumner
be562c85cb Don't hide option 2020-11-27 03:12:35 +00:00
Martin Sumner
0690136ab2 Clarify how the new option will be controlled in Riak 2020-11-27 03:01:38 +00:00