Commit graph

1636 commits

Author SHA1 Message Date
Martin Sumner
69e8b29d1f
Mas d34 leveled.i459 partialmerge (#460)
* Add test to replicate issue 459

Nothing actually crashes due to the issue - but looking at the logs there are the polarised stats associated with the issue.  When merging into L3, you would normally expect to merge into 4 files - but actually we see FileCounter occasionally spiking.

* Add partial merge support

There is a `max_mergebelow` size which can be a positive integer, or infinity.  It defaults to 32.

If a merge from Level N covers less than `max_mergebelow` files in Level N + 1 - the merge will proceed as before.  If it has >= `max_mergebelow`, the merge will be curtailed when `max_mergebelow div 2` files have been created at that level.  The remainder for Level N will then be written, as well as for Level N + 1 up to the next whole file that has not yet been touched by the merge.

The backlog that prompted the merge will still exist - as the files in Level N have not been changed.  However, it is likely the next file picked will not be the same one, and will in all probability have a lower number of files to merge (as the average is =< 8).

This will stop progress from being halted by long merge jobs, as they will exit out in a safe way after partial completion.  In the case where the majority of files covered do not require a merge, those files will be skipped the next time the remainder file is picked up for merge at Level N.
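A minimal sketch of the curtailment decision described above, using illustrative function and variable names (not the actual leveled merge code):

```
%% Illustrative only - MaxMergeBelow corresponds to the max_mergebelow
%% option (default 32, or infinity to disable partial merges).
merge_strategy(FileCountBelow, MaxMergeBelow) ->
    case MaxMergeBelow of
        infinity ->
            full_merge;
        Max when FileCountBelow < Max ->
            full_merge;
        Max ->
            %% Curtail once Max div 2 new files have been written at
            %% Level N + 1, then write the remainders for both levels.
            {partial_merge, Max div 2}
    end.
```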
2024-11-30 13:16:13 +00:00
Martin Sumner
c642575caa
Support sub-key queries (#457)
* Support sub-key queries

Also requires a refactoring of types.

In head-only mode the metadata in the ledger is just the value, and the value can be anything.  So the metadata() definition needs to reflect that.

There are then issues with app-defined functions for extracting metadata.  In theory an app-defined function could extract some unsupported type.  So made explicit that the app-defined function must extract std_metadata() as metadata - otherwise functionality will not work.

This means that if it is an object key that is not a ?HEAD key, then the Metadata must be a tuple (of either Riak or Standard type).
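A hedged sketch of the type distinction described above; apart from std_metadata() these names are illustrative, not the actual leveled_codec definitions:

```
%% Illustrative only - in head-only mode the "metadata" is the whole
%% value, so it can be any term; for object keys it must be a tuple.
-type std_metadata() :: tuple().            % Riak or Standard metadata tuple
-type head_only_metadata() :: term().       % hypothetical name for head-only values
-type metadata() :: std_metadata() | head_only_metadata().
```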

* Fix coverage issues
2024-11-18 14:51:23 +00:00
Martin Sumner
98cdb4d9f2
Make log functions exportable (#455)
* Make log functions exportable

To make it easier to switch to logger in kv_index_tictactree - export the log functions from leveled so that they can be reused

* Changes post review

Added brief description of the module to explain why the approach to logging is used.

* Result of log should be `ok`
2024-11-13 13:56:57 +00:00
Martin Sumner
aaeac7ba36
Mas d34 i453 eqwalizer (#454)
* Add eqwalizer and clear for codec & sst

The eqwalizer errors highlighted the need in several places for type clarification.

Within tests there are some issues where a type is assumed, and so ignore has been used to handle this rather than write more complex code to be explicit about the assumption.

Eqwalizer's handling of arrays isn't great - being specific about the content of an array causes issues when initialising the array.  Perhaps a type (a map maybe) where one can be more explicit about types might be a better option (even if there is a minimal performance impact).

The use of a ?TOMB_COUNT defined option complicated the code much more with eqwalizer.  So for now, there is no developer option to disable ?TOMB_COUNT.

Test fixes required where strings had been used for buckets/keys rather than binaries.

The leveled_sst statem needs a different state record for starting when compared to other modes.  The state record has been divided up to reflect this, to make type management easier.  The impact on performance needs to be tested.

* Update ct tests to support binary keys/buckets only

* Eqwalizer for leveled_cdb and leveled_tictac

As array is used in leveled_tictac - there is the same issue as with leveled_sst

* Remove redundant indirection of leveled_rand

A legacy of pre-20 OTP

* More modules eqwalized

ebloom/log/util/monitor

* Eqwalize further modules

elp eqwalize leveled_codec; elp eqwalize leveled_sst; elp eqwalize leveled_cdb; elp eqwalize leveled_tictac; elp eqwalize leveled_log; elp eqwalize leveled_monitor; elp eqwalize leveled_head; elp eqwalize leveled_ebloom; elp eqwalize leveled_iclerk

All concurrently OK

* Refactor unit tests to use binary() not string() in key

Previously string() was allowed just to avoid having to change all these tests.  Go through the pain now, as part of eqwalizing.

* Add fixes for penciller, inker

Add a new ?IS_DEF macro to replace =/= undefined.

Now more explicit about primary, object and query keys
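A minimal sketch of the kind of macro described; the actual definition in leveled's header may differ:

```
%% Illustrative only - guard-friendly replacement for `X =/= undefined`.
-define(IS_DEF(X), X =/= undefined).
```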

* Further fixes

Need to clarify functions used by runner - where keys, query keys and object keys are used

* Further eqwalisation

* Eqwalize leveled_pmanifest

Also make the implementation independent of the choice of dict - i.e. one can save a manifest using dict for blooms/pending_deletions and then open a manifest with code that uses a different type.  Allow for the slow dict to be replaced with a map.

Would not be backwards compatible though, without further thought - i.e. if you upgrade then downgrade.
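A sketch of the implementation-independence idea, assuming an illustrative helper name (not the actual leveled_pmanifest code):

```
%% Illustrative only - accept either the legacy dict or a map when
%% opening a saved manifest, normalising to a map internally.
normalise_blooms(Blooms) when is_map(Blooms) ->
    Blooms;
normalise_blooms(Blooms) ->
    maps:from_list(dict:to_list(Blooms)).
```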

Redundant code created by leveled_sst refactoring removed.

* Fix backwards compatibility issues

* Manifest Entry to belong to leveled_pmanifest

There are two manifests - leveled_pmanifest and leveled_imanifest.  Both have manifest_entry() type objects, but these types are different.  To avoid confusion don't include the pmanifest manifest_entry() within the global include file - be specific that it belongs to the leveled_pmanifest module

* Ignore elp file - large binary

* Update src/leveled_pmem.erl

Remove unnecessary empty list from type definition

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2024-11-13 13:37:13 +00:00
Martin Sumner
1be55fcd15
Make tree compatible with binary L1 (#451)
The old leveled_tictac had a pure binary L1.  This was slower than the new map version.

However, in a Riak cluster, when running a merge_tree_range during a rolling update, the query coordinator of the fold will initiate a tree.  If this tree is not a map-based tree (as that node has not yet been upgraded), then a node that has been upgraded would previously fail the query as it cannot handle a level 1 in binary form.  This now enables updated nodes to handle both forms of trees.

Obviously, if the coordinating node has been updated, non-updated nodes will crash queries as they cannot handle the tree with the map at Level 1.  The aim is to make it configurable to force non-map trees in a cluster, until all nodes have been upgraded.  So as long as each node understands how to update both non-map trees and map-based trees - everything should be OK.
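A hedged sketch of handling either Level 1 form when a tree is received; the function name, branch count and segment layout are assumptions for illustration, not the real leveled_tictac internals:

```
%% Illustrative only - normalise a received Level 1 to the map form
%% before merging, so upgraded nodes can accept legacy binary trees.
normalise_level1(L1) when is_map(L1) ->
    L1;
normalise_level1(L1) when is_binary(L1) ->
    SegSize = byte_size(L1) div 256,    % assumed fixed branch width
    maps:from_list(
        [{I, binary:part(L1, I * SegSize, SegSize)} || I <- lists:seq(0, 255)]).
```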
2024-09-18 10:24:16 +01:00
Martin Sumner
0cc998a7e3 Update README.md 2024-09-06 11:41:05 +01:00
Martin Sumner
54e3096020
Switch to logger (#442)
* Switch to logger

Use logger rather than io:format when logging.  The ct tests have been switched to log to file; testutil/init_per_suite/1 may offer useful guidance on configuring logger with leveled.

As all logs are produced by the leveled_log module, the MFA metadata is uninteresting for log outputs, but can be used for explicit filter controls for leveled logs.
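For example, a hedged sketch of using that MFA metadata to keep only leveled logs on a handler (the handler id and filter name here are illustrative):

```
%% Illustrative only - pass through events logged via leveled_log and
%% drop everything else on this handler.
ok = logger:add_handler_filter(default, leveled_only,
    {fun(#{meta := #{mfa := {leveled_log, _F, _A}}} = Event, _Extra) -> Event;
        (_Event, _Extra) -> stop
     end,
     []}).
```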

* iolist_to_binary not unicode_binary()

logger filters will error and be removed if the format line is a binary().  It must be either a charlist() or a unicode_binary() - so iolist_to_binary() can't be used.
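A hedged illustration of the constraint: build the format as a charlist rather than via iolist_to_binary/1 (the format text and variable here are made up for the example):

```
%% Illustrative only - lists:flatten/1 keeps the format as a charlist(),
%% whereas iolist_to_binary/1 would yield a plain binary().
TimeMS = 12,
Format = lists:flatten(["Task took ", "~w ms"]),
logger:info(Format, [TimeMS]).
```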

* Add metadata for filter

* Update test/end_to_end/tictac_SUITE.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2024-09-06 11:18:24 +01:00
Martin Sumner
5db277b82d
Mas d34 ms.i446 plusplus (#448)
* Minor optimisations

Try to reduce the calls to ++, and ensure that where possible the shorter list is being copied.

* Pass Acc into function

So that the list can be accumulated efficiently, without an additional copy to add back the accumulator at the end.

* prepend to accumulators

Code review to make sure we prepend to accumulators everywhere, to reduce the copying involved.

Attempt to further optimise in leveled_sst (where the most expensive ++ occurs).  This optimises for the case when Acc is [], and enforces a series of '++' to start from the right, prepending in turn.  Some shell testing indicated that this was not necessarily the case (although this doesn't seem to be consistently reproducible).

```
6> element(1, timer:tc(fun() -> KL1 ++ KL2 ++ KL3 ++ KL4 end)).
28
7> element(1, timer:tc(fun() -> KL1 ++ KL2 ++ KL3 ++ KL4 end)).
174
8> element(1, timer:tc(fun() -> KL1 ++ KL2 ++ KL3 ++ KL4 end)).
96
9> element(1, timer:tc(fun() -> KL1 ++ KL2 ++ KL3 ++ KL4 end)).
106
10> element(1, timer:tc(fun() -> KL1 ++ KL2 ++ KL3 ++ KL4 end)).
112

17> element(1, timer:tc(fun() -> lists:foldr(fun(KL0, KLAcc) -> KL0 ++ KLAcc end, [], [KL1, KL2, KL3, KL4]) end)).
21
18> element(1, timer:tc(fun() -> lists:foldr(fun(KL0, KLAcc) -> KL0 ++ KLAcc end, [], [KL1, KL2, KL3, KL4]) end)).
17
19> element(1, timer:tc(fun() -> lists:foldr(fun(KL0, KLAcc) -> KL0 ++ KLAcc end, [], [KL1, KL2, KL3, KL4]) end)).
12
20> element(1, timer:tc(fun() -> lists:foldr(fun(KL0, KLAcc) -> KL0 ++ KLAcc end, [], [KL1, KL2, KL3, KL4]) end)).
11
```

Running eprof indicates that '++' and lists:reverse have been reduced (however the impact had previously only been 1-2%).

* Add unit test to confirm (limited) merit of optimised list function

No difference in unit test with/without inline compilation, so this has been removed

* Update src/leveled_sst.erl

These functions had previously used inline compilation - but this didn't appear to improve performance

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* Update src/leveled_sst.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* Update src/leveled_ebloom.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* Update following review

Also fix code coverage issues

* Update src/leveled_sst.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2024-09-05 15:08:05 +01:00
Martin Sumner
30ec9214ac
Mas i449 directpromptofdeletions (#450)
* Move prompt of deletions to Inker

It is a series of casts, so no reason to offload this to the clerk.  Simplifies potential races in shutdown

* Rename

* Change cache sizes

In the hope of making the test more consistent
2024-09-04 09:04:24 +01:00
Martin Sumner
af0f2bb2cf Make tictac more efficient by making level1 a map (#441)
* Make tictac more efficient by making level1 a map

Pre-change (1M keys, tree size large):

Generating Keys took 2513 milliseconds
Memory footprint [{total,356732576},{processes,334051328},{processes_used,334044488},{system,22681248},{atom,540873},{atom_used,524383},{binary,1015120},{code,9692859},{ets,721496}]
Generating new tree took 1 milliseconds
Loading tree took 27967 milliseconds
Memory footprint [{total,36733040},{processes,8875472},{processes_used,8875048},{system,27857568},{atom,540873},{atom_used,524449},{binary,6236480},{code,9692859},{ets,721496}]
Exporting tree took 434 milliseconds
Importing tree took 100 milliseconds
Memory footprint [{total,155941512},{processes,123734808},{processes_used,123734384},{system,32206704},{atom,540873},{atom_used,524449},{binary,10401144},{code,9692859},{ets,721496}]
Garbage collect
Memory footprint [{total,39660504},{processes,8257520},{processes_used,8256968},{system,31402984},{atom,540873},{atom_used,524449},{binary,9781760},{code,9692859},{ets,721496}]

Post change:

Generating Keys took 2416 milliseconds
Memory footprint [{total,284678120},{processes,258349528},{processes_used,257758568},{system,26328592},{atom,893161},{atom_used,878150},{binary,1013880},{code,11770188},{ets,774224}]
Generating new tree took 0 milliseconds
Loading tree took 2072 milliseconds
Memory footprint [{total,49957448},{processes,17244856},{processes_used,16653896},{system,32712592},{atom,893161},{atom_used,878216},{binary,7397496},{code,11770188},{ets,774224}]
Exporting tree took 448 milliseconds
Importing tree took 108 milliseconds
Memory footprint [{total,46504880},{processes,11197344},{processes_used,10606384},{system,35307536},{atom,893161},{atom_used,878216},{binary,9992112},{code,11770188},{ets,774224}]
Garbage collect
Memory footprint [{total,47394048},{processes,12223608},{processes_used,11632520},{system,35170440},{atom,893161},{atom_used,878216},{binary,9855008},{code,11770188},{ets,774224}]

* Tidy-up

* Add type

* Remove ++ requiring copy of Acc

Rely on the mechanism producing a sorted result, rather than sorting

* Update src/leveled_tictac.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* Update following review

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2024-09-03 16:35:42 +01:00
Martin Sumner
acf30599e9
Improve perf_SUITE test (#445)
* Improve perf_SUITE test

The update test is refactored so as not to generate a large KV list which dominates the memory utilisation.

The update and get tests are changed to do a head before each operation - which emulates how this will work in Riak.

* Revert default setting change

* Don't pre-calculate key list

For fetches - reduces the memory required for the test process, not the database (and the consequent distortion to measured results)

* Tidy ++ in tests

Removes some rogue results from profile

* Update testutil.erl

* Test fixes

* Tidy generate_chunk for profiling

* Revert "Tidy generate_chunk for profiling"

This reverts commit 1f6cff446ca6b9855f1e3aa732b32e0e5c14c9a5.

* Resize profile test
2024-09-02 11:17:35 +01:00
Martin Sumner
7b5b18ed06 Update rebar.config 2024-07-16 11:07:54 +01:00
Martin Sumner
e417bb4743 Merge branch 'develop-3.1' into develop-3.4 2024-07-15 21:07:28 +01:00
Martin Sumner
d45356a4f7
Extend perf_SUITE (#434)
* Extend perf_SUITE

This is v6 of the perf_SUITE tests.  The test adds a complex index entry to every object, and then adds a new test phase to test regex queries.

There are three profiles added so the full, mini and profiling versions of perf_SUITE can be run without having to edit the file itself:

e.g. ./rebar3 as perf_mini do ct --suite=test/end_to_end/perf_SUITE

When testing as `perf_prof`, summarised versions of the eprof results are now printed to screen.

The volume of keys within the full test suite has been dropped ... just to make life easier so that test run times are not excessively increased by the new features.

* Load chunk in spawned processes

Assumed to make the job of GC easier - this makes a massive difference to load time in OTP 24.

* Correctly account for pause

Also try to improve test stability by increasing the pause

* Add microstate accounting to profile

* Add memory tracking during test phases

Identify and log out memory usage by test phase

* Use macros instead (#437)

* Don't print memory to screen in standard ct test

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2024-07-15 20:49:21 +01:00
Martin Sumner
7ac99f05c7
Develop 3.1 otp27 (#443)
* Initial testing of OTP 27

* Profiles for testing in OTP 27
2024-07-15 20:31:00 +01:00
Martin Sumner
86c49bec00 Update erlang.yml 2024-07-11 22:06:31 +01:00
Martin Sumner
d261ce565d Add OTP build 2024-07-11 22:04:40 +01:00
Martin Sumner
f7bf930cdb Update erlang.yml 2024-07-11 22:03:13 +01:00
Martin Sumner
80e8ccb550 Update CI for OTP 24/26 2024-07-11 22:02:34 +01:00
Martin Sumner
f5fed0a1ff
Uplift rebar3 -> 3.18.0 (#431) 2024-02-13 10:59:28 +00:00
Martin Sumner
999ce8ba5b
Add ZSTD compression (#430)
* Add support for zstd and split compression

Add support for using zstd as an alternative to native and lz4.

Upgrade lz4 to v1.9.4 (with ARM enhancements).

Allow for split compression algorithms - i.e. use native on the journal, but lz4 on the ledger.
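A heavily hedged startup sketch: `compression_method` is an existing leveled option, but whether `zstd` is the exact accepted atom, and the name of any journal/ledger split option, are assumptions here - check the leveled_bookie schema for the real keys:

```
%% Illustrative only - option values are assumptions (native | lz4 | zstd | none).
StartOpts =
    [{root_path, "/tmp/leveled_data"},
     {compression_method, zstd}],
{ok, Bookie} = leveled_bookie:book_start(StartOpts),
ok = leveled_bookie:book_close(Bookie).
```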

* Switch to AdRoll zstd

Development appears to be active and ongoing.  No issues running on different Linux flavours.

* Use realistic bucket name

* Update README.md

* Switch branch

* Add comment following review
2024-01-23 16:25:03 +00:00
Martin Sumner
c294570bce
Mas d31 nhskv16sst (#428)
* Add performance/profiling test

Add test to perf_SUITE to do performance tests and also profile different activities in leveled.

This can then be used to highlight functions with unexpectedly high execution times, and prove the impact of changes.

Switch between riak_ctperf and riak_fullperf to change from standard test (with profile option) to full-scale performance test

* Change shape of default perfTest

* Refactor SST

Compare and contrast profile for gets, before and after refactor:

pre

```
lists:map_1/2                                         313370     2.33    32379  [      0.10]

lists:foldl_1/3                                       956590     4.81    66992  [      0.07]

leveled_sst:'-expand_list_by_pointer/5-fun-0-'/4      925020     6.13    85318  [      0.09]

erlang:binary_to_term/1                                 3881     8.55   119012  [     30.67]

erlang:'++'/2                                         974322    11.55   160724  [      0.16]

lists:member/2                                       4000180    15.00   208697  [      0.05]

leveled_sst:find_pos/4                               4029220    21.01   292347  [      0.07]

leveled_sst:member_check/2                           4000000    21.17   294601  [      0.07]

--------------------------------------------------  --------  -------  -------  [----------]

Total:                                              16894665  100.00%  1391759  [      0.08]
```

post

```
lists:map_1/2                                         63800     0.79    6795  [      0.11]

erlang:term_to_binary/1                               15726     0.81    6950  [      0.44]

lists:keyfind/3                                      180967     0.92    7884  [      0.04]

erlang:spawn_link/3                                   15717     1.08    9327  [      0.59]

leveled_sst:'-read_slots/5-fun-1-'/8                  31270     1.15    9895  [      0.32]

gen:do_call/4                                          7881     1.31   11243  [      1.43]

leveled_penciller:find_nextkey/8                     180936     2.01   17293  [      0.10]

prim_file:pread_nif/3                                 15717     3.89   33437  [      2.13]

leveled_sst:find_pos/4                              4028940    17.85  153554  [      0.04]

erlang:binary_to_term/1                               15717    51.97  447048  [     28.44]

--------------------------------------------------  -------  -------  ------  [----------]

Total:                                              6704100  100.00%  860233  [      0.13]

```

* Update leveled_penciller.erl

* Mas d31 nhskv16sstpcl (#426)

Performance updates to leveled:

- Refactoring of pointer expansion when fetching from leveled_sst files to avoid expensive list concatenation.
- Refactoring of leveled_ebloom to make more flexible, reduce code, and improve check time.
- Refactoring of querying within leveled_sst to reduce the number of blocks that need to be de-serialised per query.
- Refactoring of the leveled_penciller's query key comparator, to make use of maps and simplify the filtering.
- General speed-up of frequently called functions.
2024-01-22 21:22:54 +00:00
Martin Sumner
49490c38ef
Add performance/profiling test (#424)
* Add performance/profiling test

Add test to perf_SUITE to do performance tests and also profile different activities in leveled.

This can then be used to highlight functions with unexpectedly high execution times, and prove the impact of changes.

Switch between riak_ctperf and riak_fullperf to change from standard test (with profile option) to full-scale performance test

* Change shape of default perfTest

* Change fullPerf

Change the fullPerf test to run more tests, but with fewer keys.

Given that RS of 512 is being pushed in Riak, 2M objects is still a 300M+ object cluster. 10M >> 1B.  So these are still reasonable sizes to test.

A profilePerf test also added to generate all the profiles based on 2M objects.

* Extend test

Queries were previously all returning a large number of index entries - changes made to make the number of entries per query more realistic.  Also an update process added to show the difference between loading and rotating keys.

* Relabel as AAE fold

* Test v5

Test mini-queries - where generally a small number of entries are returned

* Default to ctperf
2023-12-19 11:56:03 +00:00
Martin Sumner
9bff70eedb Correct schema typo 2023-11-13 18:25:41 +00:00
Martin Sumner
6223b801f3
Mas d31 i410looptoclose (#421)
* Mas i410 looptoclose (#420)

* Stop waiting full SHUTDOWN_PAUSE

If there is a snapshot outstanding at shutdown time, there was a wait of SHUTDOWN_PAUSE to give the snapshot time to close down.

This causes an issue in kv_index_tictactree when a rebuild completes while an exchange is in flight - the aae_controller will become blocked for the full shutdown pause, whilst it waits for the replaced key store to be closed.

This change is to loop within the shutdown pause, so that if the snapshot supporting the exchange is closed, the paused bookie can close more quickly (unblocking the controller).

Without this fix, there are intermittent issues in kv_index_tictactree's mockvnode_SUITE tests.
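A hedged sketch of the "loop within the shutdown pause" idea; the function names and polling interval are illustrative, not the actual leveled_bookie code:

```
%% Illustrative only - instead of sleeping for the whole SHUTDOWN_PAUSE,
%% poll in short intervals and return as soon as all snapshots are released.
wait_for_snapshots(_CheckReleasedFun, RemainingMS) when RemainingMS =< 0 ->
    timeout;
wait_for_snapshots(CheckReleasedFun, RemainingMS) ->
    case CheckReleasedFun() of
        true ->
            ok;
        false ->
            timer:sleep(100),
            wait_for_snapshots(CheckReleasedFun, RemainingMS - 100)
    end.
```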

* Address test reliability

Be a bit clearer with waiting round seconds.  Was intermittently failing on QR4 previously (but QR5 1s later was always OK).

* Update iterator_SUITE.erl

* Refine test assertion

At Stage C there might be 0 files left, in which case equality with Stage D result is ok.
2023-11-10 15:04:47 +00:00
Martin Sumner
d544db5461
Mas d31 i413 (#415)
* Allow snapshots to be reused in queries

Allow for a full bookie snapshot to be re-used for multiple queries, not just KV fetches.

* Reduce log noise

The internal dummy tag is expected so should not prompt a log on reload

* Snapshot should have same status as active db

wrt head_only and head_lookup

* Allow logging to be specified on snapshots

* Shutdown snapshot bookie if primary goes down

Inker and Penciller already will shut down based on `erlang:monitor/2`

* Review feedback

Formatting and code readability fixes
2023-11-08 09:18:01 +00:00
Martin Sumner
9e804924a8
Mas d31 i416 (#418)
* Add compression controls (#417)

* Add compression controls

Add configuration options to allow for a compression algorithm of `none` to disable compression altogether.  Also an option to change the point in the LSM tree when compression is applied.

* Handle configurable defaults consistently

Move them into leveled.hrl.  This forces double-definitions to be resolved.

There are some other constants in leveled_bookie that are relevant outside of leveled_bookie.  These are all now in the non-configurable startup defaults section.

* Clarify referred-to default is OTP not leveled

* Update leveled_bookie.erl

Handle xref issue with eunit include
2023-11-07 14:58:43 +00:00
Martin Sumner
b96518c32a
Use backwards compatible term_to_binary (#408)
* Use backwards compatible term_to_binary

So that where we have hashed term_to_binary output in OTP 25 or earlier, that hash will be matched in OTP 26.

* Test reliability

If all keys are put in order, the max_slots may not be used, as the driver at L0 is the penciller cache size, and merge to new files (managed by the parameter) only occurs when there are overlapping files at the level below
2023-10-05 10:33:20 +01:00
Martin Sumner
c4a32366df Merge branch 'develop-3.1' of https://github.com/martinsumner/leveled into develop-3.1 2023-10-03 18:44:40 +01:00
Martin Sumner
7a5cf251b3 Close in stages - waiting for releases (#411)
* Close in stages - waiting for releases

Have a consistent approach to closing the inker and the penciller - so that the close can be interrupted by the release of snapshots.  Then any unreleased snapshots are closed before shutdown - with a 10s pause to give queries a short opportunity to finish.

This should address some issues, primarily seen (but very rarely) in test, whereby post-rebuild destruction of parallel AAE keystores causes the crashing of aae_folds.

The primary benefit is to ensure that an attempt to release a snapshot that has in fact already finished does not cause a crash of the database on normal stop.  This was primarily an issue when shutdown is delayed by an ongoing journal compaction job.

* Boost default test budget for EQC

* Update test to use correct type

* Update following review

Avoid filtering out exited PIDs when closing snapshots by catching the exit exception when the Pid is down
2023-10-03 18:32:08 +01:00
Michael Klishin
bebd736211
Compile on Erlang 26.1 (#412)
* Compile on Erlang 26.1

* Define Key type

instead of assuming that the function only accepts
a specific StartKey
2023-10-03 18:29:54 +01:00
Martin Sumner
bc87273c76 Stop all inker call timeouts (#406) 2023-05-11 15:15:51 +01:00
Martin Sumner
7509191466
Initial support for OTP 26 (#395)
* Initial support for OTP 26

* Extend timeout in test
2023-03-14 16:27:08 +00:00
Martin Sumner
3d3d284805
Mas p401 coverage (#404)
* refactor leveled_sst from gen_fsm to gen_statem

* format_status/2 takes State and State Data
but this function is deprecated... put in for backward compatibility

* refactor leveled_cdb from gen_fsm to gen_statem

* disable irrelevant warning ignorer

* Remove unnecessary code paths

Only support messages, especially info messages, where they are possible.

* Mas i1820 offlinedeserialisation cbo (#403)

* Log report GC Info by manifest level

* Hibernate on range query

If Block Index Cache is not full, and we're not yielding

* Spawn to deserialise blocks offline

Hypothesis is that the growth in the heap necessary due to continual binary_to_term calls to deserialise blocks is wasting memory - so do this memory-intensive task in a short-lived process.

* Start with hibernate_after option

* Always build BIC

Testing indicates that the BIC itself is not a primary memory issue - the primary issue is due to a lack of garbage collection and a growing heap.

This change enhances the patch for offline deserialisation so that:
- get_sqn & get_kv are standardised to build the BIC, and hibernate when it is built.
- the offline PID is linked to crash this process on failure (as would happen now).

* Standardise spawning for check_block/3

Now deserialise in both parts of the code.

* Only spawn for check_block if cache not full

* Update following review

* Standardise formatting

Make test more reliable.  Show no new compaction after third compaction.

* Update comments

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2023-03-13 11:46:08 +00:00
Martin Sumner
e06d2a538f Ensure backlog is tackled (#394)
Even when level 0 work is continuous.
2023-02-10 13:10:14 +00:00
Martin Sumner
314a4a9b07 Update timings - not replace (#392)
So that sample period will not be reset every update
2023-01-26 21:47:42 +00:00
Martin Sumner
a01c74f268 Mas i389 rebuildledger (#390)
* Protect penciller from empty ledger cache updates

which may occur when loading the ledger from the journal, after the ledger has been cleared.

* Score caching and randomisation

The test allkeydelta_journal_multicompact can occasionally fail when a compaction doesn't happen, but then does on the next loop.  Suspect this is as a result of score caching, randomisation of key grabs for scoring, plus jitter on size boundaries.

Modified test for predictability.

Plus formatting changes

* Avoid small batches

Avoid small batches due to large SQN gaps

* Rationalise tests

Two tests overlap with the new, much broader, replace_everything/1 test.  Ported over any remaining checks of interest and dropped the two tests.
2023-01-18 11:45:10 +00:00
Martin Sumner
a033e280e6
Develop 3.1 d30update (#386)
* Mas i370 patch d (#383)

* Refactor penciller memory

In high-volume tests on large key-count clusters, significant variation in the P0031 time has been seen:

TimeBucket	PatchA
a.0ms_to_1ms	18554
b.1ms_to_2ms	51778
c.2ms_to_3ms	696
d.3ms_to_5ms	220
e.5ms_to_8ms	59
f.8ms_to_13ms	40
g.13ms_to_21ms	364
h.21ms_to_34ms	277
i.34ms_to_55ms	34
j.55ms_to_89ms	17
k.89ms_to_144ms	21
l.144ms_to_233ms	31
m.233ms_to_377ms	45
n.377ms_to_610ms	52
o.610ms_to_987ms	59
p.987ms_to_1597ms	55
q.1597ms_to_2684ms	54
r.2684ms_to_4281ms	29
s.4281ms_to_6965ms	7
t.6295ms_to_11246ms	1

It is unclear why this varies so much.  The time to add to the cache appears to be minimal (but perhaps there is an issue with timing points in the code), whereas the time to add to the index is much more significant and variable.  There is also variable time when the memory is rolled (although the actual activity here appears to be minimal).

The refactoring here is two-fold:

- tidy and simplify by keeping LoopState managed within handle_call, and add more helpful dialyzer specs;

- change the update to the index to be a simple extension of a list, rather than any conversion.

This alternative version of the pmem index in unit test is orders of magnitude faster to add - and is the same order of magnitude to check.  Anticipation is that it may be more efficient in terms of memory changes.

* Compress SST index

Reduces the size of the leveled_sst index with two changes:

1 - Where there is a common prefix of tuple elements (e.g. Bucket) across the whole leveled_sst file - only the non-common part is indexed, and a function is used to compare.

2 - There is less "indexing" of the index i.e. only 1 in 16 keys are passed into the gb_trees part instead of 1 in 4

* Immediate hibernate

Reasons for delay in hibernate were not clear.

Straight after creation the process will not be in receipt of messages (must wait for the manifest to be updated), so better to hibernate now.  This also means the log PC023 provides more accurate information.

* Refactor BIC

This patch avoids the following:

- repeated replacement of the same element in the BIC (via get_kvrange), by checking presence via GET before using SET

- Stops re-reading of all elements to discover high modified date

Also there appears to have been a bug where a missing HMD for the file was required in order to add to the cache.  However, the cache may be erased without erasing the HMD - meaning that the cache could never be rebuilt.

* Use correct size in test results

erts_debug:flat_size/1 returns size in words (i.e. 8 bytes on 64-bit CPU) not bytes
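For illustration, the conversion that follows from this, using the runtime's reported word size rather than assuming 8 bytes (the example term is made up):

```
%% flat_size/1 is in words; multiply by the emulator word size for bytes.
Term = {example, lists:seq(1, 1000)},
SizeInBytes = erts_debug:flat_size(Term) * erlang:system_info(wordsize).
```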

* Don't change summary record

As it is persisted as part of the file write, any change to the summary record cannot be rolled back

* Clerk to prompt L0 write

Simplifies the logic if the clerk request work for the penciller prompts L0 writes as well as Manifest changes.

The advantage now is that if the penciller memory is full, and PUT load stops, the clerk should still be able to prompt persistence.  The penciller can therefore make use of dead time this way.

* Add push on journal compact

If there has been a backlog, followed by a quiet period - there may be a large ledger cache left unpushed.  Journal compaction events are about once per hour, so the performance overhead of a false push should be minimal, with the advantage of clearing any backlog before load starts again.

This is only relevant to Riak users with very off/full batch-type workloads.

* Extend tests

To more consistently trigger all overload scenarios

* Fix range keys smaller than prefix

Can't make the end key an empty binary in this case, as it may be bigger than any keys within the range, but will appear to be smaller.

Unit tests and ct tests added to expose the potential issue

* Tidy-up

- Remove penciller logs which are no longer called
- Get pclerk to only wait MIN_TIMEOUT after doing work, in case there is a backlog
- Remove update_levelzero_cache function as it is unique to handle_call of push_mem, and simple enough to be inline
- Align testutil slow offer with standard slow offer used

* Tidy-up

Remove pre-otp20 references.

Reinstate the check that the starting pid is still active; this was added to tidy up shutdown.

Resolve failure to run on OTP 20 due to `-if` statement

* Tidy up

Using null rather than {null, Key} is potentially clearer as it is not a concern what the Key is in this case, and removes a comparison step from the leveled_codec:endkey_passed/2 function.

There were issues with coverage in eunit tests as the leveled_pclerk shut down.  This prompted a general tidy of leveled_pclerk (remove passing of LoopState into internal functions, and add dialyzer specs).

* Remove R16 relic

* Further testing another issue

The StartKey must always be less than or equal to the prefix when the first N characters are stripped, but this is not true of the EndKey (for the query) which does not have to be between the FirstKey and the LastKey.

If the EndKey query does not match it must be greater than the Prefix (as otherwise it would not have been greater than the FirstKey) - so set to null.

* Fix unit test

Unit test had a typo - and result interpretation had a misunderstanding.

* Code and spec tidy

Also look to the cover the situation when the FirstKey is the same as the Prefix with tests.

This is, in theory, not an issue as it is the EndKey for each sublist which is indexed in leveled_tree.  However, guard against it mapping to null here, just in case there are dangers lurking (note that tests will still pass without the `M > N` guard in place).

* Hibernate on BIC complete

There are three situations when the BIC becomes complete:

- In a file created as part of a merge the BIC is learned in the merge
- After startup, files below L1 learn the block cache through reads that happen to read the block; eventually the whole cache will be read, unless...
- Either before/after the cache is complete, it can get wiped by a timeout after a get_sqn request (e.g. as prompted by a journal compaction) ... it will then be re-filled off the back of get/get-range requests.

In all these situations we want to hibernate after the BIC is full - to reflect the fact that the LoopState should now be relatively stable, so it is a good point to GC and rationalise the location of data.

Previously only the first case was covered.  Now all three are covered through the bic_complete message.

* Test all index keys have same term

This works functionally, but is not optimised (the term is replicated in the index)

* Summaries with same index term

If the summary index entries all have the same index term - only the object keys need to be indexed

* Simplify case statements

We either match the pattern of <<Prefix:N, Suffix>> or the answer should be null

* OK for M == N

If M == N for the first key, it will have a suffix of <<>>.  This will match (as expected) a query Start Key of the same size, and be smaller than any query Start Key that has the same prefix.

If the query Start Key does not match the prefix - it will be null - as it must be smaller than the Prefix (as otherwise the query Start Key would be bigger than the Last Key).

The constraint of M > N was introduced before the *_prefix_filter functions were checking the prefix, to avoid issues.  Now the prefix is being checked, then M == N is ok.

* Simplify

Correct the test to use a binary field in the range.

To avoid further issues, only apply the filter when everything is a binary() type.

* Add test for head_only mode

When leveled is used as a tictacaae key store (in parallel mode), the keys will be head_only entries.  Double check they are handled as expected like object keys

* Revert previous change - must support typed buckets

Add assertion to confirm worthwhile optimisation

* Add support for configurable cache multiple (#375)

* Mas i370 patch e (#385)

Improvement to monitoring for efficiency and improved readability of logs and stats.

As part of this, where possible, tried to avoid updating loop state on READ messages in leveled processes (as was the case when tracking stats within each process).

No performance benefits found with the change, but improved stats have helped discover other potential gains.
2022-12-18 20:18:03 +00:00
Martin Sumner
d09f5c778b Query don't copy (#380)
* Query don't copy

Queries the manifest to avoid copying the whole manifest when taking a snapshot of a penciller to run a query.

Change the logging of fold setup in the Bookie to record the actual snapshot time (rather than the uninteresting and fast-returning function which will request the snapshot).

A little tidy to avoid duplicating the ?MAX_LEVELS macro.

* Clarify log is of snapshot time not fold time

* Updates after review
2022-10-11 13:59:43 +01:00
Martin Sumner
28d3701f6e Mas i370 deletepending (#378)
* All confirmed deletions to complete when manifest is not lockable

Previously if there was ongoing work (i.e. the clerk had control over the manifest), the penciller could not confirm deletions.  Now it may confirm, and defer the required manifest update to a later date (prompted by another delete confirmation request).

* Refactor to update manifest on return of manifest, even without a delete confirmation request

Rather than waiting on next delete confirmation request

* Update src/leveled_pmanifest.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

* Missing commit

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2022-05-24 10:05:16 +01:00
Martin Sumner
234e0066e8 Mas i370 deletepending (#377)
Previously delete_confirmation was blocked on work_ongoing.

However, if the penciller has a work backlog, work_ongoing may be a recurring problem ... and some files may remain undeleted long after their use - lifetimes for L0 files in particular have been seen to rise from 10-15s to 5m+.

Letting L0 files linger can have a significant impact on memory. In put-heavy tests (e.g. when testing riak-admin transfers) the memory footprint of a Riak node has been observed peaking more than 80% above normal levels, when compared to using this patch.

This PR allows for deletes to be confirmed even when there is work ongoing, by postponing the updating of the manifest until the manifest is next returned from the clerk.

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2022-05-24 10:04:55 +01:00
Martin Sumner
f8485210ed
Mas i370 d31 sstmemory (#373)
* Don't use fetch_cache below the page_cache level

* Don't time fetches due to SQN checks

SQN checks are all background processes

* Hibernate on SQN check

SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.

* Add catch for hibernate timeout

* Scale cache_size with level

Based on volume testing.  Relatively speaking, far higher value to be gained from caches at higher levels (lower numbered levels).  The caches at lower levels are proportionally much less efficient.  So cache more at higher levels, where there is value, and less at lower levels where there is more cost relative to value.

* OTP 24 fix to cherry-pick

* Make minimal change to previous setup

Making significant change appears to not have had the expected positive improvement - so a more minimal change is proposed.

The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but no cache still has a downside.

* Introduce new types

* Mas i370 d30 sstmemory (#374)


* Don't time fetches due to SQN checks

SQN checks are all background processes

* Hibernate on SQN check

SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.

* Add catch for hibernate timeout

* Scale cache_size with level

Based on volume testing.  Relatively speaking, far higher value to be gained from caches at higher levels (lower numbered levels).  The caches at lower levels are proportionally much less efficient.  So cache more at higher levels, where there is value, and less at lower levels where there is more cost relative to value.

* Make minimal change to previous setup

Making significant change appears to not have had the expected positive improvement - so a more minimal change is proposed.

The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but no cache still has a downside.

* Introduce new types

* More memory management

Clear blockindex_cache on timeout, and manually GC on pclerk after work.

* Add further garbage collection prompt

After fetching level zero, there is a significant change in references in the penciller memory, so prompt a garbage_collect() at this point.
2022-04-23 13:38:20 +01:00
Martin Sumner
75edb7293d Revert "Don't use fetch_cache below the page_cache level"
This reverts commit 656900e9ec.
2022-03-11 11:07:01 +00:00
Martin Sumner
5eae8e441f Revert "Don't time fetches due to SQN checks"
This reverts commit fb490b9af7.
2022-03-11 11:06:58 +00:00
Martin Sumner
2e0b20a071 Revert "Hibernate on SQN check"
This reverts commit eedd09a23d.
2022-03-11 11:06:51 +00:00
Martin Sumner
eedd09a23d Hibernate on SQN check
SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.
2022-03-11 08:49:56 +00:00
Martin Sumner
fb490b9af7 Don't time fetches due to SQN checks
SQN checks are all background processes
2022-03-11 08:49:48 +00:00
Martin Sumner
656900e9ec Don't use fetch_cache below the page_cache level 2022-03-11 08:49:29 +00:00
Martin Sumner
79e0af27f6 otp25 support 2022-02-17 15:22:50 +00:00
Martin Sumner
8c4de27789 Fix spec for book_hotbackup/1
Returned function requires a backup path
2021-11-10 10:52:19 +00:00