Commit graph

26 commits

Author SHA1 Message Date
Martin Sumner
999ce8ba5b
Add ZSTD compression (#430)
* Add support for zstd and split compression

Add support for using zstd as an alternative to native and lz4.

Upgrade lz4 to v1.9.4 (with ARM enhancements).

Allow for split compression algorithms - i.e. use native on the journal, but lz4 on the ledger.
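As a minimal sketch of what split compression could look like at startup, assuming a bookie started via leveled_bookie:book_start/1 - `ledger_compression` is a hypothetical option name for illustration, not confirmed from this PR:

```erlang
%% Sketch only: different compression for journal and ledger.
{ok, Bookie} =
    leveled_bookie:book_start(
        [{root_path, "/tmp/leveled_data"},
         {compression_method, native},   % journal stays on native
         {ledger_compression, lz4}]).    % hypothetical ledger-side option
```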

* Switch to AdRoll zstd

Development appears to be active and ongoing.  No issues running on different Linux flavours.

* Use realistic bucket name

* Update README.md

* Switch branch

* Add comment following review
2024-01-23 16:25:03 +00:00
Martin Sumner
9bff70eedb Correct schema typo 2023-11-13 18:25:41 +00:00
Martin Sumner
9e804924a8
Mas d31 i416 (#418)
* Add compression controls (#417)

* Add compression controls

Add configuration options to allow for a compression algorithm of `none` to disable compression altogether.  Also an option to change the point in the LSM tree at which compression is applied.
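A minimal sketch of these controls at startup - `compression_method` is an existing leveled_bookie option, while `compression_level` is an assumed name for the LSM-tree point control described above:

```erlang
%% Sketch only: `none` disables compression; compression_level is a
%% hypothetical name for the LSM-tree level control.
{ok, Bookie} =
    leveled_bookie:book_start(
        [{root_path, "/tmp/leveled_data"},
         {compression_method, none},
         {compression_level, 4}]).
```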

* Handle configurable defaults consistently

Move them into leveled.hrl.  This forces double-definitions to be resolved.

There are some other constants in leveled_bookie that are relevant outside of leveled_bookie.  These are all now in the non-configurable startup defaults section.
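The pattern, sketched with an invented macro name and value:

```erlang
%% leveled.hrl - one shared definition
-define(CACHE_SIZE, 2500).   % illustrative name and value

%% leveled_bookie.erl (and any module needing the default)
-include("leveled.hrl").
%% A second -define(CACHE_SIZE, ...) in any module that includes the
%% header is now a compile-time error, so double-definitions must be
%% resolved in one place.
```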

* Clarify referred-to default is OTP not leveled

* Update leveled_bookie.erl

Handle xref issue with eunit include
2023-11-07 14:58:43 +00:00
Martin Sumner
a033e280e6
Develop 3.1 d30update (#386)
* Mas i370 patch d (#383)

* Refactor penciller memory

In high-volume tests on large key-count clusters, significant variation in the P0031 time has been seen:

TimeBucket	PatchA
a.0ms_to_1ms	18554
b.1ms_to_2ms	51778
c.2ms_to_3ms	696
d.3ms_to_5ms	220
e.5ms_to_8ms	59
f.8ms_to_13ms	40
g.13ms_to_21ms	364
h.21ms_to_34ms	277
i.34ms_to_55ms	34
j.55ms_to_89ms	17
k.89ms_to_144ms	21
l.144ms_to_233ms	31
m.233ms_to_377ms	45
n.377ms_to_610ms	52
o.610ms_to_987ms	59
p.987ms_to_1597ms	55
q.1597ms_to_2684ms	54
r.2684ms_to_4281ms	29
s.4281ms_to_6965ms	7
t.6965ms_to_11246ms	1

It is unclear why this varies so much.  The time to add to the cache appears to be minimal (but perhaps there is an issue with timing points in the code), whereas the time to add to the index is much more significant and variable.  There is also variable time when the memory is rolled (although the actual activity here appears to be minimal).

The refactoring here is two-fold:

- tidy and simplify by keeping LoopState managed within handle_call, and add more helpful dialyzer specs;

- change the update to the index to be a simple extension of a list, rather than any conversion.

In unit tests, this alternative version of the pmem index is orders of magnitude faster to add to - and the same order of magnitude to check.  The anticipation is that it may also be more efficient in terms of memory changes.
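A sketch of the list-extension idea - illustrative, not the actual leveled_pmem code:

```erlang
%% Adding to the index is a constant-time cons; no conversion of the
%% accumulated structure is performed on update.
add_to_index(NewSlotEntries, Index) ->
    [NewSlotEntries | Index].

%% Checking walks the (short) list of per-slot entries.
check_index(Hash, Index) ->
    lists:any(fun(Slot) -> lists:member(Hash, Slot) end, Index).
```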

* Compress SST index

Reduces the size of the leveled_sst index with two changes:

1 - Where there is a common prefix of tuple elements (e.g. Bucket) across the whole leveled_sst file - only the non-common part is indexed, and a function is used to compare.

2 - There is less "indexing" of the index i.e. only 1 in 16 keys is passed into the gb_trees part instead of 1 in 4 (see the sketch below)
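A sketch of the sparser indexing in change 2 - illustrative only, not the actual leveled_sst code:

```erlang
%% Only every 16th key is inserted into the gb_trees part; a lookup
%% lands on the nearest indexed key and scans the short tail linearly.
build_sparse_index(Keys) ->
    {_, Tree} =
        lists:foldl(
            fun(K, {I, T}) when I rem 16 =:= 0 ->
                    {I + 1, gb_trees:insert(K, I, T)};
               (_K, {I, T}) ->
                    {I + 1, T}
            end,
            {0, gb_trees:empty()},
            Keys),
    Tree.
```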

* Immediate hibernate

Reasons for delay in hibernate were not clear.

Straight after creation the process will not be in receipt of messages (must wait for the manifest to be updated), so better to hibernate now.  This also means the log PC023 provides more accurate information.
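In OTP terms this is just a hibernate return from the callback - a sketch, with the call shape and helper invented for illustration:

```erlang
%% Straight after the file process has built its summary there will be
%% no messages until the manifest change completes, so hibernate now.
handle_call({sst_open, Filename}, _From, State) ->
    UpdState = open_file(Filename, State),   % open_file/2 is illustrative
    {reply, ok, UpdState, hibernate}.
```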

* Refactor BIC

This patch avoids the following:

- repeated replacement of the same element in the BIC (via get_kvrange), by checking presence via GET before using SET

- Stops re-reading of all elements to discover high modified date

Also there appears to have been a bug whereby the HMD for the file had to be missing in order to add to the cache.  However, now the cache may be erased without erasing the HMD.  This means that the cache can never be rebuilt.
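A sketch of the GET-before-SET check - the cache shape (an array) and names are assumptions for illustration:

```erlang
maybe_cache_block(BlockIdx, Block, BIC) ->
    case array:get(BlockIdx, BIC) of
        undefined -> array:set(BlockIdx, Block, BIC); % SET only when absent
        _Cached   -> BIC                              % skip the repeated replacement
    end.
```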

* Use correct size in test results

erts_debug:flat_size/1 returns the size in words (i.e. 8 bytes per word on a 64-bit CPU), not bytes
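The conversion is a single multiplication:

```erlang
%% erts_debug:flat_size/1 counts words; multiply by the emulator word
%% size (8 on a 64-bit CPU) to report bytes.
flat_size_in_bytes(Term) ->
    erts_debug:flat_size(Term) * erlang:system_info(wordsize).
```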

* Don't change summary record

As it is persisted as part of the file write, any change to the summary record cannot be rolled back

* Clerk to prompt L0 write

Simplifies the logic if the clerk's request for work from the penciller prompts L0 writes as well as Manifest changes.

The advantage now is that if the penciller memory is full, and PUT load stops, the clerk should still be able to prompt persistence.  The penciller can therefore make use of dead time this way.

* Add push on journal compact

If there has been a backlog, followed by a quiet period - there may be a large ledger cache left unpushed.  Journal compaction events are about once per hour, so the performance overhead of a false push should be minimal, with the advantage of clearing any backlog before load starts again.

This is only relevant to Riak users with very off/full batch-type workloads.

* Extend tests

To more consistently trigger all overload scenarios

* Fix range keys smaller than prefix

Can't make the end key an empty binary in this case, as it may be bigger than any keys within the range, but will appear to be smaller.

Unit tests and ct tests added to expose the potential issue
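The underlying ordering fact, as a shell sketch:

```erlang
%% A binary that is a strict prefix of another sorts first in Erlang
%% term order, so an empty end key would appear smaller than every key
%% in the range it is meant to bound from above.
1> <<>> < <<"anykey">>.
true
```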

* Tidy-up

- Remove penciller logs which are no longer called
- Get pclerk to only wait MIN_TIMEOUT after doing work, in case there is a backlog
- Remove update_levelzero_cache function as it is unique to handle_call of push_mem, and simple enough to be inline
- Align testutil slow offer with the standard slow offer used

* Tidy-up

Remove pre-otp20 references.

Reinstate the check that the starting pid is still active; this was added to tidy up shutdown.

Resolve failure to run on otp20 due to `-if` statement

* Tidy up

Using null rather than {null, Key} is potentially clearer, as it is not a concern what the Key is in this case, and it removes a comparison step from the leveled_codec:endkey_passed/2 function.
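The ordering this relies on, as a shell sketch - the key tuple shape is illustrative:

```erlang
%% Atoms sort before tuples in Erlang term order, so null is always
%% "before" any concrete key tuple, without inspecting the Key at all.
1> null < {o, <<"Bucket">>, <<"Key">>, null}.
true
```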

There were issues with coverage in eunit tests as the leveled_pclerk shut down.  This prompted a general tidy of leveled_pclerk (remove passing of LoopState into internal functions, and add dialyzer specs).

* Remove R16 relic

* Further testing another issue

The StartKey must always be less than or equal to the prefix when the first N characters are stripped, but this is not true of the EndKey (for the query), which does not have to be between the FirstKey and the LastKey.

If the EndKey of the query does not match, it must be greater than the Prefix (as otherwise it would not have been greater than the FirstKey) - so set it to null.

* Fix unit test

Unit test had a typo - and result interpretation had a misunderstanding.

* Code and spec tidy

Also look to the cover the situation when the FirstKey is the same as the Prefix with tests.

This is, in theory, not an issue as it is the EndKey for each sublist which is indexed in leveled_tree.  However, guard against it mapping to null here, just in case there are dangers lurking (note that tests will still pass without the `M > N` guard in place).

* Hibernate on BIC complete

There are three situations when the BIC becomes complete:

- In a file created as part of a merge, the BIC is learned in the merge
- After startup, files below L1 learn the block cache through reads that happen to read the block; eventually the whole cache will be read, unless...
- Either before/after the cache is complete, it can get wiped by a timeout after a get_sqn request (e.g. as prompted by a journal compaction) ... it will then be re-filled off the back of get/get-range requests.

In all these situations we want to hibernate after the BIC is full - to reflect the fact that the LoopState should now be relatively stable, so it is a good point to GC and rationalise the location of data.

Previously only the first base was covered.  Now all three are covered through the bic_complete message.
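A sketch of the shape - the bic_complete message name is from this commit, while the handler and state field are illustrative:

```erlang
handle_cast(bic_complete, State) ->
    %% LoopState is now relatively stable - hibernate to GC and
    %% rationalise where the data is held.
    {noreply, State#state{bic_complete = true}, hibernate}.
```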

* Test all index keys have same term

This works functionally, but is not optimised (the term is replicated in the index)

* Summaries with same index term

If the entries in the summary index all have the same index term - only the object keys need to be indexed

* Simplify case statements

We either match the pattern of <<Prefix:N, Suffix>> or the answer should be null
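Sketched as a single function (illustrative, not the actual leveled_sst code):

```erlang
strip_to_suffix(Prefix, Key) when is_binary(Prefix), is_binary(Key) ->
    N = byte_size(Prefix),
    case Key of
        <<Prefix:N/binary, Suffix/binary>> -> Suffix; % shares the prefix
        _ -> null                                     % everything else is null
    end.
```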

* OK for M == N

If M = N for the first key, it will have a suffix of <<>>.  This will match (as expected) a query Start Key of the same size, and be smaller than any query Start Key that has the same prefix.

If the query Start Key does not match the prefix - it will be null - as it must be smaller than the Prefix (as otherwise the query Start Key would be bigger than the Last Key).

The constraint of M > N was introduced before the *_prefix_filter functions were checking the prefix, to avoid issues.  Now the prefix is being checked, then M == N is ok.

* Simplify

Correct the test to use a binary field in the range.

To avoid further issues, only apply the filter when everything is of binary() type.

* Add test for head_only mode

When leveled is used as a tictacaae key store (in parallel mode), the keys will be head_only entries.  Double-check they are handled as expected, like object keys

* Revert previous change - must support typed buckets

Add assertion to confirm worthwhile optimisation

* Add support for configurable cache multiple (#375)

* Mas i370 patch e (#385)

Improvement to monitoring for efficiency and improved readability of logs and stats.

As part of this, where possible, tried to avoid updating loop state on READ messages in leveled processes (as was the case when tracking stats within each process).

No performance benefits found with change, but improved stats has helped discover other potential gains.
2022-12-18 20:18:03 +00:00
Martin Sumner
f5ba2fc1b9 Amend defaults (#367)
* Amend defaults

3.0.9 testing has used different defaults for cache_size and penciller_cache_size.  There has been no noticeable drop in the performance of Riak using these defaults, so adopting here.

The new defaults have a lower memory requirement per vnode, which is useful as recent changes and performance test results are changing the standard recommendations for ring_size.

It is now preferred to choose larger ring_sizes by default (e.g. 256 for 6 - 12 nodes, 512 for 12 - 20 nodes, 1024 for 20+).

By choosing larger ring sizes, the benefits of scaling up a cluster will continue to make a difference even as the node count goes beyond the recommended "correct" setting.  e.g. It might be reasonable (depending on hardware choices) to grow a ring_size=512 cluster to beyond 30 nodes, yet still be relatively efficient and performant at 8 nodes.
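For reference, a sketch of overriding the two cache settings discussed above when starting a bookie - option names as used by leveled_bookie, with placeholder values rather than the new defaults adopted here:

```erlang
{ok, Bookie} =
    leveled_bookie:book_start(
        [{root_path, "/tmp/leveled_data"},
         {cache_size, 2000},                 % ledger cache (placeholder value)
         {max_pencillercachesize, 20000}]).  % penciller cache (placeholder value)
```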

* Comment correction
2021-11-01 19:30:20 +00:00
Martin Sumner
182395ee00 Schema clarifications 2020-12-09 09:18:38 +00:00
Martin Sumner
a210aa6846 Promote cache when scanning
When scanning over a leveled store with a helper (e.g. segment filter and last modified date range), applying the filter will speed up the query when the block index cache is available to get_slots.

Previously, if it was not available, leveled_sst did not then promote the cache after it had accessed the underlying blocks.

Now the code does this, and also when the cache has all been added, it extracts the largest last modified date so that sst files older than the passed-in date can be immediately dismissed.
2020-12-02 13:29:50 +00:00
Martin Sumner
be562c85cb Don't hide option 2020-11-27 03:12:35 +00:00
Martin Sumner
b4c79caf7a Allow for caching of compaction scores
Potentially reduce the overheads of scoring each file on every run.

The change also alters the default thresholds for compaction to favour longer runs (which will tend towards greater storage efficiency).
2020-11-27 02:35:27 +00:00
Martin Sumner
57d73fc548 Make page cache level configurable 2019-07-25 12:23:10 +01:00
Martin Sumner
dab9652f6c Add ability to control journal size by object count
This helps when there are files with large numbers of key deltas (and hence small values), where otherwise the object count may get out of control.
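A sketch of capping the journal both ways - option names believed to match leveled_bookie, with illustrative values:

```erlang
{ok, Bookie} =
    leveled_bookie:book_start(
        [{root_path, "/tmp/leveled_data"},
         {max_journalsize, 1000000000},      % size cap in bytes
         {max_journalobjectcount, 200000}]). % new count-based cap
```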
2019-07-25 09:45:23 +01:00
Martin Sumner
fd2e0e870c Align cache with default
Stop L0 from growing too large, i.e. 127 * 4K is more than 10 times bigger than the default.
2019-02-25 23:35:12 +00:00
Martin Sumner
10f250e4be Update leveled.schema
To set log level
2019-02-13 14:36:45 +00:00
Martin Sumner
32ea4380b2 correct defaults 2018-12-18 11:22:25 +00:00
Martin Sumner
8ccb985775 Typo 2018-12-14 22:43:21 +00:00
Martin Sumner
5b578ea2bb Add new snapshot timeout to multi-backend 2018-12-14 19:57:32 +00:00
Martin Sumner
e6d868f8cd Merge branch 'master' into mas-i233-multibackend 2018-12-14 19:55:34 +00:00
Martin Sumner
ceea196cc0 Update priv/leveled.schema
The snapshot timeouts would not normally be changed - so make them hidden
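Hiding a setting in a cuttlefish schema such as priv/leveled.schema is a mapping attribute - a sketch with an illustrative mapping name and target:

```erlang
{mapping, "multi_backend.$name.leveled.snapshot_timeout_short",
          "riak_kv.multi_backend", [
    {datatype, integer},
    hidden                     % keeps the setting out of the generated config
]}.
```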
2018-12-14 14:27:44 +00:00
Martin Sumner
ef068326a0 Update priv/leveled.schema 2018-12-14 14:13:44 +00:00
Martin Sumner
45f62ffa56 Make all multi schema hidden 2018-12-13 16:24:33 +00:00
Martin Sumner
f211e587f7 Make multi_schema mainly hidden
As with other schemas - make the multi_backend schema mainly hidden
2018-12-12 16:16:47 +00:00
Martin Sumner
52c523dbab Add multi_backend schema 2018-12-12 09:56:06 +00:00
Martin Sumner
66a75923e8 Expose configuration of Compaction Percentage targets
I think these should be set differently, so make them configurable.
2018-07-23 12:46:42 +01:00
Martin Sumner
d60725ad1b Add schema document 2018-06-08 13:23:45 +01:00
Martin Sumner
95aa1c632b Comment change 2018-06-07 16:56:10 +01:00
Martin Sumner
feb7149a8d Add schema file 2016-11-26 22:10:51 +00:00