%% -------- LOG ---------
%%
%% Centralised logging, to make it easier to change the log implementation.
%%
%% The use of a ?LOGBASE map is a personal preference. With formatting of code
%% to maximum column widths, I prefer not to have log text within the code
%% itself, as there may be a temptation to make log text misleadingly terse to
%% keep the code readable.
%%
%% This means that logger's capability to add actual code line references to
%% the log becomes moot - as all logs are emitted from the same code line.
%% However, the approach does enforce the use of log references, which provide
%% a simple way to search the code and find where the log occurred. This can
%% be helpful when frequent code changes move line numbers between releases
%% (whereas log references stay stable). The log references themselves can
%% also be helpful when optimising query times in splunk-like log indexing
%% tools.
%%
%% There are overheads with this approach (e.g. the maps:get/2 call for each
%% log). However, eprof testing of leveled indicates that this overhead is not
%% significant relative to other costs.
-module(leveled_log).

-include_lib("kernel/include/logger.hrl").

-export([log/2, log_timer/3, log_randomtimer/4]).

-export([log/5, log_timer/6]).

-export([set_loglevel/1,
         set_databaseid/1,
         add_forcedlogs/1,
         remove_forcedlogs/1,
         get_opts/0,
         save/1,
         return_settings/0]).
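
%% A hedged usage sketch for the configuration exports above (the function
%% names are real exports of this module; the behaviour described is an
%% assumption based on those names):
%%
%%   %% Lower the log level, but force specific timing logs through anyway
%%   leveled_log:set_loglevel(error),
%%   leveled_log:add_forcedlogs([b0015, b0016]),
%%   %% Tag subsequent logs from this store with a database id
%%   leveled_log:set_databaseid(1).
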
-record(log_options,
        {log_level = info :: log_level(),
         forced_logs = [] :: [atom()],
         database_id = 0 :: non_neg_integer()}).
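
%% A hedged sketch of how these options may be carried into another process
%% using the exports get_opts/0 and save/1 (the exact mechanism, and the
%% do_work/0 worker, are assumptions for illustration):
%%
%%   LogOpts = leveled_log:get_opts(),
%%   spawn(fun() -> leveled_log:save(LogOpts), do_work() end).
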
-type log_level() :: debug | info | warning | error | critical.
-type log_options() :: #log_options{}.
-type log_base() :: #{atom() => {log_level(), binary()}}.

-export_type([log_options/0, log_level/0, log_base/0]).

-define(LOG_LEVELS, [debug, info, warning, error, critical]).
-define(DEFAULT_LOG_LEVEL, error).
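
%% A hedged sketch of how the ordered ?LOG_LEVELS list can be used to filter
%% logs: a log is active when its level sits at or after the configured level
%% in the list (is_active_level/2 is a hypothetical helper, not a documented
%% export):
%%
%%   is_active_level(Level, Configured) ->
%%       Active =
%%           lists:dropwhile(fun(L) -> L =/= Configured end, ?LOG_LEVELS),
%%       lists:member(Level, Active).
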
-define(LOGBASE,
    #{
        g0001 =>
            {info, <<"Generic log point">>},
        g0002 =>
            {info, <<"Generic log point with term ~w">>},
        d0001 =>
            {info, <<"Generic debug log">>},
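        %% The generic entries above allow ad-hoc logging without adding a
        %% new reference; a hedged usage sketch (log/2 is a real export of
        %% this module):
        %%   leveled_log:log(g0001, []),
        %%   leveled_log:log(g0002, [Term]).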
        b0001 =>
            {info, <<"Bookie starting with Ink ~w Pcl ~w">>},
        b0002 =>
            {info, <<"Snapshot starting with Ink ~w Pcl ~w">>},
        b0003 =>
            {info, <<"Bookie closing for reason ~w">>},
        b0004 =>
            {warning, <<"Bookie snapshot exiting as master store ~w is down for reason ~p">>},
        b0005 =>
            {info, <<"LedgerSQN=~w at startup">>},
        b0006 =>
            {info, <<"Reached end of load batch with SQN ~w">>},
        b0008 =>
            {info, <<"Bucket list finds no more results">>},
        b0009 =>
            {debug, <<"Bucket list finds Bucket ~w">>},
        b0011 =>
            {warning, <<"Call to destroy the store and so all files to be removed">>},
        b0013 =>
            {warning, <<"Long running task took ~w microseconds with task_type=~w">>},
        b0015 =>
            {info, <<"Put timing with sample_count=~w ink_time=~w prep_time=~w mem_time=~w with total_object_size=~w with sample_period=~w seconds">>},
        b0016 =>
            {info, <<"Get timing with sample_count=~w and head_time=~w body_time=~w with fetch_count=~w with sample_period=~w seconds">>},
        b0017 =>
            {info, <<"Snapshot timing with sample_count=~w and bookie_time=~w pcl_time=~w with sample_period=~w seconds">>},
        b0018 =>
            {info, <<"Positive HEAD responses timed with sample_count=~w and cache_count=~w found_count=~w fetch_ledger_time=~w fetch_ledgercache_time=~w rsp_time=~w notfound_time=~w with sample_period=~w seconds">>},
        b0019 =>
            {warning, <<"Use of book_indexfold with constraint of Bucket ~w with no StartKey is deprecated">>},
        b0020 =>
            {warning, <<"Ratio of penciller cache size ~w to bookie's memory cache size ~w is larger than expected">>},
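    %% r* log references (object folds)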
    r0001 =>
        {debug, <<"Object fold to process batch of ~w objects">>},
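    %% Penciller (p*) log references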
    p0001 =>
        {debug, <<"Ledger snapshot ~w registered">>},
    p0003 =>
        {debug, <<"Ledger snapshot ~w released">>},
    p0004 =>
        {debug, <<"Remaining ledger snapshots are ~w">>},
    p0005 =>
        {debug, <<"Delete confirmed as file ~s is removed from Manifest">>},
    p0007 =>
        {debug, <<"Shutdown complete for cloned Penciller for reason ~w">>},
    p0008 =>
        {info, <<"Penciller closing for reason ~w">>},
    p0010 =>
        {info, <<"level zero discarded_count=~w on close of Penciller">>},
    p0011 =>
        {debug, <<"Shutdown complete for Penciller for reason ~w">>},
    p0012 =>
        {info, <<"Store to be started based on manifest sequence number of ~w">>},
    p0013 =>
        {info, <<"Sequence number of 0 indicates no valid manifest">>},
    p0014 =>
        {info, <<"Maximum sequence number of ~w found in nonzero levels">>},
    p0015 =>
        {info, <<"L0 file found ~s">>},
    p0016 =>
        {info, <<"L0 file had maximum sequence number of ~w">>},
    p0017 =>
        {info, <<"No L0 file found">>},
    p0018 =>
        {info, <<"Response to push_mem returned with cache_size=~w L0_pending=~w merge_backlog=~w cachelines_full=~w">>},
    p0019 =>
        {info, <<"Rolling level zero to filename ~s at ledger sqn ~w">>},
    p0024 =>
        {info, <<"Outstanding compaction work items of ~w with backlog status of ~w L0 full ~w">>},
    p0029 =>
        {info, <<"L0 completion confirmed and will transition to not pending">>},
    p0030 =>
        {warning, <<"We're doomed - intention recorded to destroy all files">>},
    p0031 =>
        {info, <<"Completion of update to levelzero with cache_size=~w level0_due=~w change_pending=~w MinSQN=~w MaxSQN=~w">>},
    p0032 =>
        {info, <<"Fetch head timing with sample_count=~w and level timings of foundmem_time=~w found0_time=~w found1_time=~w found2_time=~w found3_time=~w foundlower_time=~w missed_time=~w with counts of foundmem_count=~w found0_count=~w found1_count=~w found2_count=~w found3_count=~w foundlower_count=~w missed_count=~w with sample_period=~w seconds">>},
    p0033 =>
        {error, <<"Corrupted manifest file at path ~s to be ignored due to error ~s">>},
    p0035 =>
        {info, <<"Startup with Manifest SQN of ~w">>},
    p0037 =>
        {debug, <<"Merging of penciller L0 tree from size ~w complete">>},
    p0038 =>
        {info, <<"Timeout of snapshot with pid=~w at SQN=~w at TS ~w set to timeout=~w">>},
    p0039 =>
        {debug, <<"Failed to release pid=~w leaving SnapshotCount=~w and MinSQN=~w">>},
    p0040 =>
        {info, <<"Archiving filename ~s as unused at startup">>},
    p0041 =>
        {info, <<"Penciller manifest switched from SQN ~w to ~w">>},
    p0042 =>
        {info, <<"Deferring shutdown due to snapshot_count=~w">>},
    pc001 =>
        {info, <<"Penciller's clerk ~w started with owner ~w">>},
    pc005 =>
        {info, <<"Penciller's Clerk ~w shutdown now complete for reason ~w">>},
    pc007 =>
        {debug, <<"Clerk prompting Penciller regarding manifest change">>},
    pc008 =>
        {info, <<"Merge from level ~w to merge into ~w files below">>},
    pc009 =>
        {debug, <<"File ~s to simply switch levels to level ~w">>},
    pc010 =>
        {info, <<"Merge to be commenced for FileToMerge=~s with MSN=~w">>},
    pc011 =>
        {info, <<"Merge completed with MSN=~w to Level=~w and FileCounter=~w">>},
    pc012 =>
        {debug, <<"File to be created as part of MSN=~w Filename=~s IsBasement=~w">>},
    pc013 =>
2024-09-06 11:18:24 +01:00
{ warning , < < " Merge resulted in empty file ~s " > > } ,
2022-12-18 20:18:03 +00:00
    pc015 =>
        {info, <<"File created">>},
    pc016 =>
        {info, <<"Slow fetch from SFT ~w of ~w us at level ~w with result ~w">>},
    pc017 =>
        {debug, <<"Notified clerk of manifest change">>},
    pc018 =>
        {info, <<"Saved manifest file">>},
    pc019 =>
        {debug, <<"After ~s level ~w is ~w">>},
    pc021 =>
        {debug, <<"Prompting deletions at ManifestSQN=~w">>},
    pc022 =>
        {debug, <<"Storing reference to deletions at ManifestSQN=~w">>},
    pc023 =>
        {info, <<"At level=~w file_count=~w avg_mem=~w file with most memory fn=~s p=~w mem=~w">>},
    pc024 =>
        {info, <<"Grooming compaction picked file with tomb_count=~w">>},
2023-03-13 11:46:08 +00:00
    pc025 =>
        {info, <<"At level=~w file_count=~w average words for heap_block_size=~w heap_size=~w recent_size=~w bin_vheap_size=~w">>},
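    %% Note: the pc025 sizes above are reported in words (8 bytes on a
    %% 64-bit VM), not bytes.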
2022-12-18 20:18:03 +00:00
    pm002 =>
        {info, <<"Completed dump of L0 cache to list of l0cache_size=~w">>},
    sst03 =>
        {info, <<"Opening SST file with filename ~s slot_count=~w and max sqn ~w">>},
    sst04 =>
        {debug, <<"Exit called for reason ~w on filename ~s">>},
    sst05 =>
2024-09-06 11:18:24 +01:00
{ warning , < < " Rename rogue filename ~s to ~s " > > } ,
2022-12-18 20:18:03 +00:00
    sst06 =>
        {debug, <<"File ~s has been set for delete">>},
    sst07 =>
        {info, <<"Exit called and now clearing ~s">>},
    sst08 =>
        {info, <<"Completed creation of ~s at level ~w with max sqn ~w">>},
    sst09 =>
2024-09-06 11:18:24 +01:00
{ warning , < < " Read request exposes slot with bad CRC " > > } ,
2022-12-18 20:18:03 +00:00
    sst10 =>
        {debug, <<"Expansion sought to support pointer to pid ~w status ~w">>},
    sst11 =>
        {info, <<"Level zero creation timings in microseconds pmem_fetch=~w merge_lists=~w build_slots=~w build_summary=~w read_switch=~w">>},
    sst12 =>
        {info, <<"SST Timings at level=~w for sample_count=~w at timing points notfound_time=~w fetchcache_time=~w slotcached_time=~w slotnoncached_time=~w exiting at points notfound_count=~w fetchcache_count=~w slotcached_count=~w slotnoncached_count=~w with sample_period=~w seconds">>},
    sst13 =>
        {info, <<"SST merge list build timings of fold_toslot=~w slot_hashlist=~w slot_serialise=~w slot_finish=~w is_basement=~w level=~w">>},
    sst14 =>
        {debug, <<"File ~s has completed BIC">>},
    i0001 =>
        {info, <<"Unexpected failure to fetch value for Key=~w SQN=~w with reason ~w">>},
    i0002 =>
        {debug, <<"Journal snapshot ~w registered at SQN ~w">>},
    i0003 =>
        {debug, <<"Journal snapshot ~w released">>},
    i0004 =>
        {info, <<"Remaining number of journal snapshots is ~w">>},
    i0005 =>
        {info, <<"Inker closing journal for reason ~w">>},
    i0006 =>
        {info, <<"Close triggered with journal_sqn=~w and manifest_sqn=~w">>},
    i0007 =>
        {info, <<"Inker manifest when closing is:">>},
    i0008 =>
        {info, <<"Put to new active journal required roll and manifest write">>},
    i0009 =>
        {info, <<"Updated manifest on startup:">>},
    i0010 =>
        {info, <<"Unchanged manifest on startup:">>},
    i0011 =>
        {info, <<"Manifest is empty, starting from manifest SQN 1">>},
    i0012 =>
        {info, <<"Head manifest entry ~s is complete so new active journal required">>},
    i0013 =>
        {info, <<"File ~s to be removed from manifest">>},
    i0014 =>
        {info, <<"On startup loading from filename ~s from SQN ~w">>},
    i0015 =>
        {info, <<"Opening manifest file at ~s with SQN ~w">>},
    i0016 =>
        {info, <<"Writing new version of manifest for manifestSQN=~w">>},
    i0017 =>
        {debug, <<"At SQN=~w journal has filename ~s">>},
    i0018 =>
2024-09-06 11:18:24 +01:00
{ warning , < < " We're doomed - intention recorded to destroy all files " > > } ,
2022-12-18 20:18:03 +00:00
    i0020 =>
        {info, <<"Journal backup completed to path=~s with file_count=~w">>},
    i0021 =>
        {info, <<"Ignoring filename=~s with SQN=~w and JournalSQN=~w">>},
    i0022 =>
        {info, <<"Removing filename=~s from backup folder as not in backup">>},
    i0023 =>
        {info, <<"Backup commencing into folder with ~w existing files">>},
    i0024 =>
        {info, <<"Prompted roll at NewSQN=~w">>},
    i0025 =>
2024-09-06 11:18:24 +01:00
{ warning , < < " Journal SQN of ~w is below Ledger SQN of ~w anti-entropy will be required " > > } ,
2023-10-03 18:30:40 +01:00
    i0026 =>
        {info, <<"Deferring shutdown due to snapshot_count=~w">>},
2023-11-08 09:18:01 +00:00
    i0027 =>
        {debug, <<"Shutdown complete for cloned Inker for reason ~w">>},
    i0028 =>
        {debug, <<"Shutdown complete for Inker for reason ~w">>},
2022-12-18 20:18:03 +00:00
    ic001 =>
        {info, <<"Closed for reason ~w so maybe leaving garbage">>},
    ic002 =>
        {info, <<"Clerk updating Inker as compaction complete of ~w files">>},
    ic003 =>
        {info, <<"Scoring of compaction runs complete with highest score=~w with run of run_length=~w">>},
    ic004 =>
        {info, <<"Score=~w with mean_byte_jump=~w for filename ~s">>},
    ic005 =>
        {info, <<"Compaction to be performed on file_count=~w with compaction_score=~w">>},
    ic006 =>
        {info, <<"Filename ~s is part of compaction run">>},
    ic007 =>
        {info, <<"Clerk has completed compaction process">>},
    ic008 =>
        {info, <<"Compaction source ~s has yielded ~w positions">>},
    ic009 =>
        {info, <<"Generate journal for compaction with filename ~s">>},
    ic010 =>
        {info, <<"Clearing journal with filename ~s">>},
    ic011 =>
        {info, <<"Not clearing filename ~s as modified delta is only ~w seconds">>},
    ic012 =>
2024-09-06 11:18:24 +01:00
{ warning , < < " Tag ~w not found in Strategy ~w - maybe corrupted " > > } ,
    ic013 =>
{ warning , < < " File with name ~s to be ignored in manifest as scanning for first key returned empty - maybe corrupted " > > } ,
    ic014 =>
        {info, <<"Compaction to be run with strategy ~w and max_run_length ~w">>},
    cdb01 =>
        {info, <<"Opening file for writing with filename ~s">>},
    cdb02 =>
        {info, <<"Opening file for reading with filename ~s">>},
    cdb03 =>
        {info, <<"Re-opening file for reading with filename ~s">>},
    cdb04 =>
        {info, <<"Deletion confirmed for file ~s at ManifestSQN ~w">>},
    cdb05 =>
        {info, <<"Closing of filename ~s from state ~w for reason ~w">>},
    cdb06 =>
{ warning , < < " File to be truncated at last position of ~w with end of file at ~w " > > } ,
    cdb07 =>
        {info, <<"Hashtree index computed">>},
    cdb08 =>
        {info, <<"Renaming file from ~s to ~s for which existence is ~w">>},
    cdb09 =>
        {info, <<"Failure to read Key/Value at Position ~w in scan - this may be the end of the file">>},
    cdb10 =>
{ warning , < < " CRC check failed due to error= ~s " > > } ,
    cdb12 =>
        {info, <<"Hashtree index written">>},
    cdb13 =>
        {debug, <<"Write options of ~w">>},
    cdb14 =>
        {info, <<"Microsecond timings for hashtree build of to_list=~w sort=~w build=~w">>},
    cdb15 =>
        {info, <<"Collision in search for hash ~w">>},
    cdb18 =>
        {info, <<"Handled return and write of hashtable">>},
    cdb19 =>
        {info, <<"Sample timings in microseconds for sample_count=~w with totals of cycle_count=~w index_time=~w read_time=~w with sample_period=~w seconds">>},
    cdb20 =>
{ warning , < < " Error ~w caught when safe reading a file to length ~w " > > } ,
    cdb21 =>
{ warning , < < " File ~s to be deleted but already gone " > > }
}).
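
%% An illustrative sketch of how an entry in the ?LOGBASE map above is
%% shaped: each log reference maps to a {Level, Format} pair, where the
%% format binary can be rendered with io_lib:format/2 once substitutions
%% are supplied. demo_render/2 is a hypothetical helper, named here for
%% illustration only and not part of the module's API:
%%
%%   demo_render(LogRef, Subs) ->
%%       {Level, Format} = maps:get(LogRef, ?LOGBASE),
%%       {Level, lists:flatten(io_lib:format(binary_to_list(Format), Subs))}.
%%
%% For example, demo_render(cdb04, ["journal1.cdb", 42]) would return the
%% info-level deletion-confirmation text with the substitutions applied.
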
%%%============================================================================
%%% Manage Log Options
%%%============================================================================
-spec set_loglevel(log_level()) -> ok.
%% @doc
%% Set the log level for this PID
set_loglevel(LogLevel) when is_atom(LogLevel) ->
    LO = get_opts(),
    UpdLO = LO#log_options{log_level = LogLevel},
    save(UpdLO).
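
%% Example (illustrative, not from the original source): raise the log level
%% for the calling process so that info logs are suppressed unless forced:
%%   ok = leveled_log:set_loglevel(warning).
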
-spec set_databaseid(non_neg_integer()) -> ok.
%% @doc
%% Set the Database ID for this PID
set_databaseid(DBid) when is_integer(DBid) ->
    LO = get_opts(),
    UpdLO = LO#log_options{database_id = DBid},
    save(UpdLO).
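
%% Example (illustrative assumption - the motivation of distinguishing
%% multiple stores is not stated in this module): tag logs from this
%% process with a database ID:
%%   ok = leveled_log:set_databaseid(1).
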
-spec add_forcedlogs(list(atom())) -> ok.
%% @doc
%% Add a forced log to the list of forced logs. This will cause a log with
%% this log reference to be logged even if the log_level of the process would
%% not otherwise require it to be logged - e.g. to fire an 'INFO' log when
%% running at an 'ERROR' log level.
add_forcedlogs(LogRefs) ->
    LO = get_opts(),
    ForcedLogs = LO#log_options.forced_logs,
    UpdLO = LO#log_options{forced_logs = lists:usort(LogRefs ++ ForcedLogs)},
    save(UpdLO).
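
%% A hedged usage sketch (the log reference below is illustrative, and may not
%% exist in ?LOGBASE):
%%
%%   ok = leveled_log:set_loglevel(error),
%%   ok = leveled_log:add_forcedlogs([b0001]),
%%   %% b0001 logs now fire despite the 'error' level; calling
%%   %% remove_forcedlogs([b0001]) restores the standard behaviour.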
-spec remove_forcedlogs(list(atom())) -> ok.
%% @doc
%% Remove a forced log from the list of forced logs.
remove_forcedlogs(LogRefs) ->
    LO = get_opts(),
    ForcedLogs = LO#log_options.forced_logs,
    UpdLO = LO#log_options{forced_logs = lists:subtract(ForcedLogs, LogRefs)},
    save(UpdLO).
-spec save(log_options()) -> ok.
%% @doc
%% Save the log options to the process dictionary.
save(#log_options{} = LO) ->
    put('$leveled_log_options', LO),
    ok.
-spec get_opts() -> log_options().
%% @doc
%% Retrieve the log options from the process dictionary if present.
get_opts() ->
    case get('$leveled_log_options') of
        #log_options{} = LO ->
            LO;
        _ ->
            #log_options{
                log_level = ?DEFAULT_LOG_LEVEL,
                forced_logs = []
            }
    end.
-spec return_settings() -> {log_level(), list(atom())}.
%% @doc
%% Return the settings outside of the record.
return_settings() ->
    LO = get_opts(),
    {LO#log_options.log_level, LO#log_options.forced_logs}.
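
%% A minimal sketch of the options lifecycle, assuming ?DEFAULT_LOG_LEVEL is
%% 'info':
%%
%%   {info, []} = leveled_log:return_settings(),   % defaults on first call
%%   LO = leveled_log:get_opts(),
%%   ok = leveled_log:save(LO#log_options{log_level = error}),
%%   {error, []} = leveled_log:return_settings().
%%
%% As the options are held in the process dictionary, each process keeps its
%% own copy - settings saved in one process do not alter another.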
%%%============================================================================
%%% Prompt Logs
%%%============================================================================
-spec log(atom(), list()) -> ok.
log(LogReference, Subs) ->
    log(LogReference, Subs, ?LOG_LEVELS, ?LOGBASE, backend).

-spec log(atom(), list(), list(log_level()), log_base(), atom()) -> ok.
log(LogRef, Subs, SupportedLevels, LogBase, Tag) ->
    {LogLevel, Log} = maps:get(LogRef, LogBase),
    LogOpts = get_opts(),
    case should_i_log(LogLevel, SupportedLevels, LogRef, LogOpts) of
        true ->
            DBid = LogOpts#log_options.database_id,
            Prefix = log_prefix(LogRef, DBid, self()),
            Suffix = <<"~n">>,
            ?LOG(
                LogLevel,
                unicode:characters_to_list([Prefix, Log, Suffix]),
                Subs,
                #{log_type => Tag}
            );
        false ->
            ok
    end.
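
%% A hedged example of the call path (the log reference is illustrative, and
%% the Subs list must match the format placeholders of the log text):
%%
%%   ok = leveled_log:log(p0032, []).
%%
%% This looks up {LogLevel, Log} in ?LOGBASE, consults should_i_log/4 against
%% the process's saved options, and only if permitted formats prefix, log text
%% and suffix before emitting via the kernel ?LOG macro, tagged as 'backend'.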
should_i_log(LogLevel, Levels, LogRef, LogOpts) ->
    #log_options{log_level = CurLevel, forced_logs = ForcedLogs} = LogOpts,
    case lists:member(LogRef, ForcedLogs) of
        true ->
            true;
        false ->
            if
                CurLevel == LogLevel ->
                    true;
                true ->
                    is_active_level(Levels, CurLevel, LogLevel)
            end
    end.

is_active_level([L|_], L, _) -> true;
is_active_level([L|_], _, L) -> false;
is_active_level([_|T], C, L) -> is_active_level(T, C, L).
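
%% is_active_level/3 walks the ordered level list and returns true if the
%% configured level is reached before the log's level. For example, assuming
%% ?LOG_LEVELS is ordered most to least verbose as
%% [debug, info, warn, error, critical]:
%%
%%   true = is_active_level([debug, info, warn, error, critical], info, warn),
%%   false = is_active_level([debug, info, warn, error, critical], warn, info).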
-spec log_timer(atom(), list(), erlang:timestamp()) -> ok.
log_timer(LogReference, Subs, StartTime) ->
    log_timer(LogReference, Subs, StartTime, ?LOG_LEVELS, ?LOGBASE, backend).
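
%% A hedged sketch of timed logging (the log reference and the work function
%% are illustrative):
%%
%%   SW = os:timestamp(),
%%   ok = do_some_work(),
%%   ok = leveled_log:log_timer(p0031, [], SW).
%%
%% The timer variant behaves as log/2, but also reports the time elapsed
%% since the provided start timestamp as part of the log.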
-spec log_timer(
    atom(), list(), erlang:timestamp(), list(log_level()), log_base(), atom())
        -> ok.
log_timer(LogRef, Subs, StartTime, SupportedLevels, LogBase, Tag) ->
    {LogLevel, Log} = maps:get(LogRef, LogBase),
    LogOpts = get_opts(),
    case should_i_log(LogLevel, SupportedLevels, LogRef, LogOpts) of
        true ->
            DBid = LogOpts#log_options.database_id,
            Prefix = log_prefix(LogRef, DBid, self()),
            Suffix = <<"~n">>,
            Duration = duration_text(StartTime),
            ?LOG(
                LogLevel,
                unicode:characters_to_list([Prefix, Log, Duration, Suffix]),
                Subs,
                #{log_type => Tag}
            );
        false ->
            ok
    end.

-spec log_randomtimer(atom(), list(), erlang:timestamp(), float()) -> term().
log_randomtimer(LogReference, Subs, StartTime, RandomProb) ->
    R = rand:uniform(),
    case R < RandomProb of
        true ->
            log_timer(LogReference, Subs, StartTime);
        false ->
            ok
    end.
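%% For illustration: log_randomtimer(Ref, Subs, SW, 0.1) emits the timed log
%% on roughly 1 in 10 calls, which allows sampling of high-frequency events
%% without logging every occurrence (0.1 is an assumed probability here).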
-spec log_prefix(atom(), non_neg_integer(), pid()) -> io_lib:chars().
log_prefix(LogRef, DBid, Pid) ->
    ["log_ref=", atom_to_list(LogRef),
        " db_id=", integer_to_list(DBid),
        " pid=", pid_to_list(Pid), " "].
-spec duration_text(erlang:timestamp()) -> io_lib:chars().
duration_text(StartTime) ->
    case timer:now_diff(os:timestamp(), StartTime) of
        US when US > 1000 ->
            ["with us_duration=", integer_to_list(US),
                " or ms_duration=", integer_to_list(US div 1000)];
        US ->
            ["with us_duration=", integer_to_list(US)]
    end.
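%% Worked example: an elapsed time of 2500 microseconds yields chars
%% flattening to "with us_duration=2500 or ms_duration=2"; an elapsed time
%% of 800 microseconds yields only "with us_duration=800".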
%%%============================================================================
%%% Test
%%%============================================================================
-ifdef(TEST).

-include_lib("eunit/include/eunit.hrl").

should_i_log(LogLevel, Levels, LogRef) ->
    should_i_log(LogLevel, Levels, LogRef, get_opts()).

log_warning_test() ->
    ok = log(g0001, [], [warning, error], ?LOGBASE, backend),
    ok =
        log_timer(
            g0001, [], os:timestamp(), [warning, error], ?LOGBASE, backend
        ).

log_wrongkey_test() ->
    ?assertException(
        error,
        {badkey, wrong0001},
        log(wrong0001, [], [warning, error], ?LOGBASE, backend)
    ).

logtimer_wrongkey_test() ->
    ST = os:timestamp(),
    % Note -
    % An issue with cover means that, within ?assertException, when the
    % function being tested is split across lines, the closing bracket on
    % the next line is not recognised as being covered. We want 100%
    % coverage, so the call under test must be written on one line.
    ?assertException(
        error,
        {badkey, wrong0001},
        log_timer(wrong0001, [], ST, [warning, error], ?LOGBASE, backend)
    ).

shouldilog_test() ->
    ok = set_loglevel(debug),
    ?assertMatch(true, should_i_log(info, ?LOG_LEVELS, g0001)),
    ok = set_loglevel(info),
    ?assertMatch(true, should_i_log(info, ?LOG_LEVELS, g0001)),
    ok = add_forcedlogs([g0001]),
    ok = set_loglevel(error),
Develop 3.1 d30update (#386)
* Mas i370 patch d (#383)
* Refactor penciller memory
In high-volume tests on large key-count clusters, so significant variation in the P0031 time has been seen:
TimeBucket PatchA
a.0ms_to_1ms 18554
b.1ms_to_2ms 51778
c.2ms_to_3ms 696
d.3ms_to_5ms 220
e.5ms_to_8ms 59
f.8ms_to_13ms 40
g.13ms_to_21ms 364
h.21ms_to_34ms 277
i.34ms_to_55ms 34
j.55ms_to_89ms 17
k.89ms_to_144ms 21
l.144ms_to_233ms 31
m.233ms_to_377ms 45
n.377ms_to_610ms 52
o.610ms_to_987ms 59
p.987ms_to_1597ms 55
q.1597ms_to_2684ms 54
r.2684ms_to_4281ms 29
s.4281ms_to_6965ms 7
t.6295ms_to_11246ms 1
It is unclear why this varies so much. The time to add to the cache appears to be minimal (but perhaps there is an issue with timing points in the code), whereas the time to add to the index is much more significant and variable. There is also variable time when the memory is rolled (although the actual activity here appears to be minimal.
The refactoring here is two-fold:
- tidy and simplify by keeping LoopState managed within handle_call, and add more helpful dialyzer specs;
- change the update to the index to be a simple extension of a list, rather than any conversion.
This alternative version of the pmem index in unit test is orders of magnitude faster to add - and is the same order of magnitude to check. Anticipation is that it may be more efficient in terms of memory changes.
* Compress SST index
Reduces the size of the leveled_sst index with two changes:
1 - Where there is a common prefix of tuple elements (e.g. Bucket) across the whole leveled_sst file - only the non-common part is indexed, and a function is used to compare.
2 - There is less "indexing" of the index i.e. only 1 in 16 keys are passed into the gb_trees part instead of 1 in 4
* Immediate hibernate
Reasons for delay in hibernate were not clear.
Straight after creation the process will not be in receipt of messages (must wait for the manifest to be updated), so better to hibernate now. This also means the log PC023 provides more accurate information.
* Refactor BIC
This patch avoids the following:
- repeated replacement of the same element in the BIC (via get_kvrange), by checking presence via GET before sing SET
- Stops re-reading of all elements to discover high modified date
Also there appears to have been a bug where a missing HMD for the file is required to add to the cache. However, now the cache may be erased without erasing the HMD. This means that the cache can never be rebuilt
* Use correct size in test results
erts_debug:flat_size/1 returns size in words (i.e. 8 bytes on 64-bit CPU) not bytes
* Don't change summary record
As it is persisted as part of the file write, any change to the summary record cannot be rolled back
* Clerk to prompt L0 write
Simplifies the logic if the clerk request work for the penciller prompts L0 writes as well as Manifest changes.
The advantage now is that if the penciller memory is full, and PUT load stops, the clerk should still be able to prompt persistence. the penciller can therefore make use of dead time this way
* Add push on journal compact
If there has been a backlog, followed by a quiet period - there may be a large ledger cache left unpushed. Journal compaction events are about once per hour, so the performance overhead of a false push should be minimal, with the advantage of clearing any backlog before load starts again.
This is only relevant to riak users with very off/full batch type workloads.
* Extend tests
To more consistently trigger all overload scenarios
* Fix range keys smaller than prefix
Can't make end key an empty binary in this case, as it may be bigger than any keys within the range, but will appear to be smaller.
Unit tests and ct tests added to expose the potential issue
* Tidy-up
- Remove penciller logs which are no longer called
- Get pclerk to only wait MIN_TIMEOUT after doing work, in case there is a backlog
- Remove update_levelzero_cache function as it is unique to handle_call of push_mem, and simple enough to be inline
- Alight testutil slow offer with standard slow offer used
* Tidy-up
Remove pre-otp20 references.
Reinstate the check that the starting pid is still active, this was added to tidy up shutdown.
Resolve failure to run on otp20 due to `-if` sttaement
* Tidy up
Using null rather then {null, Key} is potentially clearer as it is not a concern what they Key is in this case, and removes a comparison step from the leveled_codec:endkey_passed/2 function.
There were issues with coverage in eunit tests as the leveled_pclerk shut down. This prompted a general tidy of leveled_pclerk (remove passing of LoopState into internal functions, and add dialyzer specs.
* Remove R16 relic
* Further testing another issue
The StartKey must always be less than or equal to the prefix when the first N characters are stripped, but this is not true of the EndKey (for the query) which does not have to be between the FirstKey and the LastKey.
If the EndKey query does not match it must be greater than the Prefix (as otherwise it would not have been greater than the FirstKey - so set to null.
* Fix unit test
Unit test had a typo - and result interpretation had a misunderstanding.
* Code and spec tidy
Also look to the cover the situation when the FirstKey is the same as the Prefix with tests.
This is, in theory, not an issue as it is the EndKey for each sublist which is indexed in leveled_tree. However, guard against it mapping to null here, just in case there are dangers lurking (note that tests will still pass without `M > N` guard in place.
* Hibernate on BIC complete
There are three situations when the BIC becomes complete:
- In a file created as part of a merge the BIS is learned in the merge
- After startup, files below L1 learn the block cache through reads that happen to read the block, eventually the while cache will be read, unless...
- Either before/after the cache is complete, it can get whiped by a timeout after a get_sqn request (e.g. as prompted by a journal compaction) ... it will then be re-filled of the back of get/get-range requests.
In all these situations we want to hibernate after the BIC is fill - to reflect the fact that the LoopState should now be relatively stable, so it is a good point to GC and rationalise location of data.
Previously on the the first base was covered. Now all three are covered through the bic_complete message.
* Test all index keys have same term
This works functionally, but is not optimised (the term is replicated in the index)
* Summaries with same index term
If the entries in the summary index all have the same index term, only the object keys need to be indexed.
* Simplify case statements
We either match the pattern of <<Prefix:N, Suffix>> or the answer should be null
* OK for M == N
If M == N for the first key, it will have a suffix of <<>>. This will match (as expected) a query StartKey of the same size, and be smaller than any query StartKey that has the same prefix.
If the query StartKey does not match the prefix, it will be null, as it must be smaller than the Prefix (as otherwise the query StartKey would be bigger than the LastKey).
The constraint of M > N was introduced before the *_prefix_filter functions were checking the prefix, to avoid issues. Now that the prefix is being checked, M == N is OK.
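The rule reduces to a single pattern match, sketched here with a hypothetical function name (M being byte_size(Key) and N being byte_size(Prefix)):

strip_prefix(Prefix, Key) ->
    N = byte_size(Prefix),
    case Key of
        % When M == N the suffix is simply <<>>, which is now fine
        <<Prefix:N/binary, Suffix/binary>> ->
            Suffix;
        % The key does not carry the prefix - the answer is null
        _ ->
            null
    end.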
* Simplify
Correct the test to use a binary field in the range.
To avoid further issues, only apply the filter when everything is of binary() type.
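A sketch of that gate (function name hypothetical):

can_filter(StartKey, EndKey, FirstKey)
        when is_binary(StartKey), is_binary(EndKey), is_binary(FirstKey) ->
    true;
can_filter(_StartKey, _EndKey, _FirstKey) ->
    false.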
* Add test for head_only mode
When leveled is used as a tictacaae key store (in parallel mode), the keys will be head_only entries. Double-check they are handled as expected, like object keys.
* Revert previous change - must support typed buckets
Add assertion to confirm worthwhile optimisation
* Add support for configurable cache multiple (#375)
* Mas i370 patch e (#385)
Improvements to monitoring, for efficiency and improved readability of logs and stats.
As part of this, where possible, avoided updating loop state on READ messages in leveled processes (as was the case when tracking stats within each process).
No performance benefit was found with the change, but the improved stats have helped discover other potential gains.
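The shape of the change, sketched with assumed names (this is not leveled's actual monitor API): timings are cast asynchronously to a separate monitor process, so the read path never has to return an updated LoopState.

report_timing(Monitor, TimerName, DurationMicros) ->
    % Fire-and-forget: the reading process keeps its state unchanged
    gen_server:cast(Monitor, {timing, TimerName, DurationMicros}).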
2022-12-18 20:18:03 +00:00
?assertMatch(true, should_i_log(info, ?LOG_LEVELS, g0001)),
?assertMatch(false, should_i_log(info, ?LOG_LEVELS, g0002)),
ok = remove_forcedlogs([g0001]),
2018-12-11 20:42:00 +00:00
ok = set_loglevel(info),
Develop 3.1 d30update (#386)
2022-12-18 20:18:03 +00:00
?assertMatch(false, should_i_log(debug, ?LOG_LEVELS, d0001)).
2018-12-10 16:09:11 +01:00
2018-12-11 22:25:37 +00:00
badloglevel_test() ->
    % Set a bad log level - and everything logs
    ?assertMatch(true, is_active_level(?LOG_LEVELS, debug, unsupported)),
    ?assertMatch(true, is_active_level(?LOG_LEVELS, critical, unsupported)).
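One plausible shape for is_active_level/3, consistent with these assertions but not necessarily the module's actual code (assumes ?LOG_LEVELS is ordered least to most severe):

is_active_level([CurLevel | _Rest], CurLevel, _QueryLevel) ->
    true;  % the configured level is reached first - the log is active
is_active_level([QueryLevel | _Rest], _CurLevel, QueryLevel) ->
    false; % the log's level is below the configured threshold
is_active_level([_Other | Rest], CurLevel, QueryLevel) ->
    is_active_level(Rest, CurLevel, QueryLevel);
is_active_level([], _CurLevel, _QueryLevel) ->
    true.  % an unsupported level never short-circuits, so everything logs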
Develop 3.1 d30update (#386)
2022-12-18 20:18:03 +00:00
2017-07-31 20:20:39 +02:00
-endif.