* Don't use fetch_cache below the page_cache level
* Don't time fetches due to SQN checks
SQN checks are all background processes
* Hibernate on SQN check
SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction. Use this to trigger hibernation where SST files stay quiet after the compaction check.
* Add catch for hibernate timeout
* Scale cache_size with level
Based on volume testing. Relatively speaking, there is far higher value to be gained from caches at higher levels (lower-numbered levels); caches at lower levels are proportionally much less efficient. So cache more at higher levels, where there is value, and less at lower levels, where there is more cost relative to value.
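The scaling idea above can be sketched as follows. This is an illustrative Python stand-in, not leveled's Erlang implementation; the function name and halving policy are assumptions chosen to show the shape of "more cache at higher levels".

```python
# Hypothetical sketch: give higher levels (lower level numbers) the bulk of
# the cache budget, halving the allowance for each step down the tree.
def scaled_cache_size(base_size, level):
    # base_size >> level halves the budget per level; floor at 1 entry.
    return max(1, base_size >> level)

sizes = [scaled_cache_size(64, lvl) for lvl in range(4)]
# levels 0..3 get 64, 32, 16 and 8 entries respectively
```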
* OTP 24 fix to cherry-pick
* Make minimal change to previous setup
Making a significant change appears not to have had the expected positive improvement - so a more minimal change is proposed.
The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but no cache still has a downside.
* Introduce new types
* Mas i370 d30 sstmemory (#374)
* More memory management
Clear blockindex_cache on timeout, and manually GC on pclerk after work.
* Add further garbage collection prompt
After fetching level zero, significant change in references in the penciller memory, so prompt a garbage_collect() at this point.
Initial test included for running with recalc, and also for the transition from retain to recalc.
Moves all logic for the startup fold into leveled_bookie - avoiding the Inker requiring any direct knowledge of the implementation of the Penciller.
Change the penciller check so that it returns current/replaced/missing not just true/false.
Reduce unnecessary penciller checks for non-standard keys that will always be retained - and remove redundant code.
Expand tests of retain and recover to make sure that compaction on delete is well covered.
Also move the SQN number along during initial loads - to stop an aggressive loop to find the starting SQN for every file.
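The current/replaced/missing check described above can be sketched as below. This is an illustrative Python stand-in for the Erlang penciller check (names and the dict-backed ledger are assumptions); the point is the three-valued result replacing a plain boolean.

```python
# Hypothetical sketch: a tri-state SQN check against a ledger, modelled here
# as a dict of key -> latest sequence number.
def check_sqn(ledger, key, sqn):
    found = ledger.get(key)
    if found is None:
        return "missing"                      # key not present in the ledger
    return "current" if found == sqn else "replaced"

ledger = {"K1": 9}
assert check_sqn(ledger, "K1", 9) == "current"
assert check_sqn(ledger, "K1", 4) == "replaced"
assert check_sqn(ledger, "K2", 1) == "missing"
```

The extra "replaced" state lets compaction distinguish an object that was superseded from one that was never (or is no longer) in the ledger.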
This was previously not an issue, as leveled_codec:segment_hash/1 would handle anything that could be hashed. The input now has to be a tuple, and one with a first element - so corrupted tuples are failing.
Add a guard checking for a corrupted tuple, but we only need this when doing journal compaction.
Change user_defined keys to be `retain` as a tag strategy
The skip/retain/recalc handling was confusing. This removes the switcheroo between leveled_codec and leveled_iclerk when making the decision.
Also now the building of the accumulator is handled efficiently (not using ++ on the list).
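The accumulator point translates as follows: in Erlang, `Acc ++ [X]` copies the left-hand list on every iteration (quadratic), whereas prepending and reversing once is linear. A Python analogue of the same trade-off, with assumed function names:

```python
# Repeated concatenation copies the accumulator each time: O(n^2) overall.
def build_acc_slow(items):
    acc = []
    for x in items:
        acc = acc + [x]
    return acc

# Appending in place is amortised O(1) per item: O(n) overall (the Python
# analogue of prepending in Erlang and calling lists:reverse/1 once at the end).
def build_acc_fast(items):
    acc = []
    for x in items:
        acc.append(x)
    return acc

assert build_acc_slow([1, 2, 3]) == build_acc_fast([1, 2, 3]) == [1, 2, 3]
```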
Tried to remove as much ?HEAD tag handling from leveled_head as possible - as we want leveled_head to be concerned only with head manipulation for object tags (?STD, ?RIAK and user-defined).
To allow for extraction of metadata, and building of head responses - it should be possible to dynamically add user-defined tags, and functions to treat them.
If no function is defined, revert to the behaviour of the ?STD tag.
More obvious how to extend the code as it is all in one module.
Also add a new field to the standard object metadata tuple that may in the future hold other object metadata based on user-defined functions.
Adds support with test for tuplebuckets in Riak keys.
This exposed that there was no filter using the seglist on the in-memory keys. This means that if there is no filter applied in the fold_function, many false positives may emerge.
This is probably not a big performance benefit (and indeed for performance it may be better to apply during the leveled_pmem:merge_trees).
Some thought still required as to what is more likely to contribute to future bugs: an extra location using the hash matching found in leveled_sst, or the extra results in the query.
To support max_keys and the last modified date range.
This applies the last modified date check on all ledger folds. This is hard to avoid, but ultimately a very low cost.
The limit on the number of heads to fold is based on the number passed to the accumulator - not on the number added to the accumulator. So if the FoldFun performs a filter (e.g. for the preflist), then those filtered results will still count towards the maximum.
There needs to be some way at the end of signalling from the fold whether the outcome was or was not 'constrained' by max_keys - as the fold cannot simply tell by length-checking the outcome.
Note this is used rather than length-checking the buffer and throwing a 'stop_fold' message when the limit is reached. The choice is made for simplicity, and ease of testing. The throw mechanism is necessary if there is a need to stop parallel folds across the cluster - but in this case the node_worker_pool will be used.
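The counting and signalling semantics above can be sketched as follows. This is an illustrative Python stand-in (not leveled's API; names are assumptions): max_keys limits how many heads are passed to the fold fun, so filtered results still count, and the fold reports whether it was constrained.

```python
# Hypothetical sketch: fold over heads, limiting by heads *passed to* the
# fold fun, and return a status flag instead of relying on result length.
def fold_heads(heads, fold_fun, acc, max_keys):
    passed = 0
    for head in heads:
        if passed >= max_keys:
            return acc, "constrained"   # limit hit before input was exhausted
        acc = fold_fun(head, acc)       # fun may filter, but the head still counts
        passed += 1
    return acc, "complete"

keep_even = lambda h, acc: acc + [h] if h % 2 == 0 else acc
acc, status = fold_heads([1, 2, 3, 4, 5, 6], keep_even, [], 4)
assert acc == [2, 4] and status == "constrained"
```

Note that only two of the four heads passed survived the filter, yet the fold still stopped at four - which is why the explicit status flag is needed rather than length-checking the result.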
There were some bad specs within '|' (OR'd) union specs. These were being falsely ignored by dialyzer until https://github.com/erlang/otp/pull/1722.
Running on OTP21 exposed these incomplete specs.
the concept of the segment hash belongs to the leveled_tictac module, not the codec.
Previously the alignment of tictac and store was accidental, this change makes it explicit.
Originally had disabled the ability to lookup individual values when running in head_only mode. This is a saving of about 11% at PUT time (about 3 microseconds per PUT) on a macbook.
Not sure this saving is sufficient to justify the extra work if this is used as an AAE Keystore with Bitcask and LWW (when we need to look up the current value before adjusting).
So reverted to re-adding support for HEAD requests with these keys.
Initial commit to add head_only mode to leveled. This allows leveled to receive batches of object changes, but where those objects exist only in the Penciller's Ledger (once they have been persisted within the Ledger).
The aim is to significantly reduce the cost of compaction. Also, the objects are not directly accessible (they can only be accessed through folds). Again this makes life easier during merging in the LSM trees (as no bloom filters have to be created).