* Address OTP24 warnings, ct and eunit paths
* Reorg to add OTP 24 support
* Update VOLUME.md
* Correct broken refs
* Update README.md
* CI on all main branches
Co-authored-by: Ulf Wiger <ulf@wiger.net>
Potentially reduce the overheads of scoring each file on every run.
The change also alters the default thresholds for compaction to favour longer runs (which will tend towards greater storage efficiency).
Initial test included for running with recalc, and also for the transition from retain to recalc.
Moves all logic for the startup fold into leveled_bookie - avoiding the Inker requiring any direct knowledge of the Penciller's implementation.
Change the penciller check so that it returns current/replaced/missing, not just true/false (see the sketch below).
Reduce unnecessary penciller checks for non-standard keys that will always be retained - and remove redundant code.
Expand the retain and recover tests to make sure that compaction on delete is well covered.
Also carry the SQN along during initial loads - to stop the aggressive loop to find the starting SQN for every file.
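A hedged sketch of the three-valued check result described above - the function and argument names are invented for illustration, not the actual leveled_penciller API:

```
%% Instead of true | false, the check reports why a journal entry
%% should (or should not) be retained at compaction.
check_result(_JournalSQN, missing) ->
    missing;                          %% key absent from the ledger
check_result(JournalSQN, LedgerSQN) when JournalSQN == LedgerSQN ->
    current;                          %% this entry is the live version
check_result(_JournalSQN, _LedgerSQN) ->
    replaced.                         %% superseded by a newer version
```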
Two reasons for logging this:
- to assist in sizing the ledger cache;
- to resolve the mystery when there appear to be no fetches from the penciller (as the penciller does not report fetches from the ledger cache)
Extracting a binary from within a binary leaves a reference to the whole of the original binary.
If there are a lot of very large objects received back to back, this can explode the amount of memory the penciller appears to hold (and gc cannot resolve this).
To dereference from the larger binary, a binary copy is needed.
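A minimal sketch of the fix, using a hypothetical extract_field/1 for illustration:

```
%% A sub-binary obtained by matching keeps a reference to the whole
%% source binary; binary:copy/1 makes a standalone copy so the large
%% original can be garbage collected.
extract_field(<<_Header:16/binary, Field:32/binary, _Rest/binary>>) ->
    binary:copy(Field).
```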
The Journal snapshot is not a true snapshot, in that the active file in the snapshot can still be taking appends. So when folding over the snapshot it is necessary to check that the SQN is <= the JournalSQN at the point the snapshot was taken.
Normally consistency of the snapshot is managed because the operation depends on the penciller, and the penciller *is* a snapshot. Not in this case, as the penciller will return true on a SQN check if the pcl SQN is behind the Journal. So the Journal folder has been given an additional check to stop at the JournalSQN.
This is perhaps a fault in the pcl SQN check, which should only return true on an exact match? I'm nervous about changing this though, so we have a less pure fix for now.
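A hedged sketch of the extra check, with invented names rather than the actual leveled_inker fold:

```
%% Entries appended to the active file after the snapshot was taken
%% must be ignored, so the fold stops accumulating beyond JournalSQN.
fold_entry(SQN, Entry, JournalSQN, FoldFun, Acc) when SQN =< JournalSQN ->
    FoldFun(Entry, Acc);
fold_entry(_SQN, _Entry, _JournalSQN, _FoldFun, Acc) ->
    Acc.
```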
These logs duplicate information being received from other logs, so they are reduced to debug.
The long-running test needs to be toggled with the LONG_RUNNING macro (see the sketch below).
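A minimal sketch of gating an eunit test behind the macro (the test body is hypothetical); compiling with {d, 'LONG_RUNNING'} in erl_opts enables it:

```
-ifdef(LONG_RUNNING).
%% Only compiled in when the LONG_RUNNING macro is defined.
long_running_test_() ->
    {timeout, 300, fun() -> ok = run_long_scenario() end}.
-endif.
```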
Warn at startup if this ratio is high. Not sure how snapshots will perform if there are a lot of ledger caches in the list; however, it should still work. The basic_SUITE/load_count test is intended to demonstrate that a large ratio is still functional.
Will not lead to immediate run time changes in SST or CDB logs. These log settings will only change once the new files are re-written.
To completely change the log level - a restart of the store is necessary with new startup options.
Test fails as fetching a repeated object is too slow.
```
Head check took 124301 microseconds checking list of length 5000
Head check took 112286 microseconds checking list of length 5000
Head check took 1336512 microseconds checking list of length 5
2018-12-10T11:54:41.342 B0013 <0.2459.0> Long running task took 260788 microseconds with task of type pcl_head
2018-12-10T11:54:41.618 B0013 <0.2459.0> Long running task took 276508 microseconds with task of type pcl_head
2018-12-10T11:54:41.894 B0013 <0.2459.0> Long running task took 275225 microseconds with task of type pcl_head
2018-12-10T11:54:42.173 B0013 <0.2459.0> Long running task took 278836 microseconds with task of type pcl_head
2018-12-10T11:54:42.477 B0013 <0.2459.0> Long running task took 304524 microseconds with task of type pcl_head
```
It takes twice as long to check for one repeated object as it does to check a list of 5K non-repeated objects.
This was previously not an issue as leveled_codec:segment_hash/1 would handle anything that could be hashed. The input now has to be a tuple, and one with a first element - so corrupted tuples are failing.
Add a guard checking for a corrupted tuple; we only need this when doing journal compaction (see the sketch below).
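A hedged sketch of such a guard, with an invented wrapper name (no_lookup is assumed here as the "cannot hash" return):

```
%% Reject anything that is not a non-empty tuple before hashing it.
maybe_hash(Key) when is_tuple(Key), tuple_size(Key) > 0 ->
    leveled_codec:segment_hash(Key);
maybe_hash(_CorruptKey) ->
    no_lookup.
```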
Change user_defined keys to be `retain` as a tag strategy
The skip/retain/recalc handling was confusing. This removes the switcheroo between leveled_codec and leveled_iclerk when making the decision.
Also, the building of the accumulator is now handled efficiently (not using ++ on the list); see the sketch below.
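The general pattern, as an illustrative sketch rather than the leveled code itself:

```
%% Prepend to the accumulator and reverse once at the end - O(n)
%% overall - instead of appending with ++ inside the loop, which
%% re-copies the left-hand list on every iteration and is O(n^2).
build_acc(Fun, Items) ->
    lists:reverse(lists:foldl(fun(I, Acc) -> [Fun(I)|Acc] end, [], Items)).
```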
Tried to remove as much ?HEAD tag handling from leveled_head as possible - as we want leveled_head to be concerned only with the head manipulation for object tags (?STD, ?RIAK and user-defined).
To allow for extraction of metadata, and building of head responses, it should be possible to dynamically add user-defined tags, and functions to handle them.
If no function is defined, revert to the behaviour of the ?STD tag (see the sketch below).
It is more obvious how to extend the code as it is all in one module.
Also add a new field to the standard object metadata tuple that may in future hold other object metadata based on user-defined functions.
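A hedged sketch of the dispatch described above - the map of user functions and the helper names are invented for illustration:

```
%% Look up a user-defined metadata function for the tag, falling back
%% to the standard ?STD handling when none has been registered.
extract_metadata(Tag, Obj, UserFuns) ->
    case maps:get(Tag, UserFuns, undefined) of
        undefined -> std_extract_metadata(Obj);  %% default ?STD behaviour
        ExtractFun -> ExtractFun(Obj)            %% user-defined handling
    end.
```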
Both log_level and forced_logs. Allows for log_level to be changed at startup and at runtime. Also allows for a list of forced logs, so if log_level is set > info, individual info logs can be forced to be seen (such as the stats logs).
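A minimal sketch of these startup options, assuming the names above map directly onto the bookie's startup proplist (the log references shown are illustrative):

```
%% Suppress info-level logging, but force specific stats logs through.
{ok, Bookie} = leveled_bookie:book_start(
                 [{root_path, "/tmp/leveled"},
                  {log_level, warn},
                  {forced_logs, ["B0015", "B0016"]}]).
```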