Commit graph

123 commits

Author SHA1 Message Date
Martin Sumner
f8485210ed
Mas i370 d31 sstmemory (#373)
* Don't use fetch_cache below the page_cache level

* Don't time fetches due to SQN checks

SQN checks are all background processes

* Hibernate on SQN check

SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.

* Add catch for hibernate timeout

* Scale cache_size with level

Based on volume testing.  Relatively speaking, there is far higher value to be gained from caches at higher levels (lower-numbered levels).  The caches at lower levels are proportionally much less efficient - so cache more at higher levels, where there is value, and less at lower levels, where there is more cost relative to value.

* OTP 24 fix to cherry-pick

* Make minimal change to previous setup

Making a significant change appears not to have had the expected positive improvement - so a more minimal change is proposed.

The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but having no cache still has a downside.

* Introduce new types

* Mas i370 d30 sstmemory (#374)


* Don't time fetches due to SQN checks

SQN checks are all background processes

* Hibernate on SQN check

SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.

* Add catch for hibernate timeout

* Scale cache_size with level

Based on volume testing.  Relatively speaking, there is far higher value to be gained from caches at higher levels (lower-numbered levels).  The caches at lower levels are proportionally much less efficient - so cache more at higher levels, where there is value, and less at lower levels, where there is more cost relative to value.

* Make minimal change to previous setup

Making a significant change appears not to have had the expected positive improvement - so a more minimal change is proposed.

The assumption is that the cache only really gets used for double reads in the write path (e.g. where the application reads before a write) - and so a large cache makes minimal difference, but having no cache still has a downside.

* Introduce new types

* More memory management

Clear blockindex_cache on timeout, and manually GC on pclerk after work.

* Add further garbage collection prompt

After fetching level zero, there is a significant change in references in the penciller memory, so prompt a garbage_collect() at this point.
2022-04-23 13:38:20 +01:00
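Two of the points in the entry above lend themselves to a small illustration: scaling a per-file cache with the level it sits at, and prompting a collection after a step that churns references. This is a hedged sketch only - the module and function names, the halving scheme and the floor of 4 are assumptions for illustration, not the actual leveled code.

```erlang
-module(pclerk_memory_sketch).
-export([cache_size_by_level/2, fetch_level_zero_then_gc/2]).

%% Bigger caches at higher levels (lower level numbers); smaller caches
%% further down, where hit rates are proportionally poorer.
cache_size_by_level(Level, BaseCacheSize) when Level >= 0 ->
    max(BaseCacheSize div (1 bsl Level), 4).

%% Prompt a collection after a step that replaces many references on the
%% process heap, as described for the penciller after fetching level zero.
fetch_level_zero_then_gc(FetchFun, State) ->
    UpdatedState = FetchFun(State),
    garbage_collect(),
    UpdatedState.
```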
Martin Sumner
2e0b20a071 Revert "Hibernate on SQN check"
This reverts commit eedd09a23d.
2022-03-11 11:06:51 +00:00
Martin Sumner
eedd09a23d Hibernate on SQN check
SQN check in the penciller is used for journal (all object) folds, but mainly for journal compaction.  Use this to trigger hibernation where SST files stay quiet after the compaction check.
2022-03-11 08:49:56 +00:00
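As a rough illustration of the hibernate-on-SQN-check idea (and the timeout catch added alongside it in the later PR), a gen_server callback can reply and hibernate in one step. The module name, message shape and reply below are assumptions for the sketch, not the leveled_sst code.

```erlang
-module(sqn_hibernate_sketch).
-behaviour(gen_server).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

init(Args) -> {ok, Args}.

%% SQN checks arrive from background journal compaction, so reply and
%% then hibernate - a quiet SST process shrinks its heap after the check.
handle_call({sqn_check, _LedgerKey, _SQN}, _From, State) ->
    {reply, current, State, hibernate};
handle_call(_Msg, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) -> {noreply, State}.

%% Catch a stray timeout message rather than crashing on it.
handle_info(timeout, State) -> {noreply, State};
handle_info(_Info, State) -> {noreply, State}.
```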
Martin Sumner
e175948378 Remove references to 'skip' strategy
Now called `recovr`
2020-03-26 14:25:09 +00:00
Martin Sumner
9d92ca0773 Add tests for appDefined functions 2020-03-16 12:51:14 +00:00
Martin Sumner
694d2c39f8 Support for recalc
Initial test included for running with recalc, and also the transition from retain to recalc.

Moves all logic for the startup fold into leveled_bookie - avoiding the Inker requiring any direct knowledge of the implementation of the Penciller.
2020-03-15 22:14:42 +00:00
Martin Sumner
156e7b064d Compaction, retain and recovery
Change the penciller check so that it returns current/replaced/missing not just true/false.

Reduce unnecessary penciller checks for non-standard keys that will always be retained - and remove redundant code.

Expand tests of retain and recover to make sure that compaction on delete is well covered.

Also move the SQN along during initial loads - to stop an aggressive loop to find the starting SQN for every file.
2020-03-09 15:12:48 +00:00
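The three-way result described above might look something like the following; the type, function name and comparison are illustrative only, not the actual leveled_penciller code.

```erlang
-module(sqn_check_sketch).
-export([check_sqn/2]).

%% current/replaced/missing rather than a plain boolean.
-type sqn_check() :: current | replaced | missing.

-spec check_sqn(non_neg_integer(), non_neg_integer() | not_present)
        -> sqn_check().
check_sqn(_JournalSQN, not_present) ->
    missing;
check_sqn(JournalSQN, LedgerSQN) when JournalSQN =:= LedgerSQN ->
    current;
check_sqn(_JournalSQN, _LedgerSQN) ->
    replaced.
```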
Martin Sumner
22e732841c Compaction of already compacted journals
Ensure that journals with a large volume of key deltas do not erroneously get repeatedly compacted.
2019-07-24 18:03:22 +01:00
Martin Sumner
714e128df8 Tidy up protecting against corrupt Keys
This was previously not an issue as leveled_codec:segment_hash/1 would handle anything that could be hashed.  This now has to be a tuple, and one with a first element - so corrupted tuples are failing.

Add a guard checking for a corrupted tuple, but we only need this when doing journal compaction.

Change user_defined keys to be `retain` as a tag strategy
2018-12-07 09:07:22 +00:00
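A minimal sketch of the guard described above, assuming a key must be a non-empty tuple before it is safe to hash during journal compaction; the module and function names are hypothetical.

```erlang
-module(corrupt_key_sketch).
-export([safe_to_hash/1]).

%% Only hash keys that are tuples with at least a first element; anything
%% else is treated as corrupt when encountered during journal compaction.
safe_to_hash(Key) when is_tuple(Key), tuple_size(Key) >= 1 -> true;
safe_to_hash(_Key) -> false.
```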
Martin Sumner
3ff51c000c Typo 2018-12-06 22:55:00 +00:00
Martin Sumner
e0352414f2 iClerk refactor
The skip/retain/recalc handling was confusing.  This removes the switcheroo between leveled_codec and leveled_iclerk when making the decision.

Also now the building of the accumulator is handled efficiently (not using ++ on the list).

Tried to remove as much ?HEAD tag handling from leveled_head as possible - as we want leveled_head to be concerned only with the head manipulation for object tags (?STD, ?RIAK and user-defined).
2018-12-06 22:45:05 +00:00
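The accumulator point above is the standard Erlang pattern of prepending and reversing once, rather than appending with ++ inside the loop; this generic sketch is not the iClerk code itself.

```erlang
-module(acc_sketch).
-export([build_acc/2]).

%% O(n) overall: prepend each result, then reverse once at the end,
%% rather than appending with ++ on every iteration (which is O(n^2)).
build_acc(Items, MapFun) ->
    lists:reverse(
        lists:foldl(fun(Item, Acc) -> [MapFun(Item) | Acc] end, [], Items)).
```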
Martin Sumner
8e687ee7c8 Add user-defined functions
To allow for extraction of metadata, and building of head responses - it should be possible to dynamically add user-defined tags, and functions to treat them.

If no function is defined, revert to the behaviour of the ?STD tag.
2018-12-06 21:00:59 +00:00
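A hedged sketch of the fallback behaviour described above - if no user-defined function is registered for a tag, the standard handling applies. The map-based registry, function names and placeholder extractor are assumptions for illustration.

```erlang
-module(tag_fallback_sketch).
-export([get_metadata_fun/2]).

%% Look up a user-defined metadata extractor for a tag, defaulting to the
%% standard (?STD-style) behaviour when none has been registered.
get_metadata_fun(Tag, UserFuns) ->
    maps:get(Tag, UserFuns, fun standard_extract_metadata/1).

%% Placeholder standard behaviour, just for the sketch.
standard_extract_metadata(Object) ->
    {erlang:phash2(Object), byte_size(term_to_binary(Object))}.
```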
Martin Sumner
881b93229b Isolate better changes needed to support changes to metadata extraction
More obvious how to extend the code as it is all in one module.

Also add a new field to the standard object metadata tuple that may in the future hold other object metadata based on user-defined functions.
2018-12-06 15:31:11 +00:00
Martin Sumner
a7773b148d Split hash - separate key hash for bxor with value 2018-11-09 14:51:38 +00:00
Martin Sumner
174a40aab2 Tidy up unexported types
also the re:mp type may not be exported in R16
2018-11-05 16:02:19 +00:00
Martin Sumner
e9fb893ea0 Check segment is as expected with tuplebuckets
In head_only mode
2018-11-05 10:31:15 +00:00
Martin Sumner
e72a946f43 TupleBuckets in Riak objects
Adds support with test for tuplebuckets in Riak keys.

This exposed that there was no filter using the seglist on the in-memory keys.  This means that if there is no filter applied in the fold_function, many false positives may emerge.

This is probably not a big performance benefit (and indeed for performance it may be better to apply during the leveled_pmem:merge_trees).

Some thought still required as to what is more likely to contribute to future bugs: an extra location using the hash matching found in leveled_sst, or the extra results in the query.
2018-11-05 01:21:08 +00:00
Martin Sumner
aa123a80a7 Allow for backwards/forwards compatibility in specs 2018-11-01 12:40:24 +00:00
Martin Sumner
f77dc8c3a5 Add object_spec type
Initial refactor to prepare to allow for a new version object_spec type that will support LMD being promoted as an accessible item.
2018-11-01 10:41:46 +00:00
Martin Sumner
142e3a17bb Add in modification date to v2 value
And restrict it to 32 bits - as 80 years should be enough.
2018-10-31 11:44:46 +00:00
Martin Sumner
11627bbdd9 Extend API
To support max_keys and the last modified date range.

This applies the last modified date check on all ledger folds.  This is hard to avoid, but ultimately a very low cost.

The limit on the number of heads to fold is a limit on what is passed to the accumulator - not a limit on what is added to the accumulator.  So if the FoldFun performs a filter (e.g. for the preflist), then those filtered results will still count towards the maximum.

There needs to be some way at the end of signalling from the fold whether the outcome was or was not 'constrained' by max_keys - as the fold cannot simply tell by length-checking the outcome.

Note this is used rather than length-checking the buffer and throwing a 'stop_fold' message when the limit is reached.  The choice is made for simplicity, and ease of testing.  The throw mechanism is necessary if there is a need to stop parallel folds across the cluster - but in this case the node_worker_pool will be used.
2018-10-31 00:09:24 +00:00
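A sketch of the counting behaviour described above: the max_keys budget is spent on every head passed to the FoldFun, whether or not the FoldFun keeps it, and the tagged return signals whether the fold was constrained. The module, function and tags are illustrative, not the leveled API.

```erlang
-module(max_keys_sketch).
-export([fold_heads/4]).

%% The max_keys budget is decremented for every head passed to the
%% FoldFun, not for every head the FoldFun keeps, and the tag on the
%% result says whether the fold was constrained by the limit.
fold_heads(_FoldFun, Acc, 0, _Heads) ->
    {max_reached, Acc};
fold_heads(_FoldFun, Acc, _Remaining, []) ->
    {complete, Acc};
fold_heads(FoldFun, Acc, Remaining, [Head | Rest]) ->
    fold_heads(FoldFun, FoldFun(Head, Acc), Remaining - 1, Rest).
```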
Martin Sumner
8ba28700eb Start adding in last_modified dates
With updated specs
2018-10-29 21:50:32 +00:00
Martin Sumner
14fd67e535 Add specs and comments and split function
Need to change this, so refactor and make neater in preparation
2018-10-29 21:16:38 +00:00
Martin Sumner
baa4466923 Remove knowledge of tuple length from ledger value
Nothing should now care about the current tuple length - and hence the tuple length may be increased (for example to add a max_mod_date)
2018-10-29 20:24:54 +00:00
Martin Sumner
671b6e7f99 Strip ALL_BUCKET - only used in AAE 2018-10-29 16:56:58 +00:00
Martin Sumner
2e2c35fe1b Extract deprecated recent_aae
Ready to add other forms of last modified filtering
2018-10-29 15:49:50 +00:00
Martin Sumner
0fb35e658f Add support for buckets that are tuples
Only {binary(), binary()} tuples
2018-09-27 09:34:40 +01:00
Martin Sumner
c64dc1df0d Change key() definition to not allow integer keys 2018-09-03 12:28:31 +01:00
Martin Sumner
41fb83abd1 Add tests for is_empty
Where keys are strings or integers, and where subkeys are involved
2018-08-31 15:29:38 +01:00
Martin Sumner
0dda129d3e Fix bad specs
There were some bad specs in '|' OR'd specs.  These were being falsely ignored in dialyzer until https://github.com/erlang/otp/pull/1722.

Running on OTP21 exposed these incomplete specs.
2018-06-21 15:46:42 +01:00
Martin Sumner
a14941a122 Fix unexported types
file:location not exported?
2018-06-04 10:57:37 +01:00
Martin Sumner
6a20b2ce66 Use leveled_codec types
... and exporting them.

Previously types were not exported, and it appears dialyzer treated them as any() when they were unexported types ??!!??
2018-05-04 15:24:08 +01:00
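The exporting point above in miniature: unless a type is exported, other modules cannot reference it in their own specs, and dialyzer falls back to treating it as any(). The module name and type shape below are invented for the example.

```erlang
-module(codec_types_sketch).
-export_type([ledger_key/0]).

%% Exported so that other modules can write specs against
%% codec_types_sketch:ledger_key() and dialyzer can check them; an
%% unexported type is treated as any() from the outside.
-type ledger_key() :: {atom(), term(), term(), term()}.
```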
Martin Sumner
2063cacd8f More spec/doc work in leveled_codec
Note that at some stage KeyChanges got overloaded to mean {KeyChanges, TTL}, and the spec now tries to make this a bit clearer
2018-05-04 11:19:37 +01:00
Martin Sumner
aa34ffda5b Crash not skip on corrupted key 2018-05-03 20:14:36 +01:00
Martin Sumner
2cd20fcb47 Missed generate_uuid reference 2018-05-03 18:26:02 +01:00
Martin Sumner
f88f511df3 leveled_codec spec/doc
Try and make this code a little bit more organised and easier to follow
2018-05-03 17:18:13 +01:00
Martin Sumner
c1cd00b498 Allow ignore non-binary subkey for hash
This allows the subkey to be an integer, which will be ignored for hashing purposes
2018-03-22 22:07:24 +00:00
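A minimal sketch of the rule above, assuming the hash input simply drops an integer sub-key and includes a binary one; the module, function and use of phash2 are assumptions, not the leveled hashing code.

```erlang
-module(subkey_hash_sketch).
-export([hash_input/2]).

%% A binary sub-key contributes to the hash; an integer sub-key is ignored.
hash_input(Key, SubKey) when is_binary(SubKey) ->
    erlang:phash2({Key, SubKey});
hash_input(Key, SubKey) when is_integer(SubKey) ->
    erlang:phash2(Key).
```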
Martin Sumner
b81caf7dee segment_hash -> tictac
the concept of the segment hash belongs to the leveled_tictac module, not the codec.

Previously the alignment of tictac and store was accidental, this change makes it explicit.
2018-03-22 19:03:52 +00:00
Martin Sumner
6ce903ad2b Change segment_hash for HEAD
Needs to align with the ?RIAK tag, so that AAE key-ordered and segment-ordered stores also agree on the definition of Segment ID
2018-03-22 17:12:58 +00:00
Martin Sumner
fa532fbd27 Tidy set_status 2018-02-16 20:56:12 +00:00
Martin Sumner
090e414b23 Coverage issues
Not making a proxy object, so get_size is not required.

Extend tests to improve coverage
2018-02-16 20:27:49 +00:00
Martin Sumner
910ccb6072 Add lookup support in head_only mode
Originally had disabled the ability to look up individual values when running in head_only mode.  This is a saving of about 11% at PUT time (about 3 microseconds per PUT) on a macbook.

Not sure this saving is sufficient to justify the extra work if this is used as an AAE Keystore with Bitcask and LWW (when we need to look up the current value before adjusting).

So reverted to re-adding support for HEAD requests with these keys.
2018-02-16 14:16:28 +00:00
Martin Sumner
2b6281b2b5 Initial head_only features
Initial commit to add head_only mode to leveled.  This allows leveled to receive batches of object changes, but where those objects exist only in the Penciller's Ledger (once they have been persisted within the Ledger).

The aim is to reduce significantly the cost of compaction.  Also, the objects are not directly accessible (they can only be accessed through folds).  Again this makes life easier during merging in the LSM trees (as no bloom filters have to be created).
2018-02-15 16:14:46 +00:00
Martin Sumner
50c81d0626 Make ink fold more generic
Also makes the fold_from_sequence loop much easier to follow
2017-11-17 14:54:53 +00:00
Martin Sumner
0c498f293d Test out-of-date update
Check no recent_aae index is created
2017-11-10 10:08:30 +00:00
Martin Sumner
bea094aaf5 no non-binary objects in inker 2017-11-07 13:43:29 +00:00
Martin Sumner
332286f35c From inker kv - value cannot be a term 2017-11-07 13:42:12 +00:00
Martin Sumner
8f27b3b628 Merge branch 'master' into mas-aae-segementfoldplus 2017-11-07 11:22:56 +00:00
Martin Sumner
f358bd7622 Switch to using passed in compression method for maybe_compress
When the compaction discovers compression is required it will use the method passed in at startup - not the method which had previously been defined.
2017-11-06 21:16:46 +00:00
Martin Sumner
1d475235d1 Improve test coverage
Make compress on receipt/compaction configurable
2017-11-06 18:44:08 +00:00