Commit graph

47 commits

Author SHA1 Message Date
Martin Sumner
c642575caa
Support sub-key queries (#457)
* Support sub-key queries

Also requires a refactoring of types.

In head-only mode - the metadata in the ledger is just the value, and the value can be anything.  So the metadata() definition needs to reflect that.

There are then issues with app-defined functions for extracting metadata.  In theory an app-defined function could extract some unsupported type.  So it has been made explicit that the app-defined function must extract std_metadata() as metadata - otherwise functionality will not work.

This means that if it is an object key, that is not a ?HEAD key, then the Metadata must be a tuple (of either Riak or Standard type).

* Fix coverage issues
2024-11-18 14:51:23 +00:00
Martin Sumner
aaeac7ba36
Mas d34 i453 eqwalizer (#454)
* Add eqwalizer and clear for codec & sst

The eqwalizer errors highlighted the need in several places for type clarification.

Within tests there are some issues where a type is assumed, and so ignore has been used to handle this rather than write more complex code to be explicit about the assumption.

The handling of arrays by eqwalizer isn't great - being specific about the content of an array causes issues when initialising the array.  Perhaps a type (map maybe) where one can be more explicit about types might be a better option (even if there is a minimal performance impact).

The use of a ?TOMB_COUNT defined option complicated the code much more with eqwalizer.  So for now, there is no developer option to disable ?TOMB_COUNT.

Test fixes required where strings have been used for buckets/keys not binaries.

The leveled_sst statem needs a different state record for starting when compared to other modes.  The state record has been divided up to reflect this, to make type management easier.  The impact on performance needs to be tested.
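
As an illustration of that split (hypothetical record and field names, not the actual leveled_sst records), keeping a small record for the starting state avoids having to type every field as `undefined | T`:

```erlang
-module(sst_state_sketch).
-export([filename/1]).

%% Minimal record for the 'starting' state, before the file exists.
-record(starting, {owner :: pid(), filename :: string()}).
%% Fuller record for the reader states, where every field is populated.
-record(reader, {owner :: pid(),
                 filename :: string(),
                 handle :: file:io_device(),
                 summary :: binary()}).

filename(#starting{filename = FN}) -> FN;
filename(#reader{filename = FN}) -> FN.
```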

* Update ct tests to support binary keys/buckets only

* Eqwalizer for leveled_cdb and leveled_tictac

As array is used in leveled_tictac - there is the same issue as with leveled_sst

* Remove redundant indirection of leveled_rand

A legacy of pre-20 OTP

* More modules eqwalized

ebloom/log/util/monitor

* Eqwalize further modules

elp eqwalize leveled_codec; elp eqwalize leveled_sst; elp eqwalize leveled_cdb; elp eqwalize leveled_tictac; elp eqwalize leveled_log; elp eqwalize leveled_monitor; elp eqwalize leveled_head; elp eqwalize leveled_ebloom; elp eqwalize leveled_iclerk

All currently OK

* Refactor unit tests to use binary() not string() in key

Previously string() was allowed just to avoid having to change all these tests.  Go through the pain now, as part of eqwalizing.

* Add fixes for penciller, inker

Add a new ?IS_DEF macro to replace =/= undefined.
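
A minimal sketch of that macro and its use in a guard (the exact definition in leveled may differ):

```erlang
-module(is_def_sketch).
-export([describe/1]).

%% Guard-safe replacement for writing `X =/= undefined` inline.
-define(IS_DEF(X), X =/= undefined).

describe(Value) when ?IS_DEF(Value) -> {defined, Value};
describe(undefined) -> undefined.
```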

Now more explicit about primary, object and query keys

* Further fixes

Need to clarify functions used by the runner - where keys, query keys and object keys are used

* Further eqwalisation

* Eqwalize leveled_pmanifest

Also make implementation independent of choice of dict - i.e. one can save a manifest using dict for blooms/pending_deletions and then open a manifest with code that uses a different type.  Allow for slow dict to be replaced with map.

Would not be backwards compatible though, without further thought - i.e. if you upgrade then downgrade.

Redundant code created by leveled_sst refactoring removed.
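
A hedged sketch of the sort of normalisation implied (hypothetical helper, not leveled_pmanifest's code): whichever container type wrote the manifest, convert to the type the running code expects when opening it.

```erlang
-module(manifest_container_sketch).
-export([to_map/1]).

%% Normalise a persisted blooms/pending_deletions container to a map,
%% whether it was written by dict-based or map-based code.
to_map(Container) when is_map(Container) ->
    Container;
to_map(Container) ->
    %% assume anything else is a dict() written by older code
    maps:from_list(dict:to_list(Container)).
```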

* Fix backwards compatibility issues

* Manifest Entry to belong to leveled_pmanifest

There are two manifests - leveled_pmanifest and leveled_imanifest.  Both have manifest_entry() type objects, but these types are different.  To avoid confusion don't include the pmanifest manifest_entry() within the global include file - be specific that it belongs to the leveled_pmanifest module

* Ignore elp file - large binary

* Update src/leveled_pmem.erl

Remove unnecessary empty list from type definition

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2024-11-13 13:37:13 +00:00
Martin Sumner
54e3096020
Switch to logger (#442)
* Switch to logger

Use logger rather than io:format when logging.  The ct tests have been switched to log to file; testutil/init_per_suite/1 may offer useful guidance on configuring logger with leveled.

As all logs are produced by the leveled_log module, the MFA metadata is uninteresting for log outputs, but can be used for explicit filter controls for leveled logs.
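
For example (an assumption about how a user might configure this, not code from the repo), the module in the MFA metadata can drive a primary logger filter:

```erlang
-module(leveled_log_filter_sketch).
-export([install/0, filter/2]).

%% Stop every event emitted via the leveled_log module; let others pass.
filter(#{meta := #{mfa := {leveled_log, _F, _A}}}, _Extra) ->
    stop;
filter(_LogEvent, _Extra) ->
    ignore.

install() ->
    logger:add_primary_filter(drop_leveled, {fun ?MODULE:filter/2, []}).
```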

* iolist_to_binary not unicode_binary()

logger filters will error and be removed if the format line is a binary().  It must be either a charlist() or a unicode_binary() - so iolist_to_binary() can't be used.
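
One way to satisfy that constraint (an assumption about the substitution, not necessarily the exact fix applied) is to build a unicode binary instead:

```erlang
-module(log_line_sketch).
-export([format_line/2]).

%% Produce a unicode_binary() log line rather than the plain binary()
%% that iolist_to_binary/1 would give.
format_line(Format, Args) ->
    unicode:characters_to_binary(io_lib:format(Format, Args)).
```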

* Add metadata for filter

* Update test/end_to_end/tictac_SUITE.erl

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>

---------

Co-authored-by: Thomas Arts <thomas.arts@quviq.com>
2024-09-06 11:18:24 +01:00
Martin Sumner
c294570bce
Mas d31 nhskv16sst (#428)
* Add performance/profiling test

Add test to perf_SUITE to do performance tests and also profile different activities in leveled.

This can then be used to highlight functions with unexpectedly high execution times, and prove the impact of changes.

Switch between riak_ctperf and riak_fullperf to change from standard test (with profile option) to full-scale performance test
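
The profile figures further down look like eprof-style output; a generic sketch of profiling one activity that way (standard OTP tooling, not the perf_SUITE code itself):

```erlang
-module(profile_sketch).
-export([profile/1]).

%% Run Fun under eprof and print per-function call counts and times.
profile(Fun) when is_function(Fun, 0) ->
    eprof:start(),
    eprof:profile(Fun),      %% run the activity under the profiler
    eprof:analyze(total),    %% per-function totals, as in the tables below
    eprof:stop().
```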

* Change shape of default perfTest

* Refactor SST

Compare and contrast profile for guess, before and after refactor:

pre

```
lists:map_1/2                                         313370     2.33    32379  [      0.10]
lists:foldl_1/3                                       956590     4.81    66992  [      0.07]
leveled_sst:'-expand_list_by_pointer/5-fun-0-'/4      925020     6.13    85318  [      0.09]
erlang:binary_to_term/1                                 3881     8.55   119012  [     30.67]
erlang:'++'/2                                         974322    11.55   160724  [      0.16]
lists:member/2                                       4000180    15.00   208697  [      0.05]
leveled_sst:find_pos/4                               4029220    21.01   292347  [      0.07]
leveled_sst:member_check/2                           4000000    21.17   294601  [      0.07]
--------------------------------------------------  --------  -------  -------  [----------]
Total:                                              16894665  100.00%  1391759  [      0.08]
```

post

```
lists:map_1/2                                         63800     0.79    6795  [      0.11]
erlang:term_to_binary/1                               15726     0.81    6950  [      0.44]
lists:keyfind/3                                      180967     0.92    7884  [      0.04]
erlang:spawn_link/3                                   15717     1.08    9327  [      0.59]
leveled_sst:'-read_slots/5-fun-1-'/8                  31270     1.15    9895  [      0.32]
gen:do_call/4                                          7881     1.31   11243  [      1.43]
leveled_penciller:find_nextkey/8                     180936     2.01   17293  [      0.10]
prim_file:pread_nif/3                                 15717     3.89   33437  [      2.13]
leveled_sst:find_pos/4                              4028940    17.85  153554  [      0.04]
erlang:binary_to_term/1                               15717    51.97  447048  [     28.44]
--------------------------------------------------  -------  -------  ------  [----------]
Total:                                              6704100  100.00%  860233  [      0.13]
```

* Update leveled_penciller.erl

* Mas d31 nhskv16sstpcl (#426)

Performance updates to leveled:

- Refactoring of pointer expansion when fetching from leveled_sst files to avoid expensive list concatenation.
- Refactoring of leveled_ebloom to make more flexible, reduce code, and improve check time.
- Refactoring of querying within leveled_sst to reduce the number of blocks that need to be de-serialised per query.
- Refactoring of the leveled_penciller's query key comparator, to make use of maps and simplify the filtering.
- General speed-up of frequently called functions.
2024-01-22 21:22:54 +00:00
Martin Sumner
d544db5461
Mas d31 i413 (#415)
* Allow snapshots to be reused in queries

Allow for a full bookie snapshot to be re-used for multiple queries, not just KV fetches.

* Reduce log noise

The internal dummy tag is expected so should not prompt a log on reload

* Snapshot should have same status as active db

wrt head_only and head_lookup

* Allow logging to be specified on snapshots

* Shutdown snapshot bookie if primary goes down

Inker and Penciller already will shut down based on `erlang:monitor/2`
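
A generic sketch of that monitor-based pattern (simplified, with hypothetical state; not the bookie's actual code):

```erlang
-module(snapshot_monitor_sketch).
-export([init/1, handle_info/2]).

%% Monitor the primary process; stop this snapshot process if it goes down.
init(PrimaryPid) ->
    Ref = erlang:monitor(process, PrimaryPid),
    {ok, #{primary => PrimaryPid, monitor => Ref}}.

handle_info({'DOWN', Ref, process, _Pid, _Reason}, #{monitor := Ref} = State) ->
    {stop, normal, State};
handle_info(_Other, State) ->
    {noreply, State}.
```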

* Review feedback

Formatting and code readability fixes
2023-11-08 09:18:01 +00:00
Martin Sumner
b96518c32a
Use backwards compatible term_to_binary (#408)
* Use backwards compatible term_to_binary

So that where we have hashed term_to_binary output in OTP 25 or earlier, that hash will be matched in OTP 26.
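
OTP 26 changed the default minor_version used by term_to_binary/1, so pinning the option keeps the encoded form - and so its hash - stable across releases.  A sketch under the assumption that {minor_version, 1} is the pre-26 encoding being preserved (leveled's actual hash function may differ):

```erlang
-module(compat_hash_sketch).
-export([hash/1]).

%% Encode with an explicit minor_version so the same term produces the
%% same binary - and therefore the same hash - on OTP 25 and OTP 26.
hash(Term) ->
    erlang:phash2(term_to_binary(Term, [{minor_version, 1}])).
```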

* Test reliability

If all keys are put in order, the max_slots may not be used, as the driver at L0 is the penciller cache size, and merge to new files (managed by the parameter) only occurs when there are overlapping files in the level below
2023-10-05 10:33:20 +01:00
Martin Sumner
7509191466
Initial support for OTP 26 (#395)
* Initial support for OTP 26

* Extend timeout in test
2023-03-14 16:27:08 +00:00
Martin Sumner
a033e280e6
Develop 3.1 d30update (#386)
* Mas i370 patch d (#383)

* Refactor penciller memory

In high-volume tests on large key-count clusters, some significant variation in the P0031 time has been seen:

TimeBucket	PatchA
a.0ms_to_1ms	18554
b.1ms_to_2ms	51778
c.2ms_to_3ms	696
d.3ms_to_5ms	220
e.5ms_to_8ms	59
f.8ms_to_13ms	40
g.13ms_to_21ms	364
h.21ms_to_34ms	277
i.34ms_to_55ms	34
j.55ms_to_89ms	17
k.89ms_to_144ms	21
l.144ms_to_233ms	31
m.233ms_to_377ms	45
n.377ms_to_610ms	52
o.610ms_to_987ms	59
p.987ms_to_1597ms	55
q.1597ms_to_2684ms	54
r.2684ms_to_4281ms	29
s.4281ms_to_6965ms	7
t.6295ms_to_11246ms	1

It is unclear why this varies so much.  The time to add to the cache appears to be minimal (but perhaps there is an issue with timing points in the code), whereas the time to add to the index is much more significant and variable.  There is also variable time when the memory is rolled (although the actual activity here appears to be minimal).

The refactoring here is two-fold:

- tidy and simplify by keeping LoopState managed within handle_call, and add more helpful dialyzer specs;

- change the update to the index to be a simple extension of a list, rather than any conversion.

This alternative version of the pmem index is, in unit tests, orders of magnitude faster to add - and the same order of magnitude to check.  The anticipation is that it may be more efficient in terms of memory changes.

* Compress SST index

Reduces the size of the leveled_sst index with two changes:

1 - Where there is a common prefix of tuple elements (e.g. Bucket) across the whole leveled_sst file - only the non-common part is indexed, and a function is used to compare.

2 - There is less "indexing" of the index i.e. only 1 in 16 keys are passed into the gb_trees part instead of 1 in 4

* Immediate hibernate

Reasons for delay in hibernate were not clear.

Straight after creation the process will not be in receipt of messages (must wait for the manifest to be updated), so better to hibernate now.  This also means the log PC023 provides more accurate information.

* Refactor BIC

This patch avoids the following:

- repeated replacement of the same element in the BIC (via get_kvrange), by checking presence via GET before using SET

- Stops re-reading of all elements to discover high modified date

Also there appears to have been a bug where a missing HMD for the file was required in order to add to the cache.  However, the cache may be erased without erasing the HMD.  This means that the cache can never be rebuilt

* Use correct size in test results

erts_debug:flat_size/1 returns the size in words (i.e. 8 bytes per word on a 64-bit CPU), not bytes
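
The conversion is simple (generic OTP, shown for completeness):

```erlang
-module(flat_size_sketch).
-export([flat_size_in_bytes/1]).

%% erts_debug:flat_size/1 counts words; multiply by the word size in bytes.
flat_size_in_bytes(Term) ->
    erts_debug:flat_size(Term) * erlang:system_info(wordsize).
```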

* Don't change summary record

As it is persisted as part of the file write, any change to the summary record cannot be rolled back

* Clerk to prompt L0 write

Simplifies the logic if the clerk requesting work from the penciller prompts L0 writes as well as Manifest changes.

The advantage now is that if the penciller memory is full, and PUT load stops, the clerk should still be able to prompt persistence.  The penciller can therefore make use of dead time this way.

* Add push on journal compact

If there has been a backlog, followed by a quiet period - there may be a large ledger cache left unpushed.  Journal compaction events are about once per hour, so the performance overhead of a false push should be minimal, with the advantage of clearing any backlog before load starts again.

This is only relevant to riak users with very off/full batch type workloads.

* Extend tests

To more consistently trigger all overload scenarios

* Fix range keys smaller than prefix

Can't make end key an empty binary  in this case, as it may be bigger than any keys within the range, but will appear to be smaller.

Unit tests and ct tests added to expose the potential issue

* Tidy-up

- Remove penciller logs which are no longer called
- Get pclerk to only wait MIN_TIMEOUT after doing work, in case there is a backlog
- Remove update_levelzero_cache function as it is unique to handle_call of push_mem, and simple enough to be inline
- Align testutil slow offer with standard slow offer used

* Tidy-up

Remove pre-otp20 references.

Reinstate the check that the starting pid is still active, this was added to tidy up shutdown.

Resolve failure to run on otp20 due to `-if` statement

* Tidy up

Using null rather than {null, Key} is potentially clearer as it is not a concern what the Key is in this case, and removes a comparison step from the leveled_codec:endkey_passed/2 function.

There were issues with coverage in eunit tests as the leveled_pclerk shut down.  This prompted a general tidy of leveled_pclerk (remove passing of LoopState into internal functions, and add dialyzer specs).

* Remove R16 relic

* Further testing another issue

The StartKey must always be less than or equal to the prefix when the first N characters are stripped,  but this is not true of the EndKey (for the query) which does not have to be between the FirstKey and the LastKey.

If the EndKey query does not match, it must be greater than the Prefix (as otherwise it would not have been greater than the FirstKey) - so set to null.

* Fix unit test

Unit test had a typo - and result interpretation had a misunderstanding.

* Code and spec tidy

Also look to cover with tests the situation when the FirstKey is the same as the Prefix.

This is, in theory, not an issue as it is the EndKey for each sublist which is indexed in leveled_tree.  However, guard against it mapping to null here, just in case there are dangers lurking (note that tests will still pass without the `M > N` guard in place).

* Hibernate on BIC complete

There are three situations when the BIC becomes complete:

- In a file created as part of a merge, the BIC is learned in the merge
- After startup, files below L1 learn the block cache through reads that happen to read the block; eventually the whole cache will be read, unless...
- Either before/after the cache is complete, it can get wiped by a timeout after a get_sqn request (e.g. as prompted by a journal compaction) ... it will then be re-filled off the back of get/get-range requests.

In all these situations we want to hibernate after the BIC is full - to reflect the fact that the LoopState should now be relatively stable, so it is a good point to GC and rationalise the location of data.

Previously only the first case was covered.  Now all three are covered through the bic_complete message.

* Test all index keys have same term

This works functionally, but is not optimised (the term is replicated in the index)

* Summaries with same index term

If the entries in the summary index all have the same index term - only the object keys need to be indexed

* Simplify case statements

We either match the pattern of <<Prefix:N, Suffix>> or the answer should be null

* OK for M == N

If M = N for the first key, it will have a suffix of <<>>.  This will match (as expected) a query Start Key of the same size, and be smaller than any query Start Key that has the same prefix.

If the query Start Key does not match the prefix - it will be null - as it must be smaller than the Prefix (as otherwise the query Start Key would be bigger than the Last Key).

The constraint of M > N was introduced before the *_prefix_filter functions were checking the prefix, to avoid issues.  Now the prefix is being checked, then M == N is ok.
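
An illustrative sketch of that match-or-null shape (hypothetical function, simplified from the real filter):

```erlang
-module(prefix_filter_sketch).
-export([strip_prefix/2]).

%% Return the key's suffix when it shares the file's common prefix
%% (an exact match gives <<>>), or null when it does not.
strip_prefix(Prefix, Key) when is_binary(Prefix), is_binary(Key) ->
    N = byte_size(Prefix),
    case Key of
        <<Prefix:N/binary, Suffix/binary>> -> Suffix;
        _ -> null
    end.
```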

* Simplify

Correct the test to use a binary field in the range.

To avoid further issue, only apply filter when everything is a binary() type.

* Add test for head_only mode

When leveled is used as a tictacaae key store (in parallel mode), the keys will be head_only entries.  Double check they are handled as expected like object keys

* Revert previous change - must support typed buckets

Add assertion to confirm worthwhile optimisation

* Add support for configurable cache multiple (#375)

* Mas i370 patch e (#385)

Improvement to monitoring for efficiency and improved readability of logs and stats.

As part of this, where possible, tried to avoid updating loop state on READ messages in leveled processes (as was the case when tracking stats within each process).

No performance benefits found with change, but improved stats has helped discover other potential gains.
2022-12-18 20:18:03 +00:00
Martin Sumner
2cb87f3a6e
Develop 3.1 eqc (#339)
* Add EQC profile

Don't run EQC tests, unless specifically requested e.g. `./rebar3 as eqc eunit --module=leveled_eqc`

* Remove eqc_cover parse_transform

Causes a compilation issue with the assertException in leveled_runner

* Allow EQC test to compile

EQC only works on OTP 22 for now, but other tests should still work on OTP 22 and OTP 24

* Add more complex statem based eqc test

* Add check for eqc profile
2021-05-26 17:40:09 +01:00
Martin Sumner
eeeb7498c0 Review feedback
Also add types to try and help avoid further confusion
2020-12-04 19:40:28 +00:00
Martin Sumner
108527a8d9 is_active check within fold
Not within the fold fun of the leveled_runner.

This should avoid constantly having to re-merge and filter the penciller memory when running list_buckets and hitting inactive keys
2020-12-04 12:49:17 +00:00
Martin Sumner
156e7b064d Compaction, retain and recovery
Change the penciller check so that it returns current/replaced/missing not just true/false.

Reduce unnecessary penciller checks for non-standard keys that will always be retained - and remove redundant code.

Expand tests of retain and recover to make sure that compaction on delete is well covered.

Also move the SQN number along during initial loads - to stop an aggressive loop to find the starting SQN for every file.
2020-03-09 15:12:48 +00:00
Martin Sumner
db1486fa36 Check SQN order fold does not fold beyond end of snapshot
The Journal snapshot is not a true snapshot, in that the active file in the snapshot can still be taking appends.  So when folding over the snapshot it is necessary to check that the SQN is <= the JournalSQN at the point the snapshot was taken.

Normally consistency of the snapshot is managed as the operation depends on the penciller, and the penciller *is* a snapshot.  Not in this case, as the penciller will return true on a sqn check if the pcl SQN is behind the Journal.  So the Journal folder has been given an additional check to stop at the JournalSQN.

This is perhaps a fault in the pcl check sqn, which should only return true on an exact match?  I'm nervous about changing this though, so we have a less pure fix for now.
2019-02-14 21:14:11 +00:00
Martin Sumner
8e687ee7c8 Add user-defined functions
To allow for extraction of metadata, and building of head responses - it should be possible to dynamically add user-defined tags, and functions to treat them.

If no function is defined, revert to the behaviour of the ?STD tag.
2018-12-06 21:00:59 +00:00
Martin Sumner
881b93229b Isolate better changes needed to support changes to metadata extraction
More obvious how to extend the code as it is all in one module.

Also add a new field to the standard object metadata tuple that may in the future hold other object metadata based on user-defined functions.
2018-12-06 15:31:11 +00:00
Martin Sumner
a9aa23bc9c Bucket list
Update the docs to advertise the throw capability.  Test it for bucket list (and fix ordering of bucket lists)
2018-11-23 18:56:30 +00:00
Martin Sumner
ef2a8c62af Add capability to exit a head or object fold with a throw
This allows for all fold functions to throw an exception to exit out of a fold with all dependencies still closed down as expected.

This was previously available for key folds, which was necessary for the folds to work in Riak (as max_results in index queries depends on exiting the fold with an exception).  This change now adds a ct test, and adds support for head folds, object folds (key order) and object folds (sqn order)
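
A generic sketch of the wrapping involved (hypothetical names): the throw escapes the fold, but the snapshot is still closed on the way out.

```erlang
-module(fold_throw_sketch).
-export([wrap_fold/2]).

%% Run the fold; whether it completes or the fold fun throws to exit
%% early, the snapshot is closed before the result (or throw) propagates.
wrap_fold(Folder, CloseSnapshotFun) ->
    try
        Folder()
    after
        CloseSnapshotFun()
    end.
```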
2018-11-23 16:00:11 +00:00
Martin Sumner
174a40aab2 Tidy up unexported types
also the re:mp() type may not be exported in R16
2018-11-05 16:02:19 +00:00
Martin Sumner
2eec8a5378 MaxCount monitoring and responding
Stop the issue of {no_more_keys, Acc} being passed on, in a fold over a list of ranges, to the next range (and blowing up)
2018-11-01 23:40:28 +00:00
Martin Sumner
71fa1447e0 Allow for all keys head folds to use a modified range
This helps with kv_index_tictactree with the leveled_so backend.  Now this can do folds over ranges of keys with modified filters (as folds over ranges of keys must go over all keys if the backend is segment_ordered)
2018-11-01 17:30:18 +00:00
Martin Sumner
aaccd09a98 Allow for setting max_keys to wrap Acc
Acc in response is now of form {Reason, Acc} not just Acc so that the application can understand the reason for the results ending - and take appropriate action (e.g. restart again from the LastKey to return more results).
2018-10-31 14:22:28 +00:00
Martin Sumner
11627bbdd9 Extend API
To support max_keys and the last modified date range.

This applies the last modified date check on all ledger folds.  This is hard to avoid, but ultimately a very low cost.

The limit on the number of heads to fold is the limit based on passing to the accumulator - not on the limit being added to the accumulator.  So if the FoldFun performs a filter (e.g. for the preflist), then those filtered results will still count towards the maximum.

There needs to be some way at the end of signalling from the fold whether the outcome was or was not 'constrained' by max_keys - as the fold cannot simply tell by length checking the outcome.

Note this is used rather than length checking the buffer and throwing a 'stop_fold' message when the limit is reached.  The choice is made for simplicity, and ease of testing.  The throw mechanism is necessary if there is a need to stop parallel folds across the cluster - but in this case the node_worker_pool will be used.
2018-10-31 00:09:24 +00:00
Martin Sumner
baa4466923 Remove knowledge of tuple length from ledger value
Nothing should now care about the current tuple length - and hence the tuple length may be increased (for example to add a max_mod_date)
2018-10-29 20:24:54 +00:00
Martin Sumner
f4b365438c Further comments in API docs 2018-09-24 20:43:21 +01:00
Martin Sumner
bed155761b Added comments
This is still a clumsy feature, in terms of implementation.

Is the fact that some folds handle a throw, and some don't an issue?
2018-09-24 20:05:48 +01:00
Martin Sumner
a9b097e392 Add a wrapper to fold_keys queries
Queries that in Riak will be based on fold_keys need to be able to catch throws, and re-throw them to be detected by the worker (whilst still clearing up the snapshot)
2018-09-24 19:54:28 +01:00
Martin Sumner
1a3d3daa89 Add regex support to $key index
Regex to be applied to key only
2018-09-21 12:04:32 +01:00
Russell Brown
5a95e82af0 Callers of bucket list expect the traversal to be in order
Due to the internal fold over buckets returning an un-reversed
accumulator, the API bucketlist code caller's fold fun traversed the
bucket list in reverse order. This led to some inconsistencies when
comparing a bucket list of all buckets vs the first bucket only, i.e.
the 'first' bucket passed to the foldfun was in fact the last bucket
read from the ledger.
2018-09-18 15:40:44 +01:00
Martin Sumner
bde188e691 Constrain keys
Rather than supporting any() - constrain at least to binary()/integer() or string().
2018-09-01 12:10:56 +01:00
Martin Sumner
50967438d3 Switch from binary_bucketlist
Allow for bucket listing of non-binary buckets (integer buckets, buckets with ascii strings)
2018-09-01 10:39:23 +01:00
Martin Sumner
41fb83abd1 Add tests for is_empty
Where keys are strings or integers, and where subkeys are involved
2018-08-31 15:29:38 +01:00
Martin Sumner
a14941a122 Fix unexported types
file:location not exported?
2018-06-04 10:57:37 +01:00
Martin Sumner
06fd9856b5 Add debug log 2018-05-16 10:34:47 +01:00
Martin Sumner
6a20b2ce66 Use leveled_codec types
... and exporting them.

Previously types were not exported, and it appears dialyzer treated them as any() when they were unexported types ??!!??
2018-05-04 15:24:08 +01:00
Martin Sumner
f88f511df3 leveled_codec spec/doc
Try and make this code a little bit more organised and easier to follow
2018-05-03 17:18:13 +01:00
Russell Brown
92582de4dd WIP: changes for backend_eqc test
Enable support for $key and $bucket index
2018-04-12 10:06:29 +01:00
Martin Sumner
cda412508a IsEmpty check
Previously there was no is_empty check, and there was a workaround using binary_bucketlist.  But what if there were many buckets - this is a slow seek (using get next key over and over).

Instead have a proper is_empty check.
2018-03-21 15:31:00 +00:00
Martin Sumner
d21d18dd82 Remove rogue log 2018-03-01 23:37:05 +00:00
Martin Sumner
861aa5a7db Support multi-query fold
Allow a single snapshot to run a query over multiple ranges.  Used initially to fold over multiple buckets.
2018-03-01 23:19:52 +00:00
Martin Sumner
2b6281b2b5 Initial head_only features
Initial commit to add head_only mode to leveled.  This allows leveled to receive batches of object changes, but where those objects exist only in the Penciller's Ledger (once they have been persisted within the Ledger).

The aim is to reduce significantly the cost of compaction.  Also, the objects are not directly accessible (they can only be accessed through folds).  Again this makes life easier during merging in the LSM trees (as no bloom filters have to be created).
2018-02-15 16:14:46 +00:00
Martin Sumner
0e071d078e fold_objects in SQN order
This adds a test that fold_objects works in SQN order
2017-11-17 18:30:51 +00:00
Martin Sumner
50c81d0626 Make ink fold more generic
Also makes the fold_from_sequence loop much easier to follow
2017-11-17 14:54:53 +00:00
Martin Sumner
c8ad39b33b foldheads_bybucket adds segment list support
Accelerate queries for foldheads_bybucket as well
2017-11-01 22:00:12 +00:00
Martin Sumner
b141dd199c Allow for segment-acceleration of folds
Initially with basic tests.  If the SlotIndex has been cached, we can now use the slot index as it is based on the Segment hash algorithm.

This looks like it should lead to an order of magnitude improvement in querying for keys/clocks by segment ID.

This also required a slight tweak to the penciller keyfolder.  It now caches the next answer from the SSTiter, rather than restarting the iterator.  When the IMMiter has many more entries than the SSTiter (as the SSTiter is being filtered but not the IMMiter) this could lead to lots of repeated folding.
2017-10-31 23:28:35 +00:00
Martin Sumner
6bb7ceef0c Attempt to standardise on segment hashes
To allow for the segment hash that accelerates queries to be re-used in tictac tree related queries.
2017-10-30 13:57:41 +00:00
Martin Sumner
84239955ed Clarify wording 2017-10-17 22:31:11 +01:00
Martin Sumner
bfaed921e6 Split code for folders - introduce runner actor
Introduce a dedicated module for all the different fold types.  Also simplify the list of folders by deprecating those folds that should be achievable by fold_heads/fold_objects type folds but with smarter functions.

Makes sure that the fold functions also have better spec coverage, and are dialyzer checked.
2017-10-17 20:39:11 +01:00