commit 4099f05616
24 changed files with 1499 additions and 578 deletions

README.md (96 changed lines)
@@ -1,2 +1,94 @@

The previous two-line README ("# eleveleddb" / "Experiment for learning more about LSM trees") is replaced with:

# LeveledDB

## Overview

LeveledDB is an experimental Key/Value store based on the Log-Structured Merge Tree concept, written in Erlang. It is not currently suitable for production systems, but is intended to provide a proof of concept of the potential benefits of different design trade-offs in LSM Trees.

The specific goals of this implementation are:

- Be simple and straightforward to understand and extend
- Support objects which have keys, secondary indexes, a value and potentially some metadata which provides a useful subset of the information in the value
- Support a HEAD request which has a lower cost than a GET request, so that requests requiring access only to metadata can gain efficiency by saving the full cost of returning the entire value
- Try to reduce write amplification when compared with LevelDB, to reduce disk contention but also to make rsync-style backup strategies more efficient

The system context for the store at conception is as a Riak backend store with a complete set of backend capabilities, but one intended to be used with relatively frequent iterators, and values of non-trivial size (e.g. > 4KB).
## Implementation

The store is written in Erlang using the actor model, the primary actors being:

- A Bookie
- An Inker
- A Penciller
- Worker Clerks
- File Clerks

### The Bookie

The Bookie provides the public interface of the store, liaising with the Inker and the Penciller to resolve requests to put new objects and to fetch those objects. The Bookie keeps a copy of key changes and object metadata associated with recent modifications, but otherwise has no direct access to state within the store. The Bookie can replicate the Penciller and the Inker to provide clones of the store. These clones can be used for querying across the store at a given snapshot.
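To make this concrete, here is a minimal sketch of driving the Bookie's public interface. The book_put/6, book_get/4 and book_close/1 calls are taken from the unit tests changed in this commit; the book_start/1 option list and the plain `o` tag are assumptions for illustration, not confirmed API.

```erlang
%% Illustrative sketch only - option names follow set_options/1 in this commit,
%% and the object tag atom is assumed.
{ok, Bookie} = leveled_bookie:book_start([{root_path, "/tmp/leveled_demo"},
                                          {max_journalsize, 100000000}]),
%% PUT: the full object goes to the Journal; the key change and metadata reach
%% the Ledger in a later batch.
ok = leveled_bookie:book_put(Bookie, <<"Bucket1">>, <<"Key1">>,
                             <<"Value1">>, [], o),
%% GET resolves the metadata via the Ledger, then fetches the object from the
%% Journal.
{ok, <<"Value1">>} = leveled_bookie:book_get(Bookie, <<"Bucket1">>, <<"Key1">>, o),
ok = leveled_bookie:book_close(Bookie).
```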
### The Inker

The Inker is responsible for keeping the Journal of all changes which have been made to the store, with new writes being appended to the end of the latest Journal file. The Journal is an ordered log of activity by sequence number.

Changes to the store should be acknowledged if and only if they have been persisted to the Journal. The Inker can find a value in the Journal through a manifest which provides a map between sequence numbers and Journal files. The Inker can only efficiently find a value in the store if the sequence number is known, and so the sequence number is always part of the metadata maintained by the Penciller in the Ledger.

The Inker can also scan the Journal from a particular sequence number, for example to recover the Penciller's lost in-memory state following a shutdown.
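The manifest idea can be illustrated with a hypothetical helper (this is not the Inker's actual code): the manifest is held as a list of entries ordered newest first, and a lookup walks to the first entry whose starting sequence number is at or below the requested sequence number.

```erlang
%% Hypothetical illustration of the Journal manifest concept.
%% Manifest: [{LowestSQN, Filename, JournalFileClerkPid}], newest entry first.
find_journal_for_sqn(SQN, [{LowSQN, _FN, Pid} | _Rest]) when SQN >= LowSQN ->
    {ok, Pid};                       % this file covers the sequence number
find_journal_for_sqn(SQN, [_Newer | Rest]) ->
    find_journal_for_sqn(SQN, Rest);
find_journal_for_sqn(_SQN, []) ->
    missing.
```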
### The Penciller

The Penciller is responsible for maintaining a Ledger of Keys, Index entries and Metadata (including the sequence number) that represents a near-real-time view of the contents of the store. The Ledger is a merge tree ordered into Levels of exponentially increasing size, with each level being ordered across files and within files by Key. Get requests are handled by checking each level in turn - from the top (Level 0) to the basement (up to Level 8). The first match for a given key is the returned answer.
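The "first match wins" rule can be expressed as a short sketch (hypothetical code, not the Penciller's implementation); because newer levels are checked first, the most recent version of a key shadows any older versions further down the tree.

```erlang
%% Hypothetical sketch: levels ordered from Level 0 (newest) to the basement.
%% lookup_in_level/2 is an assumed helper returning {Key, Metadata} | not_present.
fetch_from_levels(_Key, []) ->
    not_present;
fetch_from_levels(Key, [Level | LowerLevels]) ->
    case lookup_in_level(Key, Level) of
        {Key, Metadata} ->
            {Key, Metadata};                  % newest version found - stop here
        not_present ->
            fetch_from_levels(Key, LowerLevels)
    end.
```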
Changes ripple down the levels in batches and require frequent rewriting of files, in particular at higher levels. As the Ledger does not contain the full object values, the write amplification associated with the flow down the levels is limited to the size of the key and metadata.

The Penciller keeps an in-memory view of new changes that have yet to be persisted in the Ledger, and at startup can request the Inker to replay any missing changes by scanning the Journal.
### Worker Clerks

Both the Inker and the Penciller must undertake compaction work. The Inker must garbage collect replaced or deleted objects from the Journal. The Penciller must merge files down the tree to free up capacity for new writes at the top of the Ledger.

The Penciller and the Inker each make use of their own dedicated Clerk for completing this work. The Clerk will add all new files necessary to represent the new view of that part of the store, and then update the Inker/Penciller with the new manifest that represents that view. Once the update has been acknowledged, any removed files can be marked as delete_pending, and they will poll the Inker (if a Journal file) or the Penciller (if a Ledger file) to confirm that no users of the system still depend on the old snapshot of the store being maintained.
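A condensed sketch of that delete_pending polling cycle is shown below. It is based on the cdb_deletepending/3 and ink_confirmdelete/2 calls reworked in this commit's leveled_cdb changes, but simplified into a plain loop rather than the gen_fsm timeout the real module uses.

```erlang
%% Simplified illustration of a delete_pending file polling its owner.
delete_pending_loop(Inker, ManSQN, Filename) ->
    case leveled_inker:ink_confirmdelete(Inker, ManSQN) of
        true ->
            %% No snapshot still depends on the superseded manifest.
            ok = file:delete(Filename);
        false ->
            %% Re-check after a pause (the real module uses a gen_fsm timeout).
            timer:sleep(10000),
            delete_pending_loop(Inker, ManSQN, Filename)
    end.
```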
### File Clerks

Every file within the store is owned by its own dedicated process (modelled as a finite state machine). Files are never created or accessed directly by the Inker or the Penciller; interactions with the files are managed through messages sent to the File Clerk processes which own the files.

The File Clerks themselves are ignorant of their context within the store. For example, a file in the Ledger does not know what level of the Tree it resides in. The state of the store is represented by the Manifest, which maintains a picture of the store and contains the process IDs of the File Clerks which represent the files.

Cloning of the store does not require any file-system level activity - a clone simply needs to know the manifest so that it can independently make requests of the File Clerk processes, and register itself with the Inker/Penciller so that those files are not deleted whilst the clone is active.

The Journal files use a constant database format almost exactly replicating the CDB format originally designed by DJ Bernstein. The Ledger files use a bespoke format which is based on Google's SST format, with the primary difference being that the bloom filters used to protect against unnecessary lookups are based on the Riak Segment IDs of the key, and use single-hash rice-encoded sets rather than using the traditional bloom filter size-optimisation model of extending the number of hashes used to reduce the false-positive rate.

File Clerks spend a short initial portion of their life in a writable state. Once they have left the writing state, they will, for the remainder of their life-cycle, be in an immutable read-only state.
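As an illustration of the Journal file clerk's life-cycle, the calls below follow the sequence exercised by this commit's leveled_cdb unit tests: write into a pending (.pnd) file, complete it into an immutable CDB file, then re-open it read-only. File paths are illustrative.

```erlang
%% Calls as exercised in the leveled_cdb unit tests in this commit.
{ok, W} = leveled_cdb:cdb_open_writer("../test/demo.pnd",
                                      #cdb_options{binary_mode=false}),
ok = leveled_cdb:cdb_put(W, "Key1", "Value1"),
{"Key1", "Value1"} = leveled_cdb:cdb_get(W, "Key1"),
{ok, FN} = leveled_cdb:cdb_complete(W),            % roll to an immutable file
{ok, R} = leveled_cdb:cdb_open_reader(FN),
probably = leveled_cdb:cdb_keycheck(R, "Key1"),    % hash-table check only
{"Key1", "Value1"} = leveled_cdb:cdb_get(R, "Key1"),
ok = leveled_cdb:cdb_close(R).
```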
## Paths

The PUT path for new objects and object changes depends on the Bookie interacting with the Inker to ensure that the change has been persisted to the Journal; the Ledger is updated in batches after the PUT has been completed.

The HEAD path needs the Bookie to look in its cache of recent Ledger changes, and if the change is not present, to consult the Penciller.

The GET path follows the HEAD path, but once the sequence number has been determined through the response from the Ledger, the object itself is fetched from the Journal via the Inker.

All other queries (folds over indexes, keys and objects) are managed by cloning either the Penciller, or the Penciller and the Inker.
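The index fold below shows what such a cloned query looks like from the caller's side; it is lifted almost directly from the ttl_test changes in this commit. book_returnfolder/2 sets up the snapshot and returns a fun, and the fold only runs when that fun is called.

```erlang
%% From the ttl_test changes in this commit (variable names shortened).
FoldKeysFun = fun(_Bucket, Item, Acc) -> Acc ++ [Item] end,
{async, IndexFolder} =
    leveled_bookie:book_returnfolder(Bookie,
                                     {index_query,
                                      "Bucket",
                                      {FoldKeysFun, []},
                                      {"idx1_bin", "f8", "f9"},
                                      {false, undefined}}),
KeyList = IndexFolder().   % executes the fold against the snapshot
```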
## Trade-Offs

Further information on specific design trade-off decisions is provided for:

- What is a log-structured merge tree?
- Memory management
- Backup and Recovery
- The Penciller memory
- File formats
- Stalling, pausing and back-pressure
- Riak Anti-Entropy
- Riak and HEAD requests
- Riak and alternative queries

## Naming Things is Hard

The naming of actors within the model is very loosely based on the slang associated with an on-course Bookmaker.

## Learning

The project was started in part as a learning exercise. This is my first Erlang project, and it has been used to try to familiarise myself with Erlang concepts. However, there are undoubtedly many lessons still to be learned about how to write good Erlang OTP applications.

@@ -46,6 +46,7 @@
 -record(cdb_options,
         {max_size :: integer(),
         file_path :: string(),
+        waste_path :: string(),
         binary_mode = false :: boolean()}).

 -record(inker_options,

@@ -55,6 +56,7 @@
         start_snapshot = false :: boolean(),
         source_inker :: pid(),
         reload_strategy = [] :: list(),
+        waste_retention_period :: integer(),
         max_run_length}).

 -record(penciller_options,

@@ -66,7 +68,8 @@
 -record(iclerk_options,
         {inker :: pid(),
         max_run_length :: integer(),
-        cdb_options :: #cdb_options{},
+        cdb_options = #cdb_options{} :: #cdb_options{},
+        waste_retention_period :: integer(),
         reload_strategy = [] :: list()}).

 -record(r_content, {

@@ -132,7 +132,8 @@
          book_snapshotledger/3,
          book_compactjournal/2,
          book_islastcompactionpending/1,
-         book_close/1]).
+         book_close/1,
+         book_destroy/1]).

 -export([get_opt/2,
          get_opt/3]).

@@ -214,6 +215,9 @@ book_islastcompactionpending(Pid) ->
 book_close(Pid) ->
     gen_server:call(Pid, close, infinity).

+book_destroy(Pid) ->
+    gen_server:call(Pid, destroy, infinity).
+
 %%%============================================================================
 %%% gen_server callbacks
 %%%============================================================================

@@ -335,19 +339,29 @@ handle_call({return_folder, FolderType}, _From, State) ->
             {reply,
                 bucket_stats(State, Bucket, ?RIAK_TAG),
                 State};
+        {binary_bucketlist, Tag, {FoldKeysFun, Acc}} ->
+            {reply,
+                binary_bucketlist(State, Tag, {FoldKeysFun, Acc}),
+                State};
         {index_query,
-                Bucket,
+                Constraint,
+                {FoldKeysFun, Acc},
                 {IdxField, StartValue, EndValue},
                 {ReturnTerms, TermRegex}} ->
             {reply,
                 index_query(State,
-                            Bucket,
+                            Constraint,
+                            {FoldKeysFun, Acc},
                             {IdxField, StartValue, EndValue},
                             {ReturnTerms, TermRegex}),
                 State};
-        {keylist, Tag} ->
+        {keylist, Tag, {FoldKeysFun, Acc}} ->
             {reply,
-                allkey_query(State, Tag),
+                allkey_query(State, Tag, {FoldKeysFun, Acc}),
+                State};
+        {keylist, Tag, Bucket, {FoldKeysFun, Acc}} ->
+            {reply,
+                bucketkey_query(State, Tag, Bucket, {FoldKeysFun, Acc}),
                 State};
         {hashtree_query, Tag, JournalCheck} ->
             {reply,

@@ -382,7 +396,9 @@ handle_call({compact_journal, Timeout}, _From, State) ->
 handle_call(confirm_compact, _From, State) ->
     {reply, leveled_inker:ink_compactionpending(State#state.inker), State};
 handle_call(close, _From, State) ->
-    {stop, normal, ok, State}.
+    {stop, normal, ok, State};
+handle_call(destroy, _From, State=#state{is_snapshot=Snp}) when Snp == false ->
+    {stop, destroy, ok, State}.

 handle_cast(_Msg, State) ->
     {noreply, State}.

@@ -390,6 +406,13 @@ handle_cast(_Msg, State) ->
 handle_info(_Info, State) ->
     {noreply, State}.

+terminate(destroy, State) ->
+    leveled_log:log("B0011", []),
+    {ok, InkPathList} = leveled_inker:ink_doom(State#state.inker),
+    {ok, PCLPathList} = leveled_penciller:pcl_doom(State#state.penciller),
+    lists:foreach(fun(DirPath) -> delete_path(DirPath) end, InkPathList),
+    lists:foreach(fun(DirPath) -> delete_path(DirPath) end, PCLPathList),
+    ok;
 terminate(Reason, State) ->
     leveled_log:log("B0003", [Reason]),
     ok = leveled_inker:ink_close(State#state.inker),

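The new book_destroy/1 path above stops the store and removes its Journal and Ledger directories (via ink_doom/pcl_doom and delete_path/1), whereas book_close/1 stops the store but leaves the files in place. A small usage sketch, with the root_path option and the `o` tag assumed for illustration:

```erlang
%% Illustrative only - handy for tearing down throwaway test stores.
{ok, Bookie} = leveled_bookie:book_start([{root_path, "/tmp/leveled_scratch"}]),
ok = leveled_bookie:book_put(Bookie, <<"B">>, <<"K">>, <<"V">>, [], o),
ok = leveled_bookie:book_destroy(Bookie).   % stop and delete the on-disk files
```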
@@ -424,10 +447,9 @@ bucket_stats(State, Bucket, Tag) ->
              end,
     {async, Folder}.

-index_query(State,
-            Bucket,
-            {IdxField, StartValue, EndValue},
-            {ReturnTerms, TermRegex}) ->
+binary_bucketlist(State, Tag, {FoldBucketsFun, InitAcc}) ->
+    % List buckets for tag, assuming bucket names are all binary type
     {ok,
         {LedgerSnapshot, LedgerCache},
         _JournalSnapshot} = snapshot_store(State, ledger),

@@ -435,22 +457,83 @@ index_query(State,
                 leveled_log:log("B0004", [gb_trees:size(LedgerCache)]),
                 ok = leveled_penciller:pcl_loadsnapshot(LedgerSnapshot,
                                                         LedgerCache),
-                StartKey = leveled_codec:to_ledgerkey(Bucket, null, ?IDX_TAG,
-                                                      IdxField, StartValue),
-                EndKey = leveled_codec:to_ledgerkey(Bucket, null, ?IDX_TAG,
-                                                    IdxField, EndValue),
+                BucketAcc = get_nextbucket(null,
+                                           Tag,
+                                           LedgerSnapshot,
+                                           []),
+                ok = leveled_penciller:pcl_close(LedgerSnapshot),
+                lists:foldl(fun({B, _K}, Acc) -> FoldBucketsFun(B, Acc) end,
+                            InitAcc,
+                            BucketAcc)
+                end,
+    {async, Folder}.
+
+get_nextbucket(NextBucket, Tag, LedgerSnapshot, BKList) ->
+    StartKey = leveled_codec:to_ledgerkey(NextBucket, null, Tag),
+    EndKey = leveled_codec:to_ledgerkey(null, null, Tag),
+    ExtractFun = fun(LK, _V, _Acc) -> leveled_codec:from_ledgerkey(LK) end,
+    BK = leveled_penciller:pcl_fetchnextkey(LedgerSnapshot,
+                                            StartKey,
+                                            EndKey,
+                                            ExtractFun,
+                                            null),
+    case BK of
+        null ->
+            leveled_log:log("B0008", []),
+            BKList;
+        {B, K} when is_binary(B) ->
+            leveled_log:log("B0009", [B]),
+            get_nextbucket(<<B/binary, 0>>,
+                           Tag,
+                           LedgerSnapshot,
+                           [{B, K}|BKList]);
+        NB ->
+            leveled_log:log("B0010", [NB]),
+            []
+    end.
+
+index_query(State,
+            Constraint,
+            {FoldKeysFun, InitAcc},
+            {IdxField, StartValue, EndValue},
+            {ReturnTerms, TermRegex}) ->
+    {ok,
+        {LedgerSnapshot, LedgerCache},
+        _JournalSnapshot} = snapshot_store(State, ledger),
+    {Bucket, StartObjKey} =
+        case Constraint of
+            {B, SK} ->
+                {B, SK};
+            B ->
+                {B, null}
+        end,
+    Folder = fun() ->
+                leveled_log:log("B0004", [gb_trees:size(LedgerCache)]),
+                ok = leveled_penciller:pcl_loadsnapshot(LedgerSnapshot,
+                                                        LedgerCache),
+                StartKey = leveled_codec:to_ledgerkey(Bucket,
+                                                      StartObjKey,
+                                                      ?IDX_TAG,
+                                                      IdxField,
+                                                      StartValue),
+                EndKey = leveled_codec:to_ledgerkey(Bucket,
+                                                    null,
+                                                    ?IDX_TAG,
+                                                    IdxField,
+                                                    EndValue),
                 AddFun = case ReturnTerms of
                              true ->
-                                 fun add_terms/3;
+                                 fun add_terms/2;
                              _ ->
-                                 fun add_keys/3
+                                 fun add_keys/2
                          end,
-                AccFun = accumulate_index(TermRegex, AddFun),
+                AccFun = accumulate_index(TermRegex, AddFun, FoldKeysFun),
                 Acc = leveled_penciller:pcl_fetchkeys(LedgerSnapshot,
                                                       StartKey,
                                                       EndKey,
                                                       AccFun,
-                                                      []),
+                                                      InitAcc),
                 ok = leveled_penciller:pcl_close(LedgerSnapshot),
                 Acc
                 end,

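The get_nextbucket/4 recursion above is how binary_bucketlist avoids folding over every key: pcl_fetchnextkey returns the first key at or after StartKey, and the recursion then re-seeds the search with <<B/binary, 0>>, the smallest binary that sorts immediately after bucket B, so each iteration jumps straight to the next bucket. That makes bucket listing one Ledger probe per bucket rather than one per key. A tiny illustration of the binary ordering this relies on (illustrative, not library code):

```erlang
%% <<B/binary, 0>> sorts after B itself but before any later bucket name.
true = (<<"bucketA">> < <<"bucketA", 0>>),
true = (<<"bucketA", 0>> < <<"bucketB">>).
```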
@@ -535,7 +618,7 @@ foldobjects(State, Tag, StartKey, EndKey, FoldObjectsFun) ->
     {async, Folder}.


-allkey_query(State, Tag) ->
+bucketkey_query(State, Tag, Bucket, {FoldKeysFun, InitAcc}) ->
     {ok,
         {LedgerSnapshot, LedgerCache},
         _JournalSnapshot} = snapshot_store(State, ledger),

@@ -543,19 +626,22 @@ allkey_query(State, Tag) ->
                 leveled_log:log("B0004", [gb_trees:size(LedgerCache)]),
                 ok = leveled_penciller:pcl_loadsnapshot(LedgerSnapshot,
                                                         LedgerCache),
-                SK = leveled_codec:to_ledgerkey(null, null, Tag),
-                EK = leveled_codec:to_ledgerkey(null, null, Tag),
-                AccFun = accumulate_keys(),
+                SK = leveled_codec:to_ledgerkey(Bucket, null, Tag),
+                EK = leveled_codec:to_ledgerkey(Bucket, null, Tag),
+                AccFun = accumulate_keys(FoldKeysFun),
                 Acc = leveled_penciller:pcl_fetchkeys(LedgerSnapshot,
                                                       SK,
                                                       EK,
                                                       AccFun,
-                                                      []),
+                                                      InitAcc),
                 ok = leveled_penciller:pcl_close(LedgerSnapshot),
                 lists:reverse(Acc)
                 end,
     {async, Folder}.

+allkey_query(State, Tag, {FoldKeysFun, InitAcc}) ->
+    bucketkey_query(State, Tag, null, {FoldKeysFun, InitAcc}).
+

 snapshot_store(State, SnapType) ->
     PCLopts = #penciller_options{start_snapshot=true,

@@ -576,11 +662,14 @@ snapshot_store(State, SnapType) ->
 set_options(Opts) ->
     MaxJournalSize = get_opt(max_journalsize, Opts, 10000000000),
+
+    WRP = get_opt(waste_retention_period, Opts),
+
     AltStrategy = get_opt(reload_strategy, Opts, []),
     ReloadStrategy = leveled_codec:inker_reload_strategy(AltStrategy),
+
     PCLL0CacheSize = get_opt(max_pencillercachesize, Opts),
     RootPath = get_opt(root_path, Opts),
+
     JournalFP = RootPath ++ "/" ++ ?JOURNAL_FP,
     LedgerFP = RootPath ++ "/" ++ ?LEDGER_FP,
     ok = filelib:ensure_dir(JournalFP),

@@ -589,6 +678,7 @@ set_options(Opts) ->
     {#inker_options{root_path = JournalFP,
                     reload_strategy = ReloadStrategy,
                     max_run_length = get_opt(max_run_length, Opts),
+                    waste_retention_period = WRP,
                     cdb_options = #cdb_options{max_size=MaxJournalSize,
                                                binary_mode=true}},
      #penciller_options{root_path = LedgerFP,

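set_options/1 now threads a waste_retention_period option from the Bookie's startup options through #inker_options{} down to the Inker's clerk. A sketch of supplying it, assuming book_start/1 accepts the same proplist that get_opt/2,3 reads; the value is in seconds, matching the 86400 default defined in leveled_iclerk later in this commit:

```erlang
%% Option names follow set_options/1; values are illustrative.
Opts = [{root_path, "/tmp/leveled_store"},
        {max_journalsize, 100000000},
        {waste_retention_period, 86400}],   % keep compacted journal waste a day
{ok, Bookie} = leveled_bookie:book_start(Opts).
```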
@@ -701,35 +791,36 @@ check_presence(Key, Value, InkerClone) ->
             false
     end.

-accumulate_keys() ->
+accumulate_keys(FoldKeysFun) ->
     Now = leveled_codec:integer_now(),
-    AccFun = fun(Key, Value, KeyList) ->
+    AccFun = fun(Key, Value, Acc) ->
                     case leveled_codec:is_active(Key, Value, Now) of
                         true ->
-                            [leveled_codec:from_ledgerkey(Key)|KeyList];
+                            {B, K} = leveled_codec:from_ledgerkey(Key),
+                            FoldKeysFun(B, K, Acc);
                         false ->
-                            KeyList
+                            Acc
                     end
              end,
     AccFun.

-add_keys(ObjKey, _IdxValue, Acc) ->
-    Acc ++ [ObjKey].
+add_keys(ObjKey, _IdxValue) ->
+    ObjKey.

-add_terms(ObjKey, IdxValue, Acc) ->
-    Acc ++ [{IdxValue, ObjKey}].
+add_terms(ObjKey, IdxValue) ->
+    {IdxValue, ObjKey}.

-accumulate_index(TermRe, AddFun) ->
+accumulate_index(TermRe, AddFun, FoldKeysFun) ->
     Now = leveled_codec:integer_now(),
     case TermRe of
         undefined ->
             fun(Key, Value, Acc) ->
                 case leveled_codec:is_active(Key, Value, Now) of
                     true ->
-                        {_Bucket,
+                        {Bucket,
                          ObjKey,
                          IdxValue} = leveled_codec:from_ledgerkey(Key),
-                        AddFun(ObjKey, IdxValue, Acc);
+                        FoldKeysFun(Bucket, AddFun(ObjKey, IdxValue), Acc);
                     false ->
                         Acc
                 end end;

@@ -737,14 +828,16 @@ accumulate_index(TermRe, AddFun) ->
             fun(Key, Value, Acc) ->
                 case leveled_codec:is_active(Key, Value, Now) of
                     true ->
-                        {_Bucket,
+                        {Bucket,
                          ObjKey,
                          IdxValue} = leveled_codec:from_ledgerkey(Key),
                         case re:run(IdxValue, TermRe) of
                             nomatch ->
                                 Acc;
                             _ ->
-                                AddFun(ObjKey, IdxValue, Acc)
+                                FoldKeysFun(Bucket,
+                                            AddFun(ObjKey, IdxValue),
+                                            Acc)
                         end;
                     false ->
                         Acc

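The pattern in the two hunks above is that add_keys/2 and add_terms/2 now only build the item (a key, or a {term, key} pair), while the caller-supplied FoldKeysFun decides how to accumulate it, moving control of the fold's shape out to the caller. Two FoldKeysFun shapes that fit the new fun(Bucket, Item, Acc) contract, purely as illustration:

```erlang
%% Collect items into a list (the shape the updated unit tests use).
ListFoldFun = fun(_Bucket, Item, Acc) -> Acc ++ [Item] end,
%% Or simply count matching entries without keeping them.
CountFoldFun = fun(_Bucket, _Item, Count) -> Count + 1 end.
```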
@@ -836,16 +929,16 @@ get_opt(Key, Opts) ->
 get_opt(Key, Opts, Default) ->
     case proplists:get_value(Key, Opts) of
         undefined ->
-            case application:get_env(?MODULE, Key) of
-                {ok, Value} ->
-                    Value;
-                undefined ->
-                    Default
-            end;
+            Default;
         Value ->
             Value
     end.

+delete_path(DirPath) ->
+    ok = filelib:ensure_dir(DirPath),
+    {ok, Files} = file:list_dir(DirPath),
+    [file:delete(filename:join([DirPath, File])) || File <- Files],
+    file:del_dir(DirPath).
+
 %%%============================================================================
 %%% Test

@@ -930,28 +1023,28 @@ multi_key_test() ->
     C2 = #r_content{metadata=MD2, value=V2},
     Obj2 = #r_object{bucket=B2, key=K2, contents=[C2], vclock=[{'a',1}]},
     ok = book_put(Bookie1, B1, K1, Obj1, Spec1, ?RIAK_TAG),
-    ObjL1 = generate_multiple_robjects(100, 3),
+    ObjL1 = generate_multiple_robjects(20, 3),
     SW1 = os:timestamp(),
     lists:foreach(fun({O, S}) ->
                       {B, K} = leveled_codec:riakto_keydetails(O),
                       ok = book_put(Bookie1, B, K, O, S, ?RIAK_TAG)
                       end,
                   ObjL1),
-    io:format("PUT of 100 objects completed in ~w microseconds~n",
+    io:format("PUT of 20 objects completed in ~w microseconds~n",
               [timer:now_diff(os:timestamp(),SW1)]),
     ok = book_put(Bookie1, B2, K2, Obj2, Spec2, ?RIAK_TAG),
     {ok, F1A} = book_get(Bookie1, B1, K1, ?RIAK_TAG),
     ?assertMatch(F1A, Obj1),
     {ok, F2A} = book_get(Bookie1, B2, K2, ?RIAK_TAG),
     ?assertMatch(F2A, Obj2),
-    ObjL2 = generate_multiple_robjects(100, 103),
+    ObjL2 = generate_multiple_robjects(20, 23),
     SW2 = os:timestamp(),
     lists:foreach(fun({O, S}) ->
                       {B, K} = leveled_codec:riakto_keydetails(O),
                       ok = book_put(Bookie1, B, K, O, S, ?RIAK_TAG)
                       end,
                   ObjL2),
-    io:format("PUT of 100 objects completed in ~w microseconds~n",
+    io:format("PUT of 20 objects completed in ~w microseconds~n",
               [timer:now_diff(os:timestamp(),SW2)]),
     {ok, F1B} = book_get(Bookie1, B1, K1, ?RIAK_TAG),
     ?assertMatch(F1B, Obj1),

@@ -964,14 +1057,14 @@ multi_key_test() ->
     ?assertMatch(F1C, Obj1),
     {ok, F2C} = book_get(Bookie2, B2, K2, ?RIAK_TAG),
     ?assertMatch(F2C, Obj2),
-    ObjL3 = generate_multiple_robjects(100, 203),
+    ObjL3 = generate_multiple_robjects(20, 43),
     SW3 = os:timestamp(),
     lists:foreach(fun({O, S}) ->
                       {B, K} = leveled_codec:riakto_keydetails(O),
                       ok = book_put(Bookie2, B, K, O, S, ?RIAK_TAG)
                       end,
                   ObjL3),
-    io:format("PUT of 100 objects completed in ~w microseconds~n",
+    io:format("PUT of 20 objects completed in ~w microseconds~n",
               [timer:now_diff(os:timestamp(),SW3)]),
     {ok, F1D} = book_get(Bookie2, B1, K1, ?RIAK_TAG),
     ?assertMatch(F1D, Obj1),

@@ -1020,10 +1113,12 @@ ttl_test() ->
                                            {bucket_stats, "Bucket"}),
     {_Size, Count} = BucketFolder(),
     ?assertMatch(100, Count),
+    FoldKeysFun = fun(_B, Item, FKFAcc) -> FKFAcc ++ [Item] end,
     {async,
      IndexFolder} = book_returnfolder(Bookie1,
                                       {index_query,
                                        "Bucket",
+                                       {FoldKeysFun, []},
                                        {"idx1_bin", "f8", "f9"},
                                        {false, undefined}}),
     KeyList = IndexFolder(),

@@ -1034,6 +1129,7 @@ ttl_test() ->
      IndexFolderTR} = book_returnfolder(Bookie1,
                                         {index_query,
                                          "Bucket",
+                                         {FoldKeysFun, []},
                                          {"idx1_bin", "f8", "f9"},
                                          {true, Regex}}),
     TermKeyList = IndexFolderTR(),

@@ -1046,6 +1142,7 @@ ttl_test() ->
      IndexFolderTR2} = book_returnfolder(Bookie2,
                                          {index_query,
                                           "Bucket",
+                                          {FoldKeysFun, []},
                                           {"idx1_bin", "f7", "f9"},
                                           {false, Regex}}),
     KeyList2 = IndexFolderTR2(),

@@ -1165,5 +1262,10 @@ foldobjects_vs_hashtree_test() ->
     ok = book_close(Bookie1),
     reset_filestructure().

+coverage_cheat_test() ->
+    {noreply, _State0} = handle_info(timeout, #state{}),
+    {ok, _State1} = code_change(null, #state{}, null),
+    {noreply, _State2} = handle_cast(null, #state{}).
+
 -endif.

@@ -1,21 +1,19 @@
+%% -------- CDB File Clerk ---------
 %%
 %% This is a modified version of the cdb module provided by Tom Whitcomb.
 %%
 %% - https://github.com/thomaswhitcomb/erlang-cdb
 %%
+%% The CDB module is an implementation of the constant database format
+%% described by DJ Bernstein
+%%
+%% - https://cr.yp.to/cdb.html
+%%
 %% The primary differences are:
 %% - Support for incrementally writing a CDB file while keeping the hash table
 %% in memory
-%% - The ability to scan a database and accumulate all the Key, Values to
-%% rebuild in-memory tables on startup
 %% - The ability to scan a database in blocks of sequence numbers
-%%
-%% This is to be used in eleveledb, and in this context:
-%% - Keys will be a combination of the PrimaryKey and the Sequence Number
-%% - Values will be a serialised version of the whole object, and the
-%% IndexChanges associated with the transaction
-%% Where the IndexChanges are all the Key changes required to be added to the
-%% ledger to complete the changes (the addition of postings and tombstones).
+%% - The application of a CRC check by default to all values
 %%
 %% This module provides functions to create and query a CDB (constant database).
 %% A CDB implements a two-level hashtable which provides fast {key,value}

@@ -81,6 +79,7 @@
          cdb_complete/1,
          cdb_roll/1,
          cdb_returnhashtable/3,
+         cdb_checkhashtable/1,
          cdb_destroy/1,
          cdb_deletepending/1,
          cdb_deletepending/3,

@@ -107,7 +106,8 @@
                 binary_mode = false :: boolean(),
                 delete_point = 0 :: integer(),
                 inker :: pid(),
-                deferred_delete = false :: boolean()}).
+                deferred_delete = false :: boolean(),
+                waste_path :: string()}).


 %%%============================================================================

@@ -151,21 +151,7 @@ cdb_directfetch(Pid, PositionList, Info) ->
     gen_fsm:sync_send_event(Pid, {direct_fetch, PositionList, Info}, infinity).

 cdb_close(Pid) ->
-    cdb_close(Pid, ?PENDING_ROLL_WAIT).
-
-cdb_close(Pid, WaitsLeft) ->
-    if
-        WaitsLeft > 0 ->
-            case gen_fsm:sync_send_all_state_event(Pid, cdb_close, infinity) of
-                pending_roll ->
-                    timer:sleep(1),
-                    cdb_close(Pid, WaitsLeft - 1);
-                R ->
-                    R
-            end;
-        true ->
-            gen_fsm:sync_send_event(Pid, cdb_kill, infinity)
-    end.
+    gen_fsm:sync_send_all_state_event(Pid, cdb_close, infinity).

 cdb_complete(Pid) ->
     gen_fsm:sync_send_event(Pid, cdb_complete, infinity).

@@ -176,10 +162,14 @@ cdb_roll(Pid) ->
 cdb_returnhashtable(Pid, IndexList, HashTreeBin) ->
     gen_fsm:sync_send_event(Pid, {return_hashtable, IndexList, HashTreeBin}, infinity).

+cdb_checkhashtable(Pid) ->
+    gen_fsm:sync_send_event(Pid, check_hashtable).
+
 cdb_destroy(Pid) ->
     gen_fsm:send_event(Pid, destroy).

 cdb_deletepending(Pid) ->
+    % Only used in unit tests
     cdb_deletepending(Pid, 0, no_poll).

 cdb_deletepending(Pid, ManSQN, Inker) ->

@@ -230,7 +220,9 @@ init([Opts]) ->
     end,
     {ok,
      starting,
-     #state{max_size=MaxSize, binary_mode=Opts#cdb_options.binary_mode}}.
+     #state{max_size=MaxSize,
+            binary_mode=Opts#cdb_options.binary_mode,
+            waste_path=Opts#cdb_options.waste_path}}.

 starting({open_writer, Filename}, _From, State) ->
     leveled_log:log("CDB01", [Filename]),

@@ -343,9 +335,8 @@ rolling({return_hashtable, IndexList, HashTreeBin}, _From, State) ->
                            filename=NewName,
                            hash_index=Index}}
     end;
-rolling(cdb_kill, _From, State) ->
-    {stop, killed, ok, State}.
+rolling(check_hashtable, _From, State) ->
+    {reply, false, rolling, State}.


 rolling({delete_pending, ManSQN, Inker}, State) ->
     {next_state,

@@ -409,7 +400,9 @@ reader({direct_fetch, PositionList, Info}, _From, State) ->
     end;
 reader(cdb_complete, _From, State) ->
     ok = file:close(State#state.handle),
-    {stop, normal, {ok, State#state.filename}, State#state{handle=undefined}}.
+    {stop, normal, {ok, State#state.filename}, State#state{handle=undefined}};
+reader(check_hashtable, _From, State) ->
+    {reply, true, reader, State}.


 reader({delete_pending, 0, no_poll}, State) ->

@@ -439,32 +432,23 @@ delete_pending({key_check, Key}, _From, State) ->
         State,
         ?DELETE_TIMEOUT}.

-delete_pending(timeout, State) ->
-    case State#state.delete_point of
-        0 ->
-            {next_state, delete_pending, State};
-        ManSQN ->
-            case is_process_alive(State#state.inker) of
-                true ->
-                    case leveled_inker:ink_confirmdelete(State#state.inker,
-                                                         ManSQN) of
-                        true ->
-                            leveled_log:log("CDB04", [State#state.filename,
-                                                      ManSQN]),
-                            {stop, normal, State};
-                        false ->
-                            {next_state,
-                             delete_pending,
-                             State,
-                             ?DELETE_TIMEOUT}
-                    end;
-                false ->
-                    {stop, normal, State}
-            end
-    end;
+delete_pending(timeout, State=#state{delete_point=ManSQN}) when ManSQN > 0 ->
+    case is_process_alive(State#state.inker) of
+        true ->
+            case leveled_inker:ink_confirmdelete(State#state.inker, ManSQN) of
+                true ->
+                    leveled_log:log("CDB04", [State#state.filename, ManSQN]),
+                    {stop, normal, State};
+                false ->
+                    {next_state,
+                     delete_pending,
+                     State,
+                     ?DELETE_TIMEOUT}
+            end;
+        false ->
+            {stop, normal, State}
+    end;
 delete_pending(destroy, State) ->
-    ok = file:close(State#state.handle),
-    ok = file:delete(State#state.filename),
     {stop, normal, State}.

@@ -503,11 +487,8 @@ handle_sync_event(cdb_firstkey, _From, StateName, State) ->
     {reply, FirstKey, StateName, State};
 handle_sync_event(cdb_filename, _From, StateName, State) ->
     {reply, State#state.filename, StateName, State};
-handle_sync_event(cdb_close, _From, rolling, State) ->
-    {reply, pending_roll, rolling, State};
 handle_sync_event(cdb_close, _From, _StateName, State) ->
-    ok = file:close(State#state.handle),
-    {stop, normal, ok, State#state{handle=undefined}}.
+    {stop, normal, ok, State}.

 handle_event(_Msg, StateName, State) ->
     {next_state, StateName, State}.

@@ -517,13 +498,18 @@ handle_info(_Msg, StateName, State) ->

 terminate(Reason, StateName, State) ->
     leveled_log:log("CDB05", [State#state.filename, Reason]),
-    case {State#state.handle, StateName} of
-        {undefined, _} ->
+    case {State#state.handle, StateName, State#state.waste_path} of
+        {undefined, _, _} ->
             ok;
-        {Handle, delete_pending} ->
+        {Handle, delete_pending, undefined} ->
+            ok = file:close(Handle),
+            ok = file:delete(State#state.filename);
+        {Handle, delete_pending, WasteFP} ->
             file:close(Handle),
-            file:delete(State#state.filename);
-        {Handle, _} ->
+            Components = filename:split(State#state.filename),
+            NewName = WasteFP ++ lists:last(Components),
+            file:rename(State#state.filename, NewName);
+        {Handle, _, _} ->
             file:close(Handle)
     end.

@@ -907,12 +893,13 @@ startup_scan_over_file(Handle, Position) ->
 %% cdb file, and returns at the end the hashtree and the final Key seen in the
 %% journal

-startup_filter(Key, ValueAsBin, Position, {Hashtree, LastKey}, _ExtractFun) ->
+startup_filter(Key, ValueAsBin, Position, {Hashtree, _LastKey}, _ExtractFun) ->
     case crccheck_value(ValueAsBin) of
         true ->
-            {loop, {put_hashtree(Key, Position, Hashtree), Key}};
-        false ->
-            {stop, {Hashtree, LastKey}}
+            % This function is preceded by a "safe read" of the key and value
+            % and so the crccheck should always be true, as a failed check
+            % should not reach this stage
+            {loop, {put_hashtree(Key, Position, Hashtree), Key}}
     end.

@@ -1106,9 +1093,9 @@ search_hash_table(Handle, [Entry|RestOfEntries], Hash, Key, QuickCheck) ->
                 _ ->
                     KV
             end;
-        0 ->
-            % Hash is 0 so key must be missing as 0 found before Hash matched
-            missing;
+        %0 ->
+        %    % Hash is 0 so key must be missing as 0 found before Hash matched
+        %    missing;
         _ ->
             search_hash_table(Handle, RestOfEntries, Hash, Key, QuickCheck)
     end.

@@ -1344,14 +1331,10 @@ dump(FileName) ->
                 case read_next_term(Handle, VL, crc) of
                     {_, Value} ->
                         {ok, CurrLoc} = file:position(Handle, cur),
-                        Return =
-                            case get(Handle, Key) of
-                                {Key,Value} -> {Key ,Value};
-                                X -> {wonky, X}
-                            end
+                        {Key,Value} = get(Handle, Key)
                 end,
                 {ok, _} = file:position(Handle, CurrLoc),
-                [Return | Acc]
+                [{Key,Value} | Acc]
             end,
     lists:foldr(Fn1, [], lists:seq(0, NumberOfPairs-1)).

@@ -1699,18 +1682,29 @@ get_keys_byposition_manykeys_test() ->
                                #cdb_options{binary_mode=false}),
     KVList = generate_sequentialkeys(KeyCount, []),
     lists:foreach(fun({K, V}) -> cdb_put(P1, K, V) end, KVList),
-    SW1 = os:timestamp(),
+    ok = cdb_roll(P1),
+    % Should not return positions when rolling
+    ?assertMatch([], cdb_getpositions(P1, 10)),
+    lists:foldl(fun(X, Complete) ->
+                        case Complete of
+                            true ->
+                                true;
+                            false ->
+                                case cdb_checkhashtable(P1) of
+                                    true ->
+                                        true;
+                                    false ->
+                                        timer:sleep(X),
+                                        false
+                                end
+                        end end,
+                        false,
+                        lists:seq(1, 20)),
+    ?assertMatch(10, length(cdb_getpositions(P1, 10))),
     {ok, F2} = cdb_complete(P1),
-    SW2 = os:timestamp(),
-    io:format("CDB completed in ~w microseconds~n",
-              [timer:now_diff(SW2, SW1)]),
     {ok, P2} = cdb_open_reader(F2, #cdb_options{binary_mode=false}),
-    SW3 = os:timestamp(),
-    io:format("CDB opened for read in ~w microseconds~n",
-              [timer:now_diff(SW3, SW2)]),
     PositionList = cdb_getpositions(P2, all),
-    io:format("Positions fetched in ~w microseconds~n",
-              [timer:now_diff(os:timestamp(), SW3)]),
     L1 = length(PositionList),
     ?assertMatch(L1, KeyCount),

@@ -1776,6 +1770,49 @@ state_test() ->
     ?assertMatch({"Key1", "Value1"}, cdb_get(P1, "Key1")),
     ok = cdb_close(P1).

+hashclash_test() ->
+    {ok, P1} = cdb_open_writer("../test/hashclash_test.pnd",
+                               #cdb_options{binary_mode=false}),
+    Key1 = "Key4184465780",
+    Key99 = "Key4254669179",
+    KeyNF = "Key9070567319",
+    ?assertMatch(22, hash(Key1)),
+    ?assertMatch(22, hash(Key99)),
+    ?assertMatch(22, hash(KeyNF)),
+
+    ok = cdb_mput(P1, [{Key1, 1}, {Key99, 99}]),
+
+    ?assertMatch(probably, cdb_keycheck(P1, Key1)),
+    ?assertMatch(probably, cdb_keycheck(P1, Key99)),
+    ?assertMatch(probably, cdb_keycheck(P1, KeyNF)),
+
+    ?assertMatch({Key1, 1}, cdb_get(P1, Key1)),
+    ?assertMatch({Key99, 99}, cdb_get(P1, Key99)),
+    ?assertMatch(missing, cdb_get(P1, KeyNF)),
+
+    {ok, FN} = cdb_complete(P1),
+    {ok, P2} = cdb_open_reader(FN),
+
+    ?assertMatch(probably, cdb_keycheck(P2, Key1)),
+    ?assertMatch(probably, cdb_keycheck(P2, Key99)),
+    ?assertMatch(probably, cdb_keycheck(P2, KeyNF)),
+
+    ?assertMatch({Key1, 1}, cdb_get(P2, Key1)),
+    ?assertMatch({Key99, 99}, cdb_get(P2, Key99)),
+    ?assertMatch(missing, cdb_get(P2, KeyNF)),
+
+    ok = cdb_deletepending(P2),
+
+    ?assertMatch(probably, cdb_keycheck(P2, Key1)),
+    ?assertMatch(probably, cdb_keycheck(P2, Key99)),
+    ?assertMatch(probably, cdb_keycheck(P2, KeyNF)),
+
+    ?assertMatch({Key1, 1}, cdb_get(P2, Key1)),
+    ?assertMatch({Key99, 99}, cdb_get(P2, Key99)),
+    ?assertMatch(missing, cdb_get(P2, KeyNF)),
+
+    ok = cdb_close(P2).
+
 corruptfile_test() ->
     file:delete("../test/corrupt_test.pnd"),
     {ok, P1} = cdb_open_writer("../test/corrupt_test.pnd",

|
||||||
lists:foreach(fun(Offset) -> corrupt_testfile_at_offset(Offset) end,
|
lists:foreach(fun(Offset) -> corrupt_testfile_at_offset(Offset) end,
|
||||||
lists:seq(1, 40)),
|
lists:seq(1, 40)),
|
||||||
ok = file:delete("../test/corrupt_test.pnd").
|
ok = file:delete("../test/corrupt_test.pnd").
|
||||||
|
|
||||||
corrupt_testfile_at_offset(Offset) ->
|
corrupt_testfile_at_offset(Offset) ->
|
||||||
{ok, F1} = file:open("../test/corrupt_test.pnd", ?WRITE_OPS),
|
{ok, F1} = file:open("../test/corrupt_test.pnd", ?WRITE_OPS),
|
||||||
{ok, EofPos} = file:position(F1, eof),
|
{ok, EofPos} = file:position(F1, eof),
|
||||||
|
@ -1806,4 +1843,39 @@ corrupt_testfile_at_offset(Offset) ->
|
||||||
?assertMatch({"Key100", "Value100"}, cdb_get(P2, "Key100")),
|
?assertMatch({"Key100", "Value100"}, cdb_get(P2, "Key100")),
|
||||||
ok = cdb_close(P2).
|
ok = cdb_close(P2).
|
||||||
|
|
||||||
|
crc_corrupt_writer_test() ->
|
||||||
|
file:delete("../test/corruptwrt_test.pnd"),
|
||||||
|
{ok, P1} = cdb_open_writer("../test/corruptwrt_test.pnd",
|
||||||
|
#cdb_options{binary_mode=false}),
|
||||||
|
KVList = generate_sequentialkeys(100, []),
|
||||||
|
ok = cdb_mput(P1, KVList),
|
||||||
|
?assertMatch(probably, cdb_keycheck(P1, "Key1")),
|
||||||
|
?assertMatch({"Key1", "Value1"}, cdb_get(P1, "Key1")),
|
||||||
|
?assertMatch({"Key100", "Value100"}, cdb_get(P1, "Key100")),
|
||||||
|
ok = cdb_close(P1),
|
||||||
|
{ok, Handle} = file:open("../test/corruptwrt_test.pnd", ?WRITE_OPS),
|
||||||
|
{ok, EofPos} = file:position(Handle, eof),
|
||||||
|
% zero the last byte of the last value
|
||||||
|
ok = file:pwrite(Handle, EofPos - 5, <<0:8/integer>>),
|
||||||
|
ok = file:close(Handle),
|
||||||
|
{ok, P2} = cdb_open_writer("../test/corruptwrt_test.pnd",
|
||||||
|
#cdb_options{binary_mode=false}),
|
||||||
|
?assertMatch(probably, cdb_keycheck(P2, "Key1")),
|
||||||
|
?assertMatch({"Key1", "Value1"}, cdb_get(P2, "Key1")),
|
||||||
|
?assertMatch(missing, cdb_get(P2, "Key100")),
|
||||||
|
ok = cdb_put(P2, "Key100", "Value100"),
|
||||||
|
?assertMatch({"Key100", "Value100"}, cdb_get(P2, "Key100")),
|
||||||
|
ok = cdb_close(P2).
|
||||||
|
|
||||||
|
nonsense_coverage_test() ->
|
||||||
|
{ok, Pid} = gen_fsm:start(?MODULE, [#cdb_options{}], []),
|
||||||
|
ok = gen_fsm:send_all_state_event(Pid, nonsense),
|
||||||
|
?assertMatch({next_state, reader, #state{}}, handle_info(nonsense,
|
||||||
|
reader,
|
||||||
|
#state{})),
|
||||||
|
?assertMatch({ok, reader, #state{}}, code_change(nonsense,
|
||||||
|
reader,
|
||||||
|
#state{},
|
||||||
|
nonsense)).
|
||||||
|
|
||||||
-endif.
|
-endif.
|
||||||
|
|
|
@@ -69,8 +69,9 @@
 %% https://github.com/afiskon/erlang-uuid-v4/blob/master/src/uuid.erl
 generate_uuid() ->
     <<A:32, B:16, C:16, D:16, E:48>> = crypto:rand_bytes(16),
-    io_lib:format("~8.16.0b-~4.16.0b-4~3.16.0b-~4.16.0b-~12.16.0b",
-                  [A, B, C band 16#0fff, D band 16#3fff bor 16#8000, E]).
+    L = io_lib:format("~8.16.0b-~4.16.0b-4~3.16.0b-~4.16.0b-~12.16.0b",
+                      [A, B, C band 16#0fff, D band 16#3fff bor 16#8000, E]),
+    binary_to_list(list_to_binary(L)).

 inker_reload_strategy(AltList) ->
     ReloadStrategy0 = [{?RIAK_TAG, retain}, {?STD_TAG, retain}],

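The change above is needed because io_lib:format/2 returns a deep character list rather than a flat string; round-tripping through a binary flattens it. For the ASCII output produced here, lists:flatten/1 would give the same result - a small check as illustration:

```erlang
%% Both expressions yield the same flat string from io_lib's deep list.
L = io_lib:format("~8.16.0b", [1234]),
Flat = binary_to_list(list_to_binary(L)),
Flat = lists:flatten(L).
```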
@@ -41,6 +41,17 @@
 %% as a way of directly representing a change, and where anti-entropy can
 %% recover from a loss.
 %%
+%% -------- Removing Compacted Files ---------
+%%
+%% Once a compaction job is complete, and the manifest change has been
+%% committed, the individual journal files will get a deletion prompt. The
+%% Journal processes should copy the file to the waste folder, before erasing
+%% themselves.
+%%
+%% The Inker will have a waste duration setting, and before running compaction
+%% should delete all over-age items (using the file modified date) from the
+%% waste.
+%%
 %% -------- Tombstone Reaping ---------
 %%
 %% Value compaction does not remove tombstones from the database, and so a

@@ -54,7 +65,7 @@
 %% before the tombstone. If no such objects exist for that tombstone, it can
 %% now be reaped as part of the compaction job.
 %%
-%% Other tombstones cannot be reaped, as otherwis eon laoding a ledger an old
+%% Other tombstones cannot be reaped, as otherwise on loading a ledger an old
 %% version of the object may re-emerge.

 -module(leveled_iclerk).

@@ -88,10 +99,13 @@
 -define(MAXRUN_COMPACTION_TARGET, 80.0).
 -define(CRC_SIZE, 4).
 -define(DEFAULT_RELOAD_STRATEGY, leveled_codec:inker_reload_strategy([])).
+-define(DEFAULT_WASTE_RETENTION_PERIOD, 86400).

 -record(state, {inker :: pid(),
                 max_run_length :: integer(),
                 cdb_options,
+                waste_retention_period :: integer(),
+                waste_path :: string(),
                 reload_strategy = ?DEFAULT_RELOAD_STRATEGY :: list()}).

 -record(candidate, {low_sqn :: integer(),

@@ -129,32 +143,41 @@ clerk_stop(Pid) ->

 init([IClerkOpts]) ->
     ReloadStrategy = IClerkOpts#iclerk_options.reload_strategy,
-    case IClerkOpts#iclerk_options.max_run_length of
-        undefined ->
-            {ok, #state{max_run_length = ?MAX_COMPACTION_RUN,
-                        inker = IClerkOpts#iclerk_options.inker,
-                        cdb_options = IClerkOpts#iclerk_options.cdb_options,
-                        reload_strategy = ReloadStrategy}};
-        MRL ->
-            {ok, #state{max_run_length = MRL,
-                        inker = IClerkOpts#iclerk_options.inker,
-                        cdb_options = IClerkOpts#iclerk_options.cdb_options,
-                        reload_strategy = ReloadStrategy}}
-    end.
+    CDBopts = IClerkOpts#iclerk_options.cdb_options,
+    WP = CDBopts#cdb_options.waste_path,
+    WRP = case IClerkOpts#iclerk_options.waste_retention_period of
+              undefined ->
+                  ?DEFAULT_WASTE_RETENTION_PERIOD;
+              WRP0 ->
+                  WRP0
+          end,
+    MRL = case IClerkOpts#iclerk_options.max_run_length of
+              undefined ->
+                  ?MAX_COMPACTION_RUN;
+              MRL0 ->
+                  MRL0
+          end,
+
+    {ok, #state{max_run_length = MRL,
+                inker = IClerkOpts#iclerk_options.inker,
+                cdb_options = CDBopts,
+                reload_strategy = ReloadStrategy,
+                waste_path = WP,
+                waste_retention_period = WRP}}.

 handle_call(_Msg, _From, State) ->
     {reply, not_supported, State}.

 handle_cast({compact, Checker, InitiateFun, FilterFun, Inker, _Timeout},
             State) ->
+    % Empty the waste folder
+    clear_waste(State),
     % Need to fetch manifest at start rather than have it be passed in
     % Don't want to process a queued call waiting on an old manifest
     [_Active|Manifest] = leveled_inker:ink_getmanifest(Inker),
     MaxRunLength = State#state.max_run_length,
     {FilterServer, MaxSQN} = InitiateFun(Checker),
     CDBopts = State#state.cdb_options,
-    FP = CDBopts#cdb_options.file_path,
-    ok = filelib:ensure_dir(FP),
-
     Candidates = scan_all_files(Manifest, FilterFun, FilterServer, MaxSQN),
     BestRun0 = assess_candidates(Candidates, MaxRunLength),

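handle_cast({compact, ...}) now calls clear_waste(State), which is not shown in this extract. Based on the module comment added above ("delete all over-age items (using the file modified date) from the waste"), a minimal sketch of what such a function might look like is given below; this is an assumption for illustration, not the committed implementation.

```erlang
%% Hypothetical sketch only - the committed clear_waste/1 is not in this extract.
clear_waste(State) ->
    WP = State#state.waste_path,
    WRP = State#state.waste_retention_period,
    Cutoff = calendar:datetime_to_gregorian_seconds(calendar:local_time()) - WRP,
    {ok, Files} = file:list_dir(WP),
    lists:foreach(
        fun(FN) ->
            FullPath = filename:join(WP, FN),
            LastMod = calendar:datetime_to_gregorian_seconds(
                          filelib:last_modified(FullPath)),
            case LastMod < Cutoff of
                true -> ok = file:delete(FullPath);   % over-age waste file
                false -> ok                            % still within retention
            end
        end,
        Files).
```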
@@ -162,13 +185,12 @@ handle_cast({compact, Checker, InitiateFun, FilterFun, Inker, _Timeout},
        Score when Score > 0.0 ->
            BestRun1 = sort_run(BestRun0),
            print_compaction_run(BestRun1, MaxRunLength),
            ManifestSlice = compact_files(BestRun1,
                                            CDBopts,
                                            FilterFun,
                                            FilterServer,
                                            MaxSQN,
                                            State#state.reload_strategy),
            FilesToDelete = lists:map(fun(C) ->
                                            {C#candidate.low_sqn,
                                                C#candidate.filename,
@@ -180,12 +202,8 @@ handle_cast({compact, Checker, InitiateFun, FilterFun, Inker, _Timeout},
                true ->
                    update_inker(Inker,
                                    ManifestSlice,
                                    FilesToDelete),
                    {noreply, State}
            end;
        Score ->
            leveled_log:log("IC003", [Score]),
@@ -202,8 +220,10 @@ handle_cast(stop, State) ->
handle_info(_Info, State) ->
    {noreply, State}.

terminate(normal, _State) ->
    ok;
terminate(Reason, _State) ->
    leveled_log:log("IC001", [Reason]).

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.
@@ -357,24 +377,19 @@ sort_run(RunOfFiles) ->
                        Cand1#candidate.low_sqn =< Cand2#candidate.low_sqn end,
    lists:sort(CompareFun, RunOfFiles).

update_inker(Inker, ManifestSlice, FilesToDelete) ->
    {ok, ManSQN} = leveled_inker:ink_updatemanifest(Inker,
                                                    ManifestSlice,
                                                    FilesToDelete),
    ok = leveled_inker:ink_compactioncomplete(Inker),
    leveled_log:log("IC007", []),
    lists:foreach(fun({_SQN, _FN, J2D}) ->
                        leveled_cdb:cdb_deletepending(J2D,
                                                        ManSQN,
                                                        Inker)
                        end,
                    FilesToDelete),
    ok.
compact_files(BestRun, CDBopts, FilterFun, FilterServer, MaxSQN, RStrategy) ->
    BatchesOfPositions = get_all_positions(BestRun, []),
@@ -385,42 +400,34 @@ compact_files(BestRun, CDBopts, FilterFun, FilterServer, MaxSQN, RStrategy) ->
                        FilterServer,
                        MaxSQN,
                        RStrategy,
                        []).

compact_files([], _CDBopts, null, _FilterFun, _FilterServer, _MaxSQN,
                            _RStrategy, ManSlice0) ->
    ManSlice0;
compact_files([], _CDBopts, ActiveJournal0, _FilterFun, _FilterServer, _MaxSQN,
                            _RStrategy, ManSlice0) ->
    ManSlice1 = ManSlice0 ++ generate_manifest_entry(ActiveJournal0),
    ManSlice1;
compact_files([Batch|T], CDBopts, ActiveJournal0,
                            FilterFun, FilterServer, MaxSQN,
                            RStrategy, ManSlice0) ->
    {SrcJournal, PositionList} = Batch,
    KVCs0 = leveled_cdb:cdb_directfetch(SrcJournal,
                                            PositionList,
                                            key_value_check),
    KVCs1 = filter_output(KVCs0,
                            FilterFun,
                            FilterServer,
                            MaxSQN,
                            RStrategy),
    {ActiveJournal1, ManSlice1} = write_values(KVCs1,
                                                CDBopts,
                                                ActiveJournal0,
                                                ManSlice0),
    compact_files(T, CDBopts, ActiveJournal1, FilterFun, FilterServer, MaxSQN,
                    RStrategy, ManSlice1).

get_all_positions([], PositionBatches) ->
    PositionBatches;
@@ -448,28 +455,26 @@ split_positions_into_batches(Positions, Journal, Batches) ->

filter_output(KVCs, FilterFun, FilterServer, MaxSQN, ReloadStrategy) ->
    lists:foldl(fun(KVC0, Acc) ->
                    R = leveled_codec:compact_inkerkvc(KVC0, ReloadStrategy),
                    case R of
                        skip ->
                            Acc;
                        {TStrat, KVC1} ->
                            {K, _V, CrcCheck} = KVC0,
                            {SQN, LedgerKey} = leveled_codec:from_journalkey(K),
                            KeyValid = FilterFun(FilterServer, LedgerKey, SQN),
                            case {KeyValid, CrcCheck, SQN > MaxSQN, TStrat} of
                                {false, true, false, retain} ->
                                    Acc ++ [KVC1];
                                {false, true, false, _} ->
                                    Acc;
                                _ ->
                                    Acc ++ [KVC0]
                            end
                    end
                    end,
                [],
                KVCs).
@@ -511,10 +516,26 @@ generate_manifest_entry(ActiveJournal) ->
    [{StartSQN, NewFN, PidR}].

clear_waste(State) ->
    WP = State#state.waste_path,
    WRP = State#state.waste_retention_period,
    {ok, ClearedJournals} = file:list_dir(WP),
    N = calendar:datetime_to_gregorian_seconds(calendar:local_time()),
    lists:foreach(fun(DelJ) ->
                        LMD = filelib:last_modified(WP ++ DelJ),
                        case N - calendar:datetime_to_gregorian_seconds(LMD) of
                            LMD_Delta when LMD_Delta >= WRP ->
                                ok = file:delete(WP ++ DelJ),
                                leveled_log:log("IC010", [WP ++ DelJ]);
                            LMD_Delta ->
                                leveled_log:log("IC011", [WP ++ DelJ,
                                                            LMD_Delta]),
                                ok
                        end
                    end,
                    ClearedJournals).

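The clerk's waste handling above is driven by two state fields: waste_path (where superseded journal files are moved) and waste_retention_period (seconds to keep them). Below is a minimal illustrative sketch of invoking the garbage collection directly; the path and the 24-hour retention value are assumptions for the example, not values from the commit (file_gc_test in the test section exercises the same function with a 1-second period).

%% Editorial sketch only - illustrative values, not code from the commit.
%% Any file in the waste path whose last-modified time is more than
%% waste_retention_period seconds old is deleted; newer files are logged
%% (IC011) and left in place for a later pass.
GCState = #state{waste_path = "test/waste/",
                    waste_retention_period = 86400},
ok = filelib:ensure_dir(GCState#state.waste_path),
clear_waste(GCState).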
%%%============================================================================
%%% Test

@@ -545,6 +566,21 @@ score_compare_test() ->
    ?assertMatch(Run1, choose_best_assessment(Run1, Run2, 4)),
    ?assertMatch(Run2, choose_best_assessment(Run1 ++ Run2, Run2, 4)).

file_gc_test() ->
    State = #state{waste_path="test/waste/",
                    waste_retention_period=1},
    ok = filelib:ensure_dir(State#state.waste_path),
    file:write_file(State#state.waste_path ++ "1.cdb", term_to_binary("Hello")),
    timer:sleep(1100),
    file:write_file(State#state.waste_path ++ "2.cdb", term_to_binary("Hello")),
    clear_waste(State),
    {ok, ClearedJournals} = file:list_dir(State#state.waste_path),
    ?assertMatch(["2.cdb"], ClearedJournals),
    timer:sleep(1100),
    clear_waste(State),
    {ok, ClearedJournals2} = file:list_dir(State#state.waste_path),
    ?assertMatch([], ClearedJournals2).

find_bestrun_test() ->
    %% Tests dependent on these defaults
    %% -define(MAX_COMPACTION_RUN, 4).
@@ -680,15 +716,12 @@ compact_single_file_recovr_test() ->
        LedgerFun1,
        CompactFP,
        CDB} = compact_single_file_setup(),
    [{LowSQN, FN, PidR}] = compact_files([Candidate],
                                            #cdb_options{file_path=CompactFP},
                                            LedgerFun1,
                                            LedgerSrv1,
                                            9,
                                            [{?STD_TAG, recovr}]),
    io:format("FN of ~s~n", [FN]),
    ?assertMatch(2, LowSQN),
    ?assertMatch(probably,
@@ -719,15 +752,12 @@ compact_single_file_retain_test() ->
        LedgerFun1,
        CompactFP,
        CDB} = compact_single_file_setup(),
    [{LowSQN, FN, PidR}] = compact_files([Candidate],
                                            #cdb_options{file_path=CompactFP},
                                            LedgerFun1,
                                            LedgerSrv1,
                                            9,
                                            [{?STD_TAG, retain}]),
    io:format("FN of ~s~n", [FN]),
    ?assertMatch(1, LowSQN),
    ?assertMatch(probably,
@@ -798,14 +828,13 @@ compact_singlefile_totwosmallfiles_test() ->
                                compaction_perc=50.0}],
    FakeFilterFun = fun(_FS, _LK, SQN) -> SQN rem 2 == 0 end,

    ManifestSlice = compact_files(BestRun1,
                                    CDBoptsSmall,
                                    FakeFilterFun,
                                    null,
                                    900,
                                    [{?STD_TAG, recovr}]),
    ?assertMatch(2, length(ManifestSlice)),
    lists:foreach(fun({_SQN, _FN, CDB}) ->
                        ok = leveled_cdb:cdb_deletepending(CDB),
                        ok = leveled_cdb:cdb_destroy(CDB)
@@ -813,6 +842,11 @@ compact_singlefile_totwosmallfiles_test() ->
                    ManifestSlice),
    ok = leveled_cdb:cdb_deletepending(CDBr),
    ok = leveled_cdb:cdb_destroy(CDBr).

coverage_cheat_test() ->
    {noreply, _State0} = handle_info(timeout, #state{}),
    {ok, _State1} = code_change(null, #state{}, null),
    {reply, not_supported, _State2} = handle_call(null, null, #state{}),
    terminate(error, #state{}).

-endif.
@@ -108,6 +108,7 @@
        ink_updatemanifest/3,
        ink_print_manifest/1,
        ink_close/1,
        ink_doom/1,
        build_dummy_journal/0,
        simple_manifest_reader/2,
        clean_testdir/1,
@@ -119,6 +120,7 @@
-define(MANIFEST_FP, "journal_manifest").
-define(FILES_FP, "journal_files").
-define(COMPACT_FP, "post_compact").
-define(WASTE_FP, "waste").
-define(JOURNAL_FILEX, "cdb").
-define(MANIFEST_FILEX, "man").
-define(PENDING_FILEX, "pnd").
@@ -162,14 +164,18 @@ ink_registersnapshot(Pid, Requestor) ->
    gen_server:call(Pid, {register_snapshot, Requestor}, infinity).

ink_releasesnapshot(Pid, Snapshot) ->
    gen_server:cast(Pid, {release_snapshot, Snapshot}).

ink_confirmdelete(Pid, ManSQN) ->
    io:format("Confirm delete request received~n"),
    gen_server:call(Pid, {confirm_delete, ManSQN}).

ink_close(Pid) ->
    gen_server:call(Pid, close, infinity).

ink_doom(Pid) ->
    gen_server:call(Pid, doom, 60000).

ink_loadpcl(Pid, MinSQN, FilterFun, Penciller) ->
    gen_server:call(Pid, {load_pcl, MinSQN, FilterFun, Penciller}, infinity).
@@ -266,12 +272,8 @@ handle_call({register_snapshot, Requestor}, _From , State) ->
    {reply, {State#state.manifest,
                State#state.active_journaldb},
        State#state{registered_snapshots=Rs}};
handle_call({confirm_delete, ManSQN}, _From, State) ->
    io:format("Confirm delete request to be processed~n"),
    Reply = lists:foldl(fun({_R, SnapSQN}, Bool) ->
                            case SnapSQN >= ManSQN of
                                true ->
@@ -281,6 +283,7 @@ handle_call({confirm_delete, ManSQN}, _From, State) ->
                            end end,
                        true,
                        State#state.registered_snapshots),
    io:format("Confirm delete request complete with reply ~w~n", [Reply]),
    {reply, Reply, State};
handle_call(get_manifest, _From, State) ->
    {reply, State#state.manifest, State};
@@ -325,10 +328,20 @@ handle_call(compaction_complete, _From, State) ->
handle_call(compaction_pending, _From, State) ->
    {reply, State#state.compaction_pending, State};
handle_call(close, _From, State) ->
    {stop, normal, ok, State};
handle_call(doom, _From, State) ->
    FPs = [filepath(State#state.root_path, journal_dir),
            filepath(State#state.root_path, manifest_dir),
            filepath(State#state.root_path, journal_compact_dir),
            filepath(State#state.root_path, journal_waste_dir)],
    leveled_log:log("I0018", []),
    {stop, normal, {ok, FPs}, State}.

handle_cast({release_snapshot, Snapshot}, State) ->
    Rs = lists:keydelete(Snapshot, 1, State#state.registered_snapshots),
    leveled_log:log("I0003", [Snapshot]),
    leveled_log:log("I0004", [length(Rs)]),
    {noreply, State#state{registered_snapshots=Rs}}.

handle_info(_Info, State) ->
    {noreply, State}.
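The new doom path stops the Inker and returns the set of directories the caller is expected to remove; the Inker itself deletes nothing. A hedged sketch of how a caller (in practice the Bookie when destroying a store) might use the reply follows; InkerPid is an illustrative variable name.

%% Editorial sketch only - not code from the commit.
{ok, DeathRow} = ink_doom(InkerPid),
%% DeathRow is the list of journal, manifest, post-compaction and waste
%% directories; the caller removes their contents before deleting the store.
lists:foreach(fun(FP) -> io:format("to be removed: ~s~n", [FP]) end, DeathRow).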
@@ -360,20 +373,26 @@ code_change(_OldVsn, State, _Extra) ->
start_from_file(InkOpts) ->
    RootPath = InkOpts#inker_options.root_path,
    CDBopts = InkOpts#inker_options.cdb_options,

    JournalFP = filepath(RootPath, journal_dir),
    filelib:ensure_dir(JournalFP),
    CompactFP = filepath(RootPath, journal_compact_dir),
    filelib:ensure_dir(CompactFP),
    WasteFP = filepath(RootPath, journal_waste_dir),
    filelib:ensure_dir(WasteFP),
    ManifestFP = filepath(RootPath, manifest_dir),
    ok = filelib:ensure_dir(ManifestFP),

    {ok, ManifestFilenames} = file:list_dir(ManifestFP),

    IClerkCDBOpts = CDBopts#cdb_options{file_path = CompactFP,
                                            waste_path = WasteFP},
    ReloadStrategy = InkOpts#inker_options.reload_strategy,
    MRL = InkOpts#inker_options.max_run_length,
    WRP = InkOpts#inker_options.waste_retention_period,
    IClerkOpts = #iclerk_options{inker = self(),
                                    cdb_options=IClerkCDBOpts,
                                    waste_retention_period = WRP,
                                    reload_strategy = ReloadStrategy,
                                    max_run_length = MRL},
    {ok, Clerk} = leveled_iclerk:clerk_new(IClerkOpts),
@@ -389,7 +408,7 @@ start_from_file(InkOpts) ->
                    journal_sqn = JournalSQN,
                    active_journaldb = ActiveJournal,
                    root_path = RootPath,
                    cdb_options = CDBopts#cdb_options{waste_path=WasteFP},
                    clerk = Clerk}}.
@@ -670,7 +689,9 @@ filepath(RootPath, journal_dir) ->
filepath(RootPath, manifest_dir) ->
    RootPath ++ "/" ++ ?MANIFEST_FP ++ "/";
filepath(RootPath, journal_compact_dir) ->
    filepath(RootPath, journal_dir) ++ "/" ++ ?COMPACT_FP ++ "/";
filepath(RootPath, journal_waste_dir) ->
    filepath(RootPath, journal_dir) ++ "/" ++ ?WASTE_FP ++ "/".

filepath(RootPath, NewSQN, new_journal) ->
    filename:join(filepath(RootPath, journal_dir),
@@ -747,8 +768,18 @@ build_dummy_journal(KeyConvertF) ->
    ok = leveled_cdb:cdb_put(J1, {1, stnd, K1}, term_to_binary({V1, []})),
    ok = leveled_cdb:cdb_put(J1, {2, stnd, K2}, term_to_binary({V2, []})),
    ok = leveled_cdb:cdb_roll(J1),
    lists:foldl(fun(X, Closed) ->
                    case Closed of
                        true -> true;
                        false ->
                            case leveled_cdb:cdb_checkhashtable(J1) of
                                true -> leveled_cdb:cdb_close(J1), true;
                                false -> timer:sleep(X), false
                            end
                    end
                    end,
                false,
                lists:seq(1, 5)),
    F2 = filename:join(JournalFP, "nursery_3.pnd"),
    {ok, J2} = leveled_cdb:cdb_open_writer(F2),
    {K1, V3} = {KeyConvertF("Key1"), "TestValue3"},
@@ -888,10 +919,13 @@ empty_manifest_test() ->
    {ok, Ink1} = ink_start(#inker_options{root_path=RootPath,
                                            cdb_options=CDBopts}),
    ?assertMatch(not_present, ink_fetch(Ink1, "Key1", 1)),

    CheckFun = fun(L, K, SQN) -> lists:member({SQN, K}, L) end,
    ?assertMatch(false, CheckFun([], "key", 1)),
    ok = ink_compactjournal(Ink1,
                            [],
                            fun(X) -> {X, 55} end,
                            CheckFun,
                            5000),
    timer:sleep(1000),
    ?assertMatch(1, length(ink_getmanifest(Ink1))),
@@ -911,6 +945,9 @@ empty_manifest_test() ->
    ?assertMatch("Value1", V),
    ink_close(Ink2),
    clean_testdir(RootPath).

coverage_cheat_test() ->
    {noreply, _State0} = handle_info(timeout, #state{}),
    {ok, _State1} = code_change(null, #state{}, null).

-endif.
@@ -32,6 +32,14 @@
        {info, "Reached end of load batch with SQN ~w"}},
    {"B0007",
        {info, "Skipping as exceeded MaxSQN ~w with SQN ~w"}},
    {"B0008",
        {info, "Bucket list finds no more results"}},
    {"B0009",
        {info, "Bucket list finds Bucket ~w"}},
    {"B0010",
        {info, "Bucket list finds non-binary Bucket ~w"}},
    {"B0011",
        {warn, "Call to destroy the store and so all files to be removed"}},

    {"P0001",
        {info, "Ledger snapshot ~w registered"}},
@@ -52,7 +60,7 @@
    {"P0009",
        {info, "Level 0 cache empty at close of Penciller"}},
    {"P0010",
        {info, "No level zero action on close of Penciller ~w"}},
    {"P0011",
        {info, "Shutdown complete for Penciller"}},
    {"P0012",
@@ -68,7 +76,8 @@
    {"P0017",
        {info, "No L0 file found"}},
    {"P0018",
        {info, "Response to push_mem of ~w with "
            ++ "L0 pending ~w and merge backlog ~w"}},
    {"P0019",
        {info, "Rolling level zero to filename ~s"}},
    {"P0020",
@@ -93,6 +102,8 @@
        {info, "Adding cleared file ~s to deletion list"}},
    {"P0029",
        {info, "L0 completion confirmed and will transition to not pending"}},
    {"P0030",
        {warn, "We're doomed - intention recorded to destroy all files"}},

    {"PC001",
        {info, "Penciller's clerk ~w started with owner ~w"}},
@@ -161,10 +172,11 @@
        {info, "Writing new version of manifest for manifestSQN=~w"}},
    {"I0017",
        {info, "At SQN=~w journal has filename ~s"}},
    {"I0018",
        {warn, "We're doomed - intention recorded to destroy all files"}},

    {"IC001",
        {info, "Closed for reason ~w so maybe leaving garbage"}},
    {"IC002",
        {info, "Clerk updating Inker as compaction complete of ~w files"}},
    {"IC003",
@@ -181,6 +193,10 @@
        {info, "Compaction source ~s has yielded ~w positions"}},
    {"IC009",
        {info, "Generate journal for compaction with filename ~s"}},
    {"IC010",
        {info, "Clearing journal with filename ~s"}},
    {"IC011",
        {info, "Not clearing filename ~s as modified delta is only ~w seconds"}},

    {"PM001",
        {info, "Indexed new cache entry with total L0 cache size now ~w"}},
@@ -459,4 +459,7 @@ select_merge_file_test() ->
    ?assertMatch(FileRef, {{o, "B1", "K1"}, {o, "B3", "K3"}, dummy_pid}),
    ?assertMatch(NewManifest, [{0, []}, {1, L1}]).

coverage_cheat_test() ->
    {ok, _State1} = code_change(null, #state{}, null).

-endif.
@@ -169,12 +169,14 @@
        pcl_fetchlevelzero/2,
        pcl_fetch/2,
        pcl_fetchkeys/5,
        pcl_fetchnextkey/5,
        pcl_checksequencenumber/3,
        pcl_workforclerk/1,
        pcl_promptmanifestchange/2,
        pcl_confirml0complete/4,
        pcl_confirmdelete/2,
        pcl_close/1,
        pcl_doom/1,
        pcl_registersnapshot/2,
        pcl_releasesnapshot/2,
        pcl_loadsnapshot/2,
@@ -218,6 +220,7 @@
                is_snapshot = false :: boolean(),
                snapshot_fully_loaded = false :: boolean(),
                source_penciller :: pid(),
                levelzero_astree :: gb_trees:tree(),

                ongoing_work = [] :: list(),
                work_backlog = false :: boolean()}).
@@ -241,14 +244,19 @@ pcl_fetchlevelzero(Pid, Slot) ->
    %%
    %% If the timeout gets hit outside of close scenario the Penciller will
    %% be stuck in L0 pending
    gen_server:call(Pid, {fetch_levelzero, Slot}, 60000).

pcl_fetch(Pid, Key) ->
    gen_server:call(Pid, {fetch, Key}, infinity).

pcl_fetchkeys(Pid, StartKey, EndKey, AccFun, InitAcc) ->
    gen_server:call(Pid,
                    {fetch_keys, StartKey, EndKey, AccFun, InitAcc, -1},
                    infinity).

pcl_fetchnextkey(Pid, StartKey, EndKey, AccFun, InitAcc) ->
    gen_server:call(Pid,
                    {fetch_keys, StartKey, EndKey, AccFun, InitAcc, 1},
                    infinity).

pcl_checksequencenumber(Pid, Key, SQN) ->
@@ -282,6 +290,8 @@ pcl_loadsnapshot(Pid, Increment) ->
pcl_close(Pid) ->
    gen_server:call(Pid, close, 60000).

pcl_doom(Pid) ->
    gen_server:call(Pid, doom, 60000).

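pcl_fetchkeys/5 now issues the fetch_keys call with a MaxKeys of -1 (no limit), while the new pcl_fetchnextkey/5 passes 1 so the key fold stops after a single accumulated result. A hedged usage sketch follows; the pid and key variables are illustrative and the accumulator simply collects keys.

%% Editorial sketch only - not code from the commit.
AccFun = fun(K, _V, Acc) -> Acc ++ [K] end,
%% Full range query - folds over every key between StartKey and EndKey:
AllKeys = pcl_fetchkeys(PclPid, StartKey, EndKey, AccFun, []),
%% Returns a list of at most one key - the first in the range:
NextKeyL = pcl_fetchnextkey(PclPid, StartKey, EndKey, AccFun, []).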
%%%============================================================================
%%% gen_server callbacks
@@ -321,15 +331,14 @@ handle_call({push_mem, PushedTree}, From, State=#state{is_snapshot=Snap})
    %
    % Check the approximate size of the cache. If it is over the maximum size,
    % trigger a backgroun L0 file write and update state of levelzero_pending.
    case State#state.levelzero_pending or State#state.work_backlog of
        true ->
            leveled_log:log("P0018", [returned,
                                        State#state.levelzero_pending,
                                        State#state.work_backlog]),
            {reply, returned, State};
        false ->
            leveled_log:log("P0018", [ok, false, false]),
            gen_server:reply(From, ok),
            {noreply, update_levelzero(State#state.levelzero_index,
                                        State#state.levelzero_size,
@@ -353,20 +362,29 @@ handle_call({check_sqn, Key, SQN}, _From, State) ->
                                            State#state.levelzero_cache),
                        SQN),
        State};
handle_call({fetch_keys, StartKey, EndKey, AccFun, InitAcc, MaxKeys},
                _From,
                State=#state{snapshot_fully_loaded=Ready})
                                                        when Ready == true ->
    L0AsTree =
        case State#state.levelzero_astree of
            undefined ->
                leveled_pmem:merge_trees(StartKey,
                                            EndKey,
                                            State#state.levelzero_cache,
                                            gb_trees:empty());
            Tree ->
                Tree
        end,
    L0iter = gb_trees:iterator(L0AsTree),
    SFTiter = initiate_rangequery_frommanifest(StartKey,
                                                EndKey,
                                                State#state.manifest),
    Acc = keyfolder({L0iter, SFTiter},
                    {StartKey, EndKey},
                    {AccFun, InitAcc},
                    MaxKeys),
    {reply, Acc, State#state{levelzero_astree = L0AsTree}};
handle_call(work_for_clerk, From, State) ->
    {UpdState, Work} = return_work(State, From),
    {reply, Work, UpdState};
@@ -390,8 +408,12 @@ handle_call({load_snapshot, BookieIncrTree}, _From, State) ->
handle_call({fetch_levelzero, Slot}, _From, State) ->
    {reply, lists:nth(Slot, State#state.levelzero_cache), State};
handle_call(close, _From, State) ->
    {stop, normal, ok, State};
handle_call(doom, _From, State) ->
    leveled_log:log("P0030", []),
    ManifestFP = State#state.root_path ++ "/" ++ ?MANIFEST_FP ++ "/",
    FilesFP = State#state.root_path ++ "/" ++ ?FILES_FP ++ "/",
    {stop, normal, {ok, [ManifestFP, FilesFP]}, State}.

handle_cast({manifest_change, WI}, State) ->
    {ok, UpdState} = commit_manifest_change(WI, State),
@@ -478,15 +500,13 @@ terminate(Reason, State) ->
    case {UpdState#state.levelzero_pending,
            get_item(0, UpdState#state.manifest, []),
            UpdState#state.levelzero_size} of
        {false, [], 0} ->
            leveled_log:log("P0009", []);
        {false, [], _N} ->
            L0Pid = roll_memory(UpdState, true),
            ok = leveled_sft:sft_close(L0Pid);
        StatusTuple ->
            leveled_log:log("P0010", [StatusTuple])
    end,

    % Tidy shutdown of individual files
@@ -506,6 +526,7 @@ code_change(_OldVsn, State, _Extra) ->
%%% Internal functions
%%%============================================================================

start_from_file(PCLopts) ->
    RootPath = PCLopts#penciller_options.root_path,
    MaxTableSize = case PCLopts#penciller_options.max_inmemory_tablesize of
@@ -959,37 +980,56 @@ find_nextkey(QueryArray, LCnt, {BestKeyLevel, BestKV}, QueryFunT) ->
    end.


keyfolder(IMMiter, SFTiter, StartKey, EndKey, {AccFun, Acc}) ->
    keyfolder({IMMiter, SFTiter}, {StartKey, EndKey}, {AccFun, Acc}, -1).

keyfolder(_Iterators, _KeyRange, {_AccFun, Acc}, MaxKeys) when MaxKeys == 0 ->
    Acc;
keyfolder({null, SFTiter}, KeyRange, {AccFun, Acc}, MaxKeys) ->
    {StartKey, EndKey} = KeyRange,
    case find_nextkey(SFTiter, StartKey, EndKey) of
        no_more_keys ->
            Acc;
        {NxSFTiter, {SFTKey, SFTVal}} ->
            Acc1 = AccFun(SFTKey, SFTVal, Acc),
            keyfolder({null, NxSFTiter}, KeyRange, {AccFun, Acc1}, MaxKeys - 1)
    end;
keyfolder({IMMiterator, SFTiterator}, KeyRange, {AccFun, Acc}, MaxKeys) ->
    {StartKey, EndKey} = KeyRange,
    case gb_trees:next(IMMiterator) of
        none ->
            % There are no more keys in the in-memory iterator, so now
            % iterate only over the remaining keys in the SFT iterator
            keyfolder({null, SFTiterator}, KeyRange, {AccFun, Acc}, MaxKeys);
        {IMMKey, _IMMVal, NxIMMiterator} when IMMKey < StartKey ->
            % Normally everything is pre-filterd, but the IMM iterator can
            % be re-used and do may be behind the StartKey if the StartKey has
            % advanced from the previous use
            keyfolder({NxIMMiterator, SFTiterator},
                        KeyRange,
                        {AccFun, Acc},
                        MaxKeys);
        {IMMKey, IMMVal, NxIMMiterator} ->
            case leveled_codec:endkey_passed(EndKey, IMMKey) of
                true ->
                    % There are no more keys in-range in the in-memory
                    % iterator, so take action as if this iterator is empty
                    % (see above)
                    keyfolder({null, SFTiterator},
                                KeyRange,
                                {AccFun, Acc},
                                MaxKeys);
                false ->
                    case find_nextkey(SFTiterator, StartKey, EndKey) of
                        no_more_keys ->
                            % No more keys in range in the persisted store, so use the
                            % in-memory KV as the next
                            Acc1 = AccFun(IMMKey, IMMVal, Acc),
                            keyfolder({NxIMMiterator, SFTiterator},
                                        KeyRange,
                                        {AccFun, Acc1},
                                        MaxKeys - 1);
                        {NxSFTiterator, {SFTKey, SFTVal}} ->
                            % There is a next key, so need to know which is the
                            % next key between the two (and handle two keys
                            % with different sequence numbers).
@@ -999,19 +1039,22 @@ keyfolder(IMMiterator, SFTiterator, StartKey, EndKey, {AccFun, Acc}) ->
                                            SFTVal}) of
                                left_hand_first ->
                                    Acc1 = AccFun(IMMKey, IMMVal, Acc),
                                    keyfolder({NxIMMiterator, SFTiterator},
                                                KeyRange,
                                                {AccFun, Acc1},
                                                MaxKeys - 1);
                                right_hand_first ->
                                    Acc1 = AccFun(SFTKey, SFTVal, Acc),
                                    keyfolder({IMMiterator, NxSFTiterator},
                                                KeyRange,
                                                {AccFun, Acc1},
                                                MaxKeys - 1);
                                left_hand_dominant ->
                                    Acc1 = AccFun(IMMKey, IMMVal, Acc),
                                    keyfolder({NxIMMiterator, NxSFTiterator},
                                                KeyRange,
                                                {AccFun, Acc1},
                                                MaxKeys - 1)
                            end
                    end
            end
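With the merged Level-0 tree now cached in levelzero_astree, the in-memory iterator may start behind a StartKey that has advanced since the tree was built; the new IMMKey < StartKey clause skips such keys without decrementing MaxKeys. The skip relies only on ordinary gb_trees iteration, as this small sketch shows (the keys are illustrative, not from the store).

%% Editorial sketch only - not code from the commit. gb_trees:next/1 walks
%% keys in order, so entries below a new StartKey can be consumed and
%% discarded before accumulation begins.
T = gb_trees:from_orddict([{1, a}, {2, b}, {3, c}]),
I0 = gb_trees:iterator(T),
{1, a, I1} = gb_trees:next(I0),    % below the new StartKey, so skipped
{2, b, _I2} = gb_trees:next(I1).   % first key at or beyond the new StartKey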
@@ -1576,12 +1619,61 @@ create_file_test() ->
    {ok, Bin} = file:read_file("../test/new_file.sft.discarded"),
    ?assertMatch("hello", binary_to_term(Bin)).

commit_manifest_test() ->
    Sent_WI = #penciller_work{next_sqn=1,
                                src_level=0,
                                start_time=os:timestamp()},
    Resp_WI = #penciller_work{next_sqn=1,
                                src_level=0},
    State = #state{ongoing_work = [Sent_WI],
                    root_path = "test",
                    manifest_sqn = 0},
    ManifestFP = "test" ++ "/" ++ ?MANIFEST_FP ++ "/",
    ok = filelib:ensure_dir(ManifestFP),
    ok = file:write_file(ManifestFP ++ "nonzero_1.pnd",
                            term_to_binary("dummy data")),

    L1_0 = [{1, [#manifest_entry{filename="1.sft"}]}],
    Resp_WI0 = Resp_WI#penciller_work{new_manifest=L1_0,
                                        unreferenced_files=[]},
    {ok, State0} = commit_manifest_change(Resp_WI0, State),
    ?assertMatch(1, State0#state.manifest_sqn),
    ?assertMatch([], get_item(0, State0#state.manifest, [])),

    L0Entry = [#manifest_entry{filename="0.sft"}],
    ManifestPlus = [{0, L0Entry}|State0#state.manifest],

    NxtSent_WI = #penciller_work{next_sqn=2,
                                    src_level=1,
                                    start_time=os:timestamp()},
    NxtResp_WI = #penciller_work{next_sqn=2,
                                    src_level=1},
    State1 = State0#state{ongoing_work=[NxtSent_WI],
                            manifest = ManifestPlus},

    ok = file:write_file(ManifestFP ++ "nonzero_2.pnd",
                            term_to_binary("dummy data")),

    L2_0 = [#manifest_entry{filename="2.sft"}],
    NxtResp_WI0 = NxtResp_WI#penciller_work{new_manifest=[{2, L2_0}],
                                                unreferenced_files=[]},
    {ok, State2} = commit_manifest_change(NxtResp_WI0, State1),

    ?assertMatch(1, State1#state.manifest_sqn),
    ?assertMatch(2, State2#state.manifest_sqn),
    ?assertMatch(L0Entry, get_item(0, State2#state.manifest, [])),
    ?assertMatch(L2_0, get_item(2, State2#state.manifest, [])),

    clean_testdir(State#state.root_path).


badmanifest_test() ->
    RootPath = "../test/ledger",
    clean_testdir(RootPath),
    {ok, PCL} = pcl_start(#penciller_options{root_path=RootPath,
                                                max_inmemory_tablesize=1000}),
    Key1 = {{o,"Bucket0001", "Key0001", null},
                {1001, {active, infinity}, null}},
    KL1 = leveled_sft:generate_randomkeys({1000, 1}),

    ok = maybe_pause_push(PCL, KL1 ++ [Key1]),
@@ -1589,17 +1681,18 @@ coverage_test() ->
    %% call to the penciller and the second fetch of the cache entry
    ?assertMatch(Key1, pcl_fetch(PCL, {o,"Bucket0001", "Key0001", null})),

    timer:sleep(100), % Avoids confusion if L0 file not written before close
    ok = pcl_close(PCL),

    ManifestFP = filepath(RootPath, manifest),
    ok = file:write_file(filename:join(ManifestFP, "yeszero_123.man"),
                            term_to_binary("hello")),
    {ok, PCLr} = pcl_start(#penciller_options{root_path=RootPath,
                                                max_inmemory_tablesize=1000}),
    ?assertMatch(Key1, pcl_fetch(PCLr, {o,"Bucket0001", "Key0001", null})),
    ok = pcl_close(PCLr),
    clean_testdir(RootPath).

checkready(Pid) ->
    try
        leveled_sft:sft_checkready(Pid)
@@ -1608,5 +1701,8 @@ checkready(Pid) ->
        timeout
    end.

coverage_cheat_test() ->
    {noreply, _State0} = handle_info(timeout, #state{}),
    {ok, _State1} = code_change(null, #state{}, null).

-endif.
@@ -142,16 +142,22 @@

-module(leveled_sft).

-behaviour(gen_fsm).
-include("include/leveled.hrl").

-export([init/1,
        handle_sync_event/4,
        handle_event/3,
        handle_info/3,
        terminate/3,
        code_change/4,
        starting/2,
        starting/3,
        reader/3,
        delete_pending/3,
        delete_pending/2]).

-export([sft_new/4,
        sft_newfroml0cache/4,
        sft_open/1,
        sft_get/2,
@@ -161,8 +167,9 @@
        sft_checkready/1,
        sft_setfordelete/2,
        sft_deleteconfirmed/1,
        sft_getmaxsequencenumber/1]).

-export([generate_randomkeys/1]).

-include_lib("eunit/include/eunit.hrl").

@@ -202,7 +209,6 @@
                handle :: file:fd(),
                background_complete = false :: boolean(),
                oversized_file = false :: boolean(),
                penciller :: pid()}).

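leveled_sft now implements gen_fsm rather than gen_server: synchronous requests sent with gen_fsm:sync_send_event are dispatched to StateName/3 callbacks (starting, reader and delete_pending below) and asynchronous events to StateName/2. The following is a generic, minimal sketch of that callback shape, not code from this module; the module name tiny_fsm and the ping event are invented for illustration.

%% Editorial sketch only - generic gen_fsm shape, not code from the commit.
-module(tiny_fsm).
-behaviour(gen_fsm).
-export([init/1, idle/3, handle_sync_event/4, handle_event/3,
            handle_info/3, terminate/3, code_change/4]).

init([]) -> {ok, idle, #{}}.

%% gen_fsm:sync_send_event(Pid, ping) is routed here while in state 'idle'.
idle(ping, _From, StateData) ->
    {reply, pong, idle, StateData}.

handle_sync_event(_Msg, _From, StateName, StateData) ->
    {reply, undefined, StateName, StateData}.
handle_event(_Msg, StateName, StateData) ->
    {next_state, StateName, StateData}.
handle_info(_Msg, StateName, StateData) ->
    {next_state, StateName, StateData}.
terminate(_Reason, _StateName, _StateData) -> ok.
code_change(_OldVsn, StateName, StateData, _Extra) -> {ok, StateName, StateData}.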
@@ -221,65 +227,68 @@ sft_new(Filename, KL1, KL2, LevelInfo) ->
                    LevelInfo
            end
        end,
    {ok, Pid} = gen_fsm:start(?MODULE, [], []),
    Reply = gen_fsm:sync_send_event(Pid,
                                    {sft_new, Filename, KL1, KL2, LevelR},
                                    infinity),
    {ok, Pid, Reply}.

sft_newfroml0cache(Filename, Slots, FetchFun, Options) ->
    {ok, Pid} = gen_fsm:start(?MODULE, [], []),
    case Options#sft_options.wait of
        true ->
            KL1 = leveled_pmem:to_list(Slots, FetchFun),
            Reply = gen_fsm:sync_send_event(Pid,
                                            {sft_new,
                                                Filename,
                                                KL1,
                                                [],
                                                #level{level=0}},
                                            infinity),
            {ok, Pid, Reply};
        false ->
            gen_fsm:send_event(Pid,
                                {sft_newfroml0cache,
                                    Filename,
                                    Slots,
                                    FetchFun,
                                    Options#sft_options.penciller}),
            {ok, Pid, noreply}
    end.

sft_open(Filename) ->
    {ok, Pid} = gen_fsm:start(?MODULE, [], []),
    case gen_fsm:sync_send_event(Pid, {sft_open, Filename}, infinity) of
        {ok, {SK, EK}} ->
            {ok, Pid, {SK, EK}}
    end.

sft_setfordelete(Pid, Penciller) ->
    gen_fsm:sync_send_event(Pid, {set_for_delete, Penciller}, infinity).

sft_get(Pid, Key) ->
    gen_fsm:sync_send_event(Pid, {get_kv, Key}, infinity).

sft_getkvrange(Pid, StartKey, EndKey, ScanWidth) ->
    gen_fsm:sync_send_event(Pid,
                            {get_kvrange, StartKey, EndKey, ScanWidth},
                            infinity).

sft_clear(Pid) ->
    gen_fsm:sync_send_event(Pid, {set_for_delete, false}, infinity),
    gen_fsm:sync_send_event(Pid, close, 1000).

sft_close(Pid) ->
    gen_fsm:sync_send_event(Pid, close, 1000).

sft_deleteconfirmed(Pid) ->
    gen_fsm:send_event(Pid, close).

sft_checkready(Pid) ->
    gen_fsm:sync_send_event(Pid, background_complete, 20).

sft_getmaxsequencenumber(Pid) ->
    gen_fsm:sync_send_event(Pid, get_maxsqn, infinity).

@@ -288,52 +297,75 @@ sft_getmaxsequencenumber(Pid) ->
%%%============================================================================

init([]) ->
    {ok, starting, #state{}}.

starting({sft_new, Filename, KL1, [], _LevelR=#level{level=L}}, _From, _State)
                                                                when L == 0 ->
    {ok, State} = create_levelzero(KL1, Filename),
    {reply,
        {{[], []}, State#state.smallest_key, State#state.highest_key},
        reader,
        State};
starting({sft_new, Filename, KL1, KL2, LevelR}, _From, _State) ->
    case create_file(Filename) of
        {Handle, FileMD} ->
            {ReadHandle, UpdFileMD, KeyRemainders} = complete_file(Handle,
                                                                    FileMD,
                                                                    KL1, KL2,
                                                                    LevelR),
            {reply,
                {KeyRemainders,
                    UpdFileMD#state.smallest_key,
                    UpdFileMD#state.highest_key},
                reader,
                UpdFileMD#state{handle=ReadHandle, filename=Filename}}
    end;
starting({sft_open, Filename}, _From, _State) ->
    {_Handle, FileMD} = open_file(#state{filename=Filename}),
    leveled_log:log("SFT01", [Filename]),
    {reply,
        {ok, {FileMD#state.smallest_key, FileMD#state.highest_key}},
        reader,
        FileMD}.

starting({sft_newfroml0cache, Filename, Slots, FetchFun, PCL}, _State) ->
    SW = os:timestamp(),
    Inp1 = leveled_pmem:to_list(Slots, FetchFun),
    {ok, State} = create_levelzero(Inp1, Filename),
    leveled_log:log_timer("SFT03", [Filename], SW),
    case PCL of
        undefined ->
            {next_state, reader, State};
        _ ->
            leveled_penciller:pcl_confirml0complete(PCL,
                                                    State#state.filename,
                                                    State#state.smallest_key,
                                                    State#state.highest_key),
            {next_state, reader, State}
    end.


reader({get_kv, Key}, _From, State) ->
    Reply = fetch_keyvalue(State#state.handle, State, Key),
    {reply, Reply, reader, State};
reader({get_kvrange, StartKey, EndKey, ScanWidth}, _From, State) ->
    Reply = pointer_append_queryresults(fetch_range_kv(State#state.handle,
                                                        State,
                                                        StartKey,
                                                        EndKey,
                                                        ScanWidth),
                                        self()),
    {reply, Reply, reader, State};
reader(get_maxsqn, _From, State) ->
    {reply, State#state.highest_sqn, reader, State};
reader({set_for_delete, Penciller}, _From, State) ->
    leveled_log:log("SFT02", [State#state.filename]),
    {reply,
        ok,
        delete_pending,
        State#state{penciller=Penciller},
        ?DELETE_TIMEOUT};
reader(background_complete, _From, State) ->
    if
        State#state.background_complete == true ->
            {reply,
@@ -341,67 +373,57 @@ handle_call(background_complete, _From, State) ->
                    State#state.filename,
                    State#state.smallest_key,
                    State#state.highest_key},
                reader,
                State}
    end;
reader(close, _From, State) ->
    ok = file:close(State#state.handle),
    {stop, normal, ok, State}.

delete_pending({get_kv, Key}, _From, State) ->
    Reply = fetch_keyvalue(State#state.handle, State, Key),
    {reply, Reply, delete_pending, State, ?DELETE_TIMEOUT};
delete_pending({get_kvrange, StartKey, EndKey, ScanWidth}, _From, State) ->
    Reply = pointer_append_queryresults(fetch_range_kv(State#state.handle,
                                                        State,
                                                        StartKey,
                                                        EndKey,
                                                        ScanWidth),
                                        self()),
    {reply, Reply, delete_pending, State, ?DELETE_TIMEOUT};
delete_pending(close, _From, State) ->
    leveled_log:log("SFT06", [State#state.filename]),
    ok = file:close(State#state.handle),
    ok = file:delete(State#state.filename),
    {stop, normal, ok, State}.

delete_pending(timeout, State) ->
    leveled_log:log("SFT05", [timeout, State#state.filename]),
    ok = leveled_penciller:pcl_confirmdelete(State#state.penciller,
                                                State#state.filename),
    {next_state, delete_pending, State, ?DELETE_TIMEOUT};
delete_pending(close, State) ->
    leveled_log:log("SFT06", [State#state.filename]),
    ok = file:close(State#state.handle),
    ok = file:delete(State#state.filename),
    {stop, normal, State}.

handle_sync_event(_Msg, _From, StateName, State) ->
    {reply, undefined, StateName, State}.

handle_event(_Msg, StateName, State) ->
    {next_state, StateName, State}.

handle_info(_Msg, StateName, State) ->
    {next_state, StateName, State}.
|
||||||
|
terminate(Reason, _StateName, State) ->
|
||||||
|
leveled_log:log("SFT05", [Reason, State#state.filename]).
|
||||||
|
|
||||||
|
code_change(_OldVsn, StateName, State, _Extra) ->
|
||||||
|
{ok, StateName, State}.
|
||||||
|
|
||||||
|
|
||||||
statecheck_onreply(Reply, State) ->
|
|
||||||
case State#state.ready_for_delete of
|
|
||||||
true ->
|
|
||||||
{reply, Reply, State, ?DELETE_TIMEOUT};
|
|
||||||
false ->
|
|
||||||
{reply, Reply, State}
|
|
||||||
end.
|
|
||||||
|
|
||||||
%%%============================================================================
|
%%%============================================================================
|
||||||
%%% Internal functions
|
%%% Internal functions
|
||||||
|
@ -700,13 +722,8 @@ fetch_block(Handle, LengthList, BlockNmb, StartOfSlot) ->
|
||||||
binary_to_term(BlockToCheckBin).
|
binary_to_term(BlockToCheckBin).
|
||||||
|
|
||||||
%% Need to deal with either Key or {next, Key}
|
%% Need to deal with either Key or {next, Key}
|
||||||
get_nearestkey(KVList, all) ->
|
get_nearestkey([H|_Tail], all) ->
|
||||||
case KVList of
|
H;
|
||||||
[] ->
|
|
||||||
not_found;
|
|
||||||
[H|_Tail] ->
|
|
||||||
H
|
|
||||||
end;
|
|
||||||
get_nearestkey(KVList, Key) ->
|
get_nearestkey(KVList, Key) ->
|
||||||
case Key of
|
case Key of
|
||||||
{next, K} ->
|
{next, K} ->
|
||||||
|
@ -797,8 +814,14 @@ write_keys(Handle,
|
||||||
[{LowKey_Slot, SegFilter, LengthList}]),
|
[{LowKey_Slot, SegFilter, LengthList}]),
|
||||||
UpdSlots = <<SerialisedSlots/binary, SerialisedSlot/binary>>,
|
UpdSlots = <<SerialisedSlots/binary, SerialisedSlot/binary>>,
|
||||||
SNExtremes = {min(LSN_Slot, LSN), max(HSN_Slot, HSN)},
|
SNExtremes = {min(LSN_Slot, LSN), max(HSN_Slot, HSN)},
|
||||||
FinalKey = case LastKey_Slot of null -> LastKey; _ -> LastKey_Slot end,
|
FinalKey = case LastKey_Slot of
|
||||||
FirstKey = case LowKey of null -> LowKey_Slot; _ -> LowKey end,
|
null -> LastKey;
|
||||||
|
_ -> LastKey_Slot
|
||||||
|
end,
|
||||||
|
FirstKey = case LowKey of
|
||||||
|
null -> LowKey_Slot;
|
||||||
|
_ -> LowKey
|
||||||
|
end,
|
||||||
case Status of
|
case Status of
|
||||||
partial ->
|
partial ->
|
||||||
UpdHandle = WriteFun(slots , {Handle, UpdSlots}),
|
UpdHandle = WriteFun(slots , {Handle, UpdSlots}),
|
||||||
|
@ -1003,16 +1026,16 @@ create_slot(KL1, KL2, LevelR, BlockCount, SegLists, SerialisedSlot, LengthList,
|
||||||
{null, LSN, HSN, LastKey, Status};
|
{null, LSN, HSN, LastKey, Status};
|
||||||
{null, _} ->
|
{null, _} ->
|
||||||
[NewLowKeyV|_] = BlockKeyList,
|
[NewLowKeyV|_] = BlockKeyList,
|
||||||
|
NewLastKey = lists:last([{keyonly, LastKey}|BlockKeyList]),
|
||||||
{leveled_codec:strip_to_keyonly(NewLowKeyV),
|
{leveled_codec:strip_to_keyonly(NewLowKeyV),
|
||||||
min(LSN, LSNb), max(HSN, HSNb),
|
min(LSN, LSNb), max(HSN, HSNb),
|
||||||
leveled_codec:strip_to_keyonly(last(BlockKeyList,
|
leveled_codec:strip_to_keyonly(NewLastKey),
|
||||||
{last, LastKey})),
|
|
||||||
Status};
|
Status};
|
||||||
{_, _} ->
|
{_, _} ->
|
||||||
|
NewLastKey = lists:last([{keyonly, LastKey}|BlockKeyList]),
|
||||||
{LowKey,
|
{LowKey,
|
||||||
min(LSN, LSNb), max(HSN, HSNb),
|
min(LSN, LSNb), max(HSN, HSNb),
|
||||||
leveled_codec:strip_to_keyonly(last(BlockKeyList,
|
leveled_codec:strip_to_keyonly(NewLastKey),
|
||||||
{last, LastKey})),
|
|
||||||
Status}
|
Status}
|
||||||
end,
|
end,
|
||||||
SerialisedBlock = serialise_block(BlockKeyList),
|
SerialisedBlock = serialise_block(BlockKeyList),
|
||||||
|
@ -1022,13 +1045,6 @@ create_slot(KL1, KL2, LevelR, BlockCount, SegLists, SerialisedSlot, LengthList,
|
||||||
SerialisedSlot2, LengthList ++ [BlockLength],
|
SerialisedSlot2, LengthList ++ [BlockLength],
|
||||||
TrackingMetadata).
|
TrackingMetadata).
|
||||||
|
|
||||||
|
|
||||||
last([], {last, LastKey}) -> {keyonly, LastKey};
|
|
||||||
last([E|Es], PrevLast) -> last(E, Es, PrevLast).
|
|
||||||
|
|
||||||
last(_, [E|Es], PrevLast) -> last(E, Es, PrevLast);
|
|
||||||
last(E, [], _) -> E.
|
|
||||||
|
|
||||||
serialise_block(BlockKeyList) ->
|
serialise_block(BlockKeyList) ->
|
||||||
term_to_binary(BlockKeyList, [{compressed, ?COMPRESSION_LEVEL}]).
|
term_to_binary(BlockKeyList, [{compressed, ?COMPRESSION_LEVEL}]).
|
||||||
|
|
||||||
|
@ -1757,20 +1773,12 @@ big_create_file_test() ->
|
||||||
?assertMatch(Result1, {K1, {Sq1, St1, V1}}),
|
?assertMatch(Result1, {K1, {Sq1, St1, V1}}),
|
||||||
?assertMatch(Result2, {K2, {Sq2, St2, V2}}),
|
?assertMatch(Result2, {K2, {Sq2, St2, V2}}),
|
||||||
SubList = lists:sublist(KL2, 1000),
|
SubList = lists:sublist(KL2, 1000),
|
||||||
FailedFinds = lists:foldl(fun(K, Acc) ->
|
lists:foreach(fun(K) ->
|
||||||
{Kn, {_, _, _}} = K,
|
{Kn, {_, _, _}} = K,
|
||||||
Rn = fetch_keyvalue(Handle, FileMD, Kn),
|
Rn = fetch_keyvalue(Handle, FileMD, Kn),
|
||||||
case Rn of
|
?assertMatch({Kn, {_, _, _}}, Rn)
|
||||||
{Kn, {_, _, _}} ->
|
end,
|
||||||
Acc;
|
SubList),
|
||||||
_ ->
|
|
||||||
Acc + 1
|
|
||||||
end
|
|
||||||
end,
|
|
||||||
0,
|
|
||||||
SubList),
|
|
||||||
io:format("FailedFinds of ~w~n", [FailedFinds]),
|
|
||||||
?assertMatch(FailedFinds, 0),
|
|
||||||
Result3 = fetch_keyvalue(Handle,
|
Result3 = fetch_keyvalue(Handle,
|
||||||
FileMD,
|
FileMD,
|
||||||
{o, "Bucket1024", "Key1024Alt", null}),
|
{o, "Bucket1024", "Key1024Alt", null}),
|
||||||
|
@ -1875,6 +1883,41 @@ key_dominates_test() ->
|
||||||
key_dominates([KV7|KL2], [KV2], {true, 1})).
|
key_dominates([KV7|KL2], [KV2], {true, 1})).
|
||||||
|
|
||||||
|
|
||||||
|
corrupted_sft_test() ->
|
||||||
|
Filename = "../test/bigcorrupttest1.sft",
|
||||||
|
{KL1, KL2} = {lists:ukeysort(1, generate_randomkeys(2000)), []},
|
||||||
|
{InitHandle, InitFileMD} = create_file(Filename),
|
||||||
|
{Handle, _FileMD, _Rems} = complete_file(InitHandle,
|
||||||
|
InitFileMD,
|
||||||
|
KL1, KL2,
|
||||||
|
#level{level=1}),
|
||||||
|
{ok, Lengths} = file:pread(Handle, 12, 12),
|
||||||
|
<<BlocksLength:32/integer,
|
||||||
|
IndexLength:32/integer,
|
||||||
|
FilterLength:32/integer>> = Lengths,
|
||||||
|
ok = file:close(Handle),
|
||||||
|
|
||||||
|
{ok, Corrupter} = file:open(Filename , [binary, raw, read, write]),
|
||||||
|
lists:foreach(fun(X) ->
|
||||||
|
case X * 5 of
|
||||||
|
Y when Y < FilterLength ->
|
||||||
|
Position = ?HEADER_LEN + X * 5
|
||||||
|
+ BlocksLength + IndexLength,
|
||||||
|
file:pwrite(Corrupter,
|
||||||
|
Position,
|
||||||
|
<<0:8/integer>>)
|
||||||
|
end
|
||||||
|
end,
|
||||||
|
lists:seq(1, 100)),
|
||||||
|
ok = file:close(Corrupter),
|
||||||
|
|
||||||
|
{ok, SFTr, _KeyExtremes} = sft_open(Filename),
|
||||||
|
lists:foreach(fun({K, V}) ->
|
||||||
|
?assertMatch({K, V}, sft_get(SFTr, K))
|
||||||
|
end,
|
||||||
|
KL1),
|
||||||
|
ok = sft_clear(SFTr).
|
||||||
|
|
||||||
big_iterator_test() ->
|
big_iterator_test() ->
|
||||||
Filename = "../test/bigtest1.sft",
|
Filename = "../test/bigtest1.sft",
|
||||||
{KL1, KL2} = {lists:sort(generate_randomkeys(10000)), []},
|
{KL1, KL2} = {lists:sort(generate_randomkeys(10000)), []},
|
||||||
|
@ -1882,30 +1925,70 @@ big_iterator_test() ->
|
||||||
{Handle, FileMD, {KL1Rem, KL2Rem}} = complete_file(InitHandle, InitFileMD,
|
{Handle, FileMD, {KL1Rem, KL2Rem}} = complete_file(InitHandle, InitFileMD,
|
||||||
KL1, KL2,
|
KL1, KL2,
|
||||||
#level{level=1}),
|
#level{level=1}),
|
||||||
io:format("Remainder lengths are ~w and ~w ~n", [length(KL1Rem), length(KL2Rem)]),
|
io:format("Remainder lengths are ~w and ~w ~n", [length(KL1Rem),
|
||||||
{complete, Result1} = fetch_range_keysonly(Handle,
|
length(KL2Rem)]),
|
||||||
FileMD,
|
{complete,
|
||||||
{o, "Bucket0000", "Key0000", null},
|
Result1} = fetch_range_keysonly(Handle,
|
||||||
{o, "Bucket9999", "Key9999", null},
|
FileMD,
|
||||||
256),
|
{o, "Bucket0000", "Key0000", null},
|
||||||
|
{o, "Bucket9999", "Key9999", null},
|
||||||
|
256),
|
||||||
NumFoundKeys1 = length(Result1),
|
NumFoundKeys1 = length(Result1),
|
||||||
NumAddedKeys = 10000 - length(KL1Rem),
|
NumAddedKeys = 10000 - length(KL1Rem),
|
||||||
?assertMatch(NumFoundKeys1, NumAddedKeys),
|
?assertMatch(NumFoundKeys1, NumAddedKeys),
|
||||||
{partial, Result2, _} = fetch_range_keysonly(Handle,
|
{partial,
|
||||||
FileMD,
|
Result2,
|
||||||
{o, "Bucket0000", "Key0000", null},
|
_} = fetch_range_keysonly(Handle,
|
||||||
{o, "Bucket9999", "Key9999", null},
|
FileMD,
|
||||||
32),
|
{o, "Bucket0000", "Key0000", null},
|
||||||
|
{o, "Bucket9999", "Key9999", null},
|
||||||
|
32),
|
||||||
?assertMatch(32 * 128, length(Result2)),
|
?assertMatch(32 * 128, length(Result2)),
|
||||||
{partial, Result3, _} = fetch_range_keysonly(Handle,
|
{partial,
|
||||||
FileMD,
|
Result3,
|
||||||
{o, "Bucket0000", "Key0000", null},
|
_} = fetch_range_keysonly(Handle,
|
||||||
{o, "Bucket9999", "Key9999", null},
|
FileMD,
|
||||||
4),
|
{o, "Bucket0000", "Key0000", null},
|
||||||
|
{o, "Bucket9999", "Key9999", null},
|
||||||
|
4),
|
||||||
?assertMatch(4 * 128, length(Result3)),
|
?assertMatch(4 * 128, length(Result3)),
|
||||||
ok = file:close(Handle),
|
ok = file:close(Handle),
|
||||||
ok = file:delete(Filename).
|
ok = file:delete(Filename).
|
||||||
|
|
||||||
|
hashclash_test() ->
|
||||||
|
Filename = "../test/hashclash.sft",
|
||||||
|
Key1 = {o, "Bucket", "Key838068", null},
|
||||||
|
Key99 = {o, "Bucket", "Key898982", null},
|
||||||
|
KeyNF = {o, "Bucket", "Key539122", null},
|
||||||
|
?assertMatch(4, hash_for_segmentid({keyonly, Key1})),
|
||||||
|
?assertMatch(4, hash_for_segmentid({keyonly, Key99})),
|
||||||
|
?assertMatch(4, hash_for_segmentid({keyonly, KeyNF})),
|
||||||
|
KeyList = lists:foldl(fun(X, Acc) ->
|
||||||
|
Key = {o,
|
||||||
|
"Bucket",
|
||||||
|
"Key8400" ++ integer_to_list(X),
|
||||||
|
null},
|
||||||
|
Value = {X, {active, infinity}, null},
|
||||||
|
Acc ++ [{Key, Value}] end,
|
||||||
|
[],
|
||||||
|
lists:seq(10,98)),
|
||||||
|
KeyListToUse = [{Key1, {1, {active, infinity}, null}}|KeyList]
|
||||||
|
++ [{Key99, {99, {active, infinity}, null}}],
|
||||||
|
{InitHandle, InitFileMD} = create_file(Filename),
|
||||||
|
{Handle, _FileMD, _Rem} = complete_file(InitHandle, InitFileMD,
|
||||||
|
KeyListToUse, [],
|
||||||
|
#level{level=1}),
|
||||||
|
ok = file:close(Handle),
|
||||||
|
{ok, SFTr, _KeyExtremes} = sft_open(Filename),
|
||||||
|
?assertMatch({Key1, {1, {active, infinity}, null}},
|
||||||
|
sft_get(SFTr, Key1)),
|
||||||
|
?assertMatch({Key99, {99, {active, infinity}, null}},
|
||||||
|
sft_get(SFTr, Key99)),
|
||||||
|
?assertMatch(not_present,
|
||||||
|
sft_get(SFTr, KeyNF)),
|
||||||
|
|
||||||
|
ok = sft_clear(SFTr).
|
||||||
|
|
||||||
filename_test() ->
|
filename_test() ->
|
||||||
FN1 = "../tmp/filename",
|
FN1 = "../tmp/filename",
|
||||||
FN2 = "../tmp/filename.pnd",
|
FN2 = "../tmp/filename.pnd",
|
||||||
|
@ -1918,4 +2001,23 @@ filename_test() ->
|
||||||
"../tmp/subdir/file_name.sft"},
|
"../tmp/subdir/file_name.sft"},
|
||||||
generate_filenames(FN3)).
|
generate_filenames(FN3)).
|
||||||
|
|
||||||
|
empty_file_test() ->
|
||||||
|
{ok, Pid, _Reply} = sft_new("../test/emptyfile.pnd", [], [], 1),
|
||||||
|
?assertMatch(not_present, sft_get(Pid, "Key1")),
|
||||||
|
?assertMatch([], sft_getkvrange(Pid, all, all, 16)),
|
||||||
|
ok = sft_clear(Pid).
|
||||||
|
|
||||||
|
|
||||||
|
nonsense_coverage_test() ->
|
||||||
|
{ok, Pid} = gen_fsm:start(?MODULE, [], []),
|
||||||
|
undefined = gen_fsm:sync_send_all_state_event(Pid, nonsense),
|
||||||
|
ok = gen_fsm:send_all_state_event(Pid, nonsense),
|
||||||
|
?assertMatch({next_state, reader, #state{}}, handle_info(nonsense,
|
||||||
|
reader,
|
||||||
|
#state{})),
|
||||||
|
?assertMatch({ok, reader, #state{}}, code_change(nonsense,
|
||||||
|
reader,
|
||||||
|
#state{},
|
||||||
|
nonsense)).
|
||||||
|
|
||||||
-endif.
|
-endif.
|
|
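Editorial note, as an illustrative sketch only: the module above is now a gen_fsm with starting, reader and delete_pending states, and the public API wrappers are outside these hunks. A synchronous fetch is assumed to reach the reader/3 state function along these lines (the wrapper name and use of gen_fsm:sync_send_event/3 are assumptions, not shown in the diff):

    %% Assumed thin wrapper - illustrative only.
    sft_get(Pid, Key) ->
        gen_fsm:sync_send_event(Pid, {get_kv, Key}, infinity).
    %% The event is then handled by reader({get_kv, Key}, _From, State) above,
    %% which fetches from the file handle and replies while remaining in the
    %% reader state.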
@@ -62,8 +62,7 @@ simple_put_fetch_head_delete(_Config) ->
    ok = leveled_bookie:book_close(Bookie3),
    {ok, Bookie4} = leveled_bookie:book_start(StartOpts2),
    not_found = leveled_bookie:book_get(Bookie4, "Bucket1", "Key2"),
    ok = leveled_bookie:book_destroy(Bookie4).

many_put_fetch_head(_Config) ->
    RootPath = testutil:reset_filestructure(),
@@ -98,8 +97,7 @@ many_put_fetch_head(_Config) ->
    {ok, Bookie3} = leveled_bookie:book_start(StartOpts2),
    testutil:check_forlist(Bookie3, ChkList2A),
    testutil:check_forobject(Bookie3, TestObject),
    ok = leveled_bookie:book_destroy(Bookie3).

journal_compaction(_Config) ->
    RootPath = testutil:reset_filestructure(),
@@ -144,30 +142,67 @@ journal_compaction(_Config) ->
    %% Now replace all the other objects
    ObjList2 = testutil:generate_objects(40000, 10002),
    testutil:riakload(Bookie1, ObjList2),

    ok = leveled_bookie:book_compactjournal(Bookie1, 30000),

    testutil:wait_for_compaction(Bookie1),
    % Start snapshot - should not stop deletions
    {ok,
        {PclClone, _LdgCache},
        InkClone} = leveled_bookie:book_snapshotstore(Bookie1,
                                                        self(),
                                                        300000),
    % Wait 2 seconds for files to be deleted
    WasteFP = RootPath ++ "/journal/journal_files/waste",
    lists:foldl(fun(X, Found) ->
                    case Found of
                        true ->
                            Found;
                        false ->
                            {ok, Files} = file:list_dir(WasteFP),
                            if
                                length(Files) > 0 ->
                                    io:format("Deleted files found~n"),
                                    true;
                                length(Files) == 0 ->
                                    timer:sleep(X),
                                    false
                            end
                    end
                    end,
                false,
                [2000,2000,2000,2000,2000,2000]),
    {ok, ClearedJournals} = file:list_dir(WasteFP),
    io:format("~w ClearedJournals found~n", [length(ClearedJournals)]),
    true = length(ClearedJournals) > 0,

    ChkList3 = lists:sublist(lists:sort(ObjList2), 500),
    testutil:check_forlist(Bookie1, ChkList3),

    ok = leveled_penciller:pcl_close(PclClone),
    ok = leveled_inker:ink_close(InkClone),

    ok = leveled_bookie:book_close(Bookie1),
    % Restart
    {ok, Bookie2} = leveled_bookie:book_start(StartOpts1),
    testutil:check_forobject(Bookie2, TestObject),
    testutil:check_forlist(Bookie2, ChkList3),

    ok = leveled_bookie:book_close(Bookie2),

    StartOpts2 = [{root_path, RootPath},
                    {max_journalsize, 10000000},
                    {max_run_length, 1},
                    {waste_retention_period, 1}],
    {ok, Bookie3} = leveled_bookie:book_start(StartOpts2),
    ok = leveled_bookie:book_compactjournal(Bookie3, 30000),
    testutil:wait_for_compaction(Bookie3),
    ok = leveled_bookie:book_close(Bookie3),

    {ok, ClearedJournalsPC} = file:list_dir(WasteFP),
    io:format("~w ClearedJournals found~n", [length(ClearedJournalsPC)]),
    true = length(ClearedJournalsPC) == 0,

    testutil:reset_filestructure(10000).

@@ -422,7 +457,9 @@ space_clear_ondelete(_Config) ->
                                            no_check,
                                            G2),

    FoldKeysFun = fun(B, K, Acc) -> Acc ++ [{B, K}] end,
    AllKeyQuery = {keylist, o_rkv, {FoldKeysFun, []}},
    {async, F1} = leveled_bookie:book_returnfolder(Book1, AllKeyQuery),
    SW1 = os:timestamp(),
    KL1 = F1(),
    ok = case length(KL1) of
@@ -488,7 +525,7 @@ space_clear_ondelete(_Config) ->
                "after deletes~n",
            [PointB_Journals, length(FNsB_L)]),

    {async, F2} = leveled_bookie:book_returnfolder(Book1, AllKeyQuery),
    SW3 = os:timestamp(),
    KL2 = F2(),
    ok = case length(KL2) of
@@ -500,7 +537,7 @@ space_clear_ondelete(_Config) ->
    ok = leveled_bookie:book_close(Book1),

    {ok, Book2} = leveled_bookie:book_start(StartOpts1),
    {async, F3} = leveled_bookie:book_returnfolder(Book2, AllKeyQuery),
    SW4 = os:timestamp(),
    KL3 = F3(),
    ok = case length(KL3) of
@@ -40,6 +40,26 @@ small_load_with2i(_Config) ->
    testutil:check_forlist(Bookie1, ChkList1),
    testutil:check_forobject(Bookie1, TestObject),

    % Find all keys index, and then just the last key
    IdxQ1 = {index_query,
                "Bucket",
                {fun testutil:foldkeysfun/3, []},
                {"idx1_bin", "#", "~"},
                {true, undefined}},
    {async, IdxFolder} = leveled_bookie:book_returnfolder(Bookie1, IdxQ1),
    KeyList1 = lists:usort(IdxFolder()),
    true = 10000 == length(KeyList1),
    {LastTerm, LastKey} = lists:last(KeyList1),
    IdxQ2 = {index_query,
                {"Bucket", LastKey},
                {fun testutil:foldkeysfun/3, []},
                {"idx1_bin", LastTerm, "~"},
                {false, undefined}},
    {async, IdxFolderLK} = leveled_bookie:book_returnfolder(Bookie1, IdxQ2),
    KeyList2 = lists:usort(IdxFolderLK()),
    io:format("List should be last key ~w ~w~n", [LastKey, KeyList2]),
    true = 1 == length(KeyList2),

    %% Delete the objects from the ChkList removing the indexes
    lists:foreach(fun({_RN, Obj, Spc}) ->
                        DSpc = lists:map(fun({add, F, T}) -> {remove, F, T}
@@ -75,17 +95,13 @@ small_load_with2i(_Config) ->
    true = 9900 == length(KeyHashList2),
    true = 9900 == length(KeyHashList3),

    SumIntFun = fun(_B, _K, V, Acc) ->
                        [C] = V#r_object.contents,
                        {I, _Bin} = C#r_content.value,
                        Acc + I
                        end,
    BucketObjQ = {foldobjects_bybucket, ?RIAK_TAG, "Bucket", {SumIntFun, 0}},
    {async, Sum1} = leveled_bookie:book_returnfolder(Bookie1, BucketObjQ),
    Total1 = Sum1(),
    true = Total1 > 100000,

@@ -93,15 +109,19 @@ small_load_with2i(_Config) ->

    {ok, Bookie2} = leveled_bookie:book_start(StartOpts1),

    {async, Sum2} = leveled_bookie:book_returnfolder(Bookie2, BucketObjQ),
    Total2 = Sum2(),
    true = Total2 == Total1,

    FoldBucketsFun = fun(B, Acc) -> sets:add_element(B, Acc) end,
    % Should not find any buckets - as there is a non-binary bucket, and no
    % binary ones
    BucketListQuery = {binary_bucketlist,
                        ?RIAK_TAG,
                        {FoldBucketsFun, sets:new()}},
    {async, BL} = leveled_bookie:book_returnfolder(Bookie2, BucketListQuery),
    true = sets:size(BL()) == 0,

    ok = leveled_bookie:book_close(Bookie2),
    testutil:reset_filestructure().

@@ -109,7 +129,8 @@ small_load_with2i(_Config) ->
query_count(_Config) ->
    RootPath = testutil:reset_filestructure(),
    {ok, Book1} = leveled_bookie:book_start(RootPath, 2000, 50000000),
    BucketBin = list_to_binary("Bucket"),
    {TestObject, TestSpec} = testutil:generate_testobject(BucketBin,
                                                            "Key1",
                                                            "Value1",
                                                            [],
@@ -123,7 +144,7 @@ query_count(_Config) ->
    Indexes = testutil:get_randomindexes_generator(8),
    SW = os:timestamp(),
    ObjL1 = testutil:generate_objects(10000,
                                        binary_uuid,
                                        [],
                                        V,
                                        Indexes),
@@ -137,7 +158,7 @@ query_count(_Config) ->
    testutil:check_forobject(Book1, TestObject),
    Total = lists:foldl(fun(X, Acc) ->
                            IdxF = "idx" ++ integer_to_list(X) ++ "_bin",
                            T = count_termsonindex(BucketBin,
                                                    IdxF,
                                                    Book1,
                                                    ?KEY_ONLY),
@@ -151,13 +172,13 @@ query_count(_Config) ->
        640000 ->
            ok
    end,
    Index1Count = count_termsonindex(BucketBin,
                                        "idx1_bin",
                                        Book1,
                                        ?KEY_ONLY),
    ok = leveled_bookie:book_close(Book1),
    {ok, Book2} = leveled_bookie:book_start(RootPath, 1000, 50000000),
    Index1Count = count_termsonindex(BucketBin,
                                        "idx1_bin",
                                        Book2,
                                        ?KEY_ONLY),
@@ -166,7 +187,7 @@ query_count(_Config) ->
                        {ok, Regex} = re:compile("[0-9]+" ++
                                                    Name),
                        SW = os:timestamp(),
                        T = count_termsonindex(BucketBin,
                                                "idx1_bin",
                                                Book2,
                                                {false,
@@ -187,25 +208,21 @@ query_count(_Config) ->
            ok
    end,
    {ok, RegMia} = re:compile("[0-9]+Mia"),
    Query1 = {index_query,
                BucketBin,
                {fun testutil:foldkeysfun/3, []},
                {"idx2_bin", "2000", "2000~"},
                {false, RegMia}},
    {async,
        Mia2KFolder1} = leveled_bookie:book_returnfolder(Book2, Query1),
    Mia2000Count1 = length(Mia2KFolder1()),
    Query2 = {index_query,
                BucketBin,
                {fun testutil:foldkeysfun/3, []},
                {"idx2_bin", "2000", "2001"},
                {true, undefined}},
    {async,
        Mia2KFolder2} = leveled_bookie:book_returnfolder(Book2, Query2),
    Mia2000Count2 = lists:foldl(fun({Term, _Key}, Acc) ->
                                    case re:run(Term, RegMia) of
                                        nomatch ->
@@ -222,29 +239,30 @@ query_count(_Config) ->
            ok
    end,
    {ok, RxMia2K} = re:compile("^2000[0-9]+Mia"),
    Query3 = {index_query,
                BucketBin,
                {fun testutil:foldkeysfun/3, []},
                {"idx2_bin", "1980", "2100"},
                {false, RxMia2K}},
    {async,
        Mia2KFolder3} = leveled_bookie:book_returnfolder(Book2, Query3),
    Mia2000Count1 = length(Mia2KFolder3()),

    V9 = testutil:get_compressiblevalue(),
    Indexes9 = testutil:get_randomindexes_generator(8),
    [{_RN, Obj9, Spc9}] = testutil:generate_objects(1,
                                                    binary_uuid,
                                                    [],
                                                    V9,
                                                    Indexes9),
    ok = testutil:book_riakput(Book2, Obj9, Spc9),
    R9 = lists:map(fun({add, IdxF, IdxT}) ->
                        Q = {index_query,
                                BucketBin,
                                {fun testutil:foldkeysfun/3, []},
                                {IdxF, IdxT, IdxT},
                                ?KEY_ONLY},
                        R = leveled_bookie:book_returnfolder(Book2, Q),
                        {async, Fldr} = R,
                        case length(Fldr()) of
                            X when X > 0 ->
@@ -256,13 +274,12 @@ query_count(_Config) ->
                    Spc9),
    ok = testutil:book_riakput(Book2, Obj9, Spc9Del),
    lists:foreach(fun({IdxF, IdxT, X}) ->
                        Q = {index_query,
                                BucketBin,
                                {fun testutil:foldkeysfun/3, []},
                                {IdxF, IdxT, IdxT},
                                ?KEY_ONLY},
                        R = leveled_bookie:book_returnfolder(Book2, Q),
                        {async, Fldr} = R,
                        case length(Fldr()) of
                            Y ->
@@ -273,13 +290,12 @@ query_count(_Config) ->
    ok = leveled_bookie:book_close(Book2),
    {ok, Book3} = leveled_bookie:book_start(RootPath, 2000, 50000000),
    lists:foreach(fun({IdxF, IdxT, X}) ->
                        Q = {index_query,
                                BucketBin,
                                {fun testutil:foldkeysfun/3, []},
                                {IdxF, IdxT, IdxT},
                                ?KEY_ONLY},
                        R = leveled_bookie:book_returnfolder(Book3, Q),
                        {async, Fldr} = R,
                        case length(Fldr()) of
                            Y ->
@@ -291,13 +307,12 @@ query_count(_Config) ->
    ok = leveled_bookie:book_close(Book3),
    {ok, Book4} = leveled_bookie:book_start(RootPath, 2000, 50000000),
    lists:foreach(fun({IdxF, IdxT, X}) ->
                        Q = {index_query,
                                BucketBin,
                                {fun testutil:foldkeysfun/3, []},
                                {IdxF, IdxT, IdxT},
                                ?KEY_ONLY},
                        R = leveled_bookie:book_returnfolder(Book4, Q),
                        {async, Fldr} = R,
                        case length(Fldr()) of
                            X ->
@@ -306,7 +321,60 @@ query_count(_Config) ->
                    end,
                    R9),
    testutil:check_forobject(Book4, TestObject),

    FoldBucketsFun = fun(B, Acc) -> sets:add_element(B, Acc) end,
    BucketListQuery = {binary_bucketlist,
                        ?RIAK_TAG,
                        {FoldBucketsFun, sets:new()}},
    {async, BLF1} = leveled_bookie:book_returnfolder(Book4, BucketListQuery),
    SW_QA = os:timestamp(),
    BucketSet1 = BLF1(),
    io:format("Bucket set returned in ~w microseconds",
                [timer:now_diff(os:timestamp(), SW_QA)]),

    true = sets:size(BucketSet1) == 1,
    true = sets:is_element(list_to_binary("Bucket"), BucketSet1),

    ObjList10A = testutil:generate_objects(5000,
                                            binary_uuid,
                                            [],
                                            V9,
                                            Indexes9,
                                            "BucketA"),
    ObjList10B = testutil:generate_objects(5000,
                                            binary_uuid,
                                            [],
                                            V9,
                                            Indexes9,
                                            "BucketB"),
    ObjList10C = testutil:generate_objects(5000,
                                            binary_uuid,
                                            [],
                                            V9,
                                            Indexes9,
                                            "BucketC"),
    testutil:riakload(Book4, ObjList10A),
    testutil:riakload(Book4, ObjList10B),
    testutil:riakload(Book4, ObjList10C),
    {async, BLF2} = leveled_bookie:book_returnfolder(Book4, BucketListQuery),
    SW_QB = os:timestamp(),
    BucketSet2 = BLF2(),
    io:format("Bucket set returned in ~w microseconds",
                [timer:now_diff(os:timestamp(), SW_QB)]),
    true = sets:size(BucketSet2) == 4,

    ok = leveled_bookie:book_close(Book4),

    {ok, Book5} = leveled_bookie:book_start(RootPath, 2000, 50000000),
    {async, BLF3} = leveled_bookie:book_returnfolder(Book5, BucketListQuery),
    SW_QC = os:timestamp(),
    BucketSet3 = BLF3(),
    io:format("Bucket set returned in ~w microseconds",
                [timer:now_diff(os:timestamp(), SW_QC)]),
    true = sets:size(BucketSet3) == 4,

    ok = leveled_bookie:book_close(Book5),

    testutil:reset_filestructure().

@@ -316,13 +384,12 @@ count_termsonindex(Bucket, IdxField, Book, QType) ->
                        SW = os:timestamp(),
                        ST = integer_to_list(X),
                        ET = ST ++ "~",
                        Q = {index_query,
                                Bucket,
                                {fun testutil:foldkeysfun/3, []},
                                {IdxField, ST, ET},
                                QType},
                        R = leveled_bookie:book_returnfolder(Book, Q),
                        {async, Folder} = R,
                        Items = length(Folder()),
                        io:format("2i query from term ~s on index ~s took " ++
@@ -3,12 +3,14 @@
-include("include/leveled.hrl").
-export([all/0]).
-export([retain_strategy/1,
            recovr_strategy/1,
            aae_bustedjournal/1,
            journal_compaction_bustedjournal/1
            ]).

all() -> [
            retain_strategy,
            recovr_strategy,
            aae_bustedjournal,
            journal_compaction_bustedjournal
            ].

@@ -40,6 +42,50 @@ retain_strategy(_Config) ->
    testutil:reset_filestructure().


recovr_strategy(_Config) ->
    RootPath = testutil:reset_filestructure(),
    BookOpts = [{root_path, RootPath},
                    {cache_size, 1000},
                    {max_journalsize, 5000000},
                    {reload_strategy, [{?RIAK_TAG, recovr}]}],

    R6 = rotating_object_check(BookOpts, "Bucket6", 6400),
    {ok, AllSpcL, V4} = R6,
    leveled_penciller:clean_testdir(proplists:get_value(root_path, BookOpts) ++
                                    "/ledger"),
    {ok, Book1} = leveled_bookie:book_start(BookOpts),

    {TestObject, TestSpec} = testutil:generate_testobject(),
    ok = testutil:book_riakput(Book1, TestObject, TestSpec),
    ok = testutil:book_riakdelete(Book1,
                                    TestObject#r_object.bucket,
                                    TestObject#r_object.key,
                                    []),

    lists:foreach(fun({K, _SpcL}) ->
                        {ok, OH} = testutil:book_riakhead(Book1, "Bucket6", K),
                        K = OH#r_object.key,
                        {ok, OG} = testutil:book_riakget(Book1, "Bucket6", K),
                        V = testutil:get_value(OG),
                        true = V == V4
                        end,
                    lists:nthtail(6400, AllSpcL)),
    Q = fun(RT) -> {index_query,
                        "Bucket6",
                        {fun testutil:foldkeysfun/3, []},
                        {"idx1_bin", "#", "~"},
                        {RT, undefined}}
                    end,
    {async, TFolder} = leveled_bookie:book_returnfolder(Book1, Q(true)),
    KeyTermList = TFolder(),
    {async, KFolder} = leveled_bookie:book_returnfolder(Book1, Q(false)),
    KeyList = lists:usort(KFolder()),
    io:format("KeyList ~w KeyTermList ~w~n",
                [length(KeyList), length(KeyTermList)]),
    true = length(KeyList) == 6400,
    true = length(KeyList) < length(KeyTermList),
    true = length(KeyTermList) < 25600.


aae_bustedjournal(_Config) ->
    RootPath = testutil:reset_filestructure(),
@@ -59,8 +105,9 @@ aae_bustedjournal(_Config) ->
    testutil:corrupt_journal(RootPath, HeadF, 1000, 2048, 1000),
    {ok, Bookie2} = leveled_bookie:book_start(StartOpts),

    FoldKeysFun = fun(B, K, Acc) -> Acc ++ [{B, K}] end,
    AllKeyQuery = {keylist, o_rkv, {FoldKeysFun, []}},
    {async, KeyF} = leveled_bookie:book_returnfolder(Bookie2, AllKeyQuery),
    KeyList = KeyF(),
    20001 = length(KeyList),
    HeadCount = lists:foldl(fun({B, K}, Acc) ->
@@ -39,7 +39,9 @@
            restore_file/2,
            restore_topending/2,
            find_journals/1,
            riak_hash/1,
            wait_for_compaction/1,
            foldkeysfun/3]).

-define(RETURN_TERMS, {true, undefined}).
-define(SLOWOFFER_DELAY, 5).

@@ -85,7 +87,20 @@ reset_filestructure(Wait) ->
    leveled_penciller:clean_testdir(RootPath ++ "/ledger"),
    RootPath.

wait_for_compaction(Bookie) ->
    F = fun leveled_bookie:book_islastcompactionpending/1,
    lists:foldl(fun(X, Pending) ->
                    case Pending of
                        false ->
                            false;
                        true ->
                            io:format("Loop ~w waiting for journal "
                                ++ "compaction to complete~n", [X]),
                            timer:sleep(5000),
                            F(Bookie)
                    end end,
                true,
                lists:seq(1, 15)).

check_bucket_stats(Bookie, Bucket) ->
    FoldSW1 = os:timestamp(),
@@ -216,6 +231,17 @@ generate_objects(Count, KeyNumber, ObjL, Value, IndexGen) ->

generate_objects(0, _KeyNumber, ObjL, _Value, _IndexGen, _Bucket) ->
    ObjL;
generate_objects(Count, binary_uuid, ObjL, Value, IndexGen, Bucket) ->
    {Obj1, Spec1} = set_object(list_to_binary(Bucket),
                                list_to_binary(leveled_codec:generate_uuid()),
                                Value,
                                IndexGen),
    generate_objects(Count - 1,
                        binary_uuid,
                        ObjL ++ [{random:uniform(), Obj1, Spec1}],
                        Value,
                        IndexGen,
                        Bucket);
generate_objects(Count, uuid, ObjL, Value, IndexGen, Bucket) ->
    {Obj1, Spec1} = set_object(Bucket,
                                leveled_codec:generate_uuid(),
@@ -314,6 +340,8 @@ get_randomdate() ->
                    [Year, Month, Day, Hour, Minute, Second])).


foldkeysfun(_Bucket, Item, Acc) -> Acc ++ [Item].

check_indexed_objects(Book, B, KSpecL, V) ->
    % Check all objects match, return what should be the results of an all
    % index query
@@ -329,6 +357,7 @@ check_indexed_objects(Book, B, KSpecL, V) ->
    R = leveled_bookie:book_returnfolder(Book,
                                            {index_query,
                                                B,
                                                {fun foldkeysfun/3, []},
                                                {"idx1_bin",
                                                    "0",
                                                    "~"},
@@ -391,7 +420,10 @@ put_altered_indexed_objects(Book, Bucket, KSpecL, RemoveOld2i) ->
                                        V,
                                        IndexGen,
                                        AddSpc),
                    case book_riakput(Book, O, AltSpc) of
                        ok -> ok;
                        pause -> timer:sleep(?SLOWOFFER_DELAY)
                    end,
                    {K, AltSpc} end,
                KSpecL),
    {RplKSpecL, V}.
@@ -411,6 +443,9 @@ rotating_object_check(RootPath, B, NumberOfObjects) ->
    ok = testutil:check_indexed_objects(Book2, B, KSpcL3, V3),
    {KSpcL4, V4} = testutil:put_altered_indexed_objects(Book2, B, KSpcL3),
    ok = testutil:check_indexed_objects(Book2, B, KSpcL4, V4),
    Query = {keylist, ?RIAK_TAG, B, {fun foldkeysfun/3, []}},
    {async, BList} = leveled_bookie:book_returnfolder(Book2, Query),
    true = NumberOfObjects == length(BList()),
    ok = leveled_bookie:book_close(Book2),
    ok.
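The recurring change across the test hunks above is that index and key-list folds now carry an explicit {FoldFun, InitAcc} pair. A minimal sketch of the query shape, using the testutil:foldkeysfun/3 helper added above (the bucket and index names here are illustrative, not taken from any particular test):

    %% Illustrative query only - bucket/index names are placeholders.
    Q = {index_query,
            "Bucket",
            {fun testutil:foldkeysfun/3, []},
            {"idx1_bin", "#", "~"},
            {true, undefined}},
    {async, Folder} = leveled_bookie:book_returnfolder(Bookie, Q),
    TermKeyList = Folder().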
21  test/volume/examples/eleveldb_load.config  (new file)
@@ -0,0 +1,21 @@
{mode, max}.

{duration, 30}.

{concurrent, 24}.

{driver, basho_bench_driver_eleveldb}.

{key_generator, {int_to_bin_bigendian,{uniform_int, 1000000}}}.

{value_generator, {fixed_bin, 8000}}.

{operations, [{get, 5}, {put, 1}]}.

%% the second element in the list below (e.g., "../../public/eleveldb") must
%% point to the relevant directory of a eleveldb installation
{code_paths, ["../eleveldb/ebin"]}.

{eleveldb_dir, "/tmp/eleveldb.bench"}.
{eleveldb_num_instances, 12}.
21  test/volume/examples/eleveldb_pop.config  (new file)
@@ -0,0 +1,21 @@
{mode, max}.

{duration, 30}.

{concurrent, 24}.

{driver, basho_bench_driver_eleveldb}.

{key_generator, {int_to_bin_bigendian,{partitioned_sequential_int, 10000000}}}.

{value_generator, {fixed_bin, 8000}}.

{operations, [{put, 1}]}.

%% the second element in the list below (e.g., "../../public/eleveldb") must
%% point to the relevant directory of a eleveldb installation
{code_paths, ["../eleveldb/ebin"]}.

{eleveldb_dir, "/tmp/eleveldb.bench"}.
{eleveldb_num_instances, 12}.
21  test/volume/examples/eleveleddb_load.config  (new file)
@@ -0,0 +1,21 @@
{mode, max}.

{duration, 30}.

{concurrent, 24}.

{driver, basho_bench_driver_eleveleddb}.

{key_generator, {int_to_bin_bigendian,{uniform_int, 1000000}}}.

{value_generator, {fixed_bin, 8000}}.

{operations, [{get, 5}, {put, 1}]}.

%% the second element in the list below (e.g., "../../public/eleveldb") must
%% point to the relevant directory of a eleveldb installation
{code_paths, ["../eleveleddb/_build/default/lib/eleveleddb/ebin"]}.

{eleveleddb_dir, "/tmp/eleveleddb.bench"}.
{eleveleddb_num_instances, 12}.
21  test/volume/examples/eleveleddb_pop.config  (new file)
@@ -0,0 +1,21 @@
{mode, max}.

{duration, 30}.

{concurrent, 24}.

{driver, basho_bench_driver_eleveleddb}.

{key_generator, {int_to_bin_bigendian,{partitioned_sequential_int, 10000000}}}.

{value_generator, {fixed_bin, 8000}}.

{operations, [{put, 1}]}.

%% the second element in the list below (e.g., "../../public/eleveldb") must
%% point to the relevant directory of a eleveleddb installation
{code_paths, ["../eleveleddb/_build/default/lib/eleveleddb/ebin"]}.

{eleveleddb_dir, "/tmp/eleveleddb.bench"}.
{eleveleddb_num_instances, 12}.
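The four example configs above are plain Erlang term files, so they can be sanity-checked before a basho_bench run. A minimal sketch, with the relative path below assumed purely for illustration:

    %% Illustrative check only - the path is an assumption.
    {ok, Terms} = file:consult("test/volume/examples/eleveleddb_load.config"),
    {driver, basho_bench_driver_eleveleddb} = lists:keyfind(driver, 1, Terms),
    io:format("~w config terms loaded~n", [length(Terms)]).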
BIN  test/volume/output/leveldb_load.png   (new binary file, 316 KiB)
BIN  test/volume/output/leveldb_pop.png    (new binary file, 315 KiB)
BIN  test/volume/output/leveled_load.png   (new binary file, 333 KiB)
BIN  test/volume/output/leveled_pop.png    (new binary file, 274 KiB)
93  test/volume/src/basho_bench_driver_eleveleddb.erl  (new file)
@@ -0,0 +1,93 @@
%% -------------------------------------------------------------------
%%
%% Copyright (c) 2015 Basho Techonologies
%%
%% This file is provided to you under the Apache License,
%% Version 2.0 (the "License"); you may not use this file
%% except in compliance with the License.  You may obtain
%% a copy of the License at
%%
%%   http://www.apache.org/licenses/LICENSE-2.0
%%
%% Unless required by applicable law or agreed to in writing,
%% software distributed under the License is distributed on an
%% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
%% KIND, either express or implied. See the License for the
%% specific language governing permissions and limitations
%% under the License.
%%
%% -------------------------------------------------------------------

%% Raw eleveldb driver. It opens a number of eleveldb instances and assigns
%% one to each created worker in round robin fashion. So, for example, creating
%% 32 instances and 64 concurrent workers would bind a pair of workers to
%% each instance for all operations.
-module(basho_bench_driver_eleveleddb).

-export([new/1,
            run/4]).

% -include("basho_bench.hrl").

-record(state, {
                instance
                }).

get_instances() ->
    case basho_bench_config:get(eleveleddb_instances, undefined) of
        undefined ->
            Instances = start_instances(),
            % ?INFO("Instances started ~w~n", [Instances]),
            basho_bench_config:set(eleveleddb_instances, Instances),
            Instances;
        Instances ->
            Instances
    end.


start_instances() ->
    BaseDir = basho_bench_config:get(eleveleddb_dir, "."),
    Num = basho_bench_config:get(eleveleddb_num_instances, 1),
    % ?INFO("Starting up ~p eleveleddb instances under ~s .\n",
    %         [Num, BaseDir]),
    Refs = [begin
                Dir = filename:join(BaseDir, "instance." ++ integer_to_list(N)),
                % ?INFO("Opening eleveleddb instance in ~s\n", [Dir]),
                {ok, Ref} = leveled_bookie:book_start(Dir, 2000, 500000000),
                Ref
            end || N <- lists:seq(1, Num)],
    list_to_tuple(Refs).


new(Id) ->
    Instances = get_instances(),
    Count = size(Instances),
    Idx = ((Id - 1) rem Count) + 1,
    % ?INFO("Worker ~p using instance ~p.\n", [Id, Idx]),
    State = #state{instance = element(Idx, Instances)},
    {ok, State}.


run(get, KeyGen, _ValueGen, State = #state{instance = Ref}) ->
    Key = KeyGen(),
    case leveled_bookie:book_get(Ref, "PerfBucket", Key, o) of
        {ok, _Value} ->
            {ok, State};
        not_found ->
            {ok, State};
        {error, Reason} ->
            {error, Reason}
    end;
run(put, KeyGen, ValGen, State = #state{instance = Ref}) ->
    Key = KeyGen(),
    Value = ValGen(),
    case leveled_bookie:book_put(Ref, "PerfBucket", Key, Value, []) of
        ok ->
            {ok, State};
        pause ->
            timer:sleep(1000),
            {ok, State};
        {error, Reason} ->
            {error, Reason}
    end.
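As a usage sketch (not part of the change), a basho_bench worker drives this module by calling new/1 once and then run/4 per operation. The generators below are simple stand-ins for the ones basho_bench builds from the config, and basho_bench_config is assumed to be running so that new/1 can read the eleveleddb_* settings:

    %% Illustrative stand-in generators only.
    KeyGen = fun() -> <<(random:uniform(1000000)):64/integer-big>> end,
    ValGen = fun() -> crypto:rand_bytes(8000) end,
    {ok, S0} = basho_bench_driver_eleveleddb:new(1),
    {ok, S1} = basho_bench_driver_eleveleddb:run(put, KeyGen, ValGen, S0),
    {ok, _S2} = basho_bench_driver_eleveleddb:run(get, KeyGen, ValGen, S1).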