Experiment with smaller scan width
When testing with large numbers of 2i terms (and hence more Riak metadata), there is a surge in slow response times when there are multiple concurrent merge events. This could be very short-term CPU starvation caused by the merge process, or perhaps delays waiting for the scan to complete - a smaller scan width may mean more interleaving and less latency?
parent c787e0cd78
commit 54534e725f
1 changed file with 1 addition and 2 deletions
@@ -69,8 +69,7 @@
 -define(COMPRESSION_LEVEL, 1).
 -define(BINARY_SETTINGS, [{compressed, ?COMPRESSION_LEVEL}]).
 % -define(LEVEL_BLOOM_BITS, [{0, 8}, {1, 10}, {2, 8}, {default, 6}]).
--define(MERGE_SCANWIDTH, 16).
--define(INDEX_MARKER_WIDTH, 16).
+-define(MERGE_SCANWIDTH, 4).
 -define(DISCARD_EXT, ".discarded").
 -define(DELETE_TIMEOUT, 10000).
 -define(TREE_TYPE, idxt).
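As a rough illustration of the hypothesis in the commit message - a hypothetical sketch only, not leveled's actual merge code - a merge loop that fetches ?MERGE_SCANWIDTH keys per pass yields to the scheduler between passes; with a narrower scan each pass is shorter, so a concurrent merge or query waits less before it gets to run:

%% Hypothetical sketch, assuming a batched merge loop; the module
%% and helper names are illustrative and not part of leveled.
-module(scanwidth_sketch).
-export([merge/2]).

-define(MERGE_SCANWIDTH, 4).    % 16 before this commit

merge(Source, Sink) ->
    merge_loop(Source, Sink, ?MERGE_SCANWIDTH).

merge_loop(Source, Sink, ScanWidth) ->
    case fetch_batch(Source, ScanWidth) of
        [] ->
            Sink;
        Batch ->
            UpdSink = write_batch(Sink, Batch),
            %% Yield between batches: with a scan width of 4 the gaps
            %% come four keys apart rather than sixteen, so other
            %% processes (a concurrent merge, a GET) wait for less time.
            erlang:yield(),
            merge_loop(Source, UpdSink, ScanWidth)
    end.

%% Stub helpers so the sketch compiles on its own.
fetch_batch(_Source, _ScanWidth) -> [].
write_batch(Sink, _Batch) -> Sink.

The trade-off is more passes per merge, which is why the commit is framed as an experiment rather than a fix.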