More conservative approach to ongoing work monitoring

As per the comments though - if we auto-restart the pclerk in the future this
will have to be reconsidered.

Perhaps a restarting pclerk should force a reset of this boolean on
startup, for example by making a different work_for_clerk request when in a
virgin state.
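
A minimal sketch of what that future change might look like, assuming the Penciller gates clerk work on a boolean held in its loop state and that work assignment goes through something like work_for_clerk. The module, function names and record field below are illustrative only, not the leveled API:

%% Illustrative only - not the leveled implementation. Shows the idea of
%% gating work on an ongoing_work boolean, and of a clerk restarting in a
%% virgin state forcing that boolean back to false so the Penciller does
%% not stay blocked forever after a clerk crash/restart.
-module(ongoing_work_sketch).
-export([maybe_assign_work/2, clerk_restarted/1]).

-record(pcl_state, {ongoing_work = false :: boolean()}).

%% Only hand work to the clerk when nothing is outstanding.
maybe_assign_work(#pcl_state{ongoing_work = true} = State, _Work) ->
    {blocked, State};
maybe_assign_work(#pcl_state{ongoing_work = false} = State, Work) ->
    {assign, Work, State#pcl_state{ongoing_work = true}}.

%% A clerk restarting in a virgin state asks for the flag to be reset,
%% e.g. via a distinct work_for_clerk request made on startup.
clerk_restarted(State) ->
    State#pcl_state{ongoing_work = false}.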
martinsumner 2017-02-09 23:49:12 +00:00
parent 793977b76c
commit 8077c70486


@@ -225,14 +225,16 @@ do_merge(KL1, KL2, SinkLevel, SinkB, RP, NewSQN, MaxSQN, Additions) ->
 
 return_deletions(ManifestSQN, PendingDeletionD) ->
-    case dict:find(ManifestSQN, PendingDeletionD) of
-        {ok, PendingDeletions} ->
-            leveled_log:log("PC021", [ManifestSQN]),
-            {PendingDeletions, dict:erase(ManifestSQN, PendingDeletionD)};
-        error ->
-            leveled_log:log("PC020", [ManifestSQN]),
-            {[], PendingDeletionD}
-    end.
+    % The returning of deletions had been separated out as a failure to fetch
+    % here had caused crashes of the clerk.  The root cause of the failure to
+    % fetch was the same clerk being asked to do the same work twice - and this
+    % should be blocked now by the ongoing_work boolean in the Penciller
+    % LoopData
+    %
+    % So this is now allowed to crash again
+    PendingDeletions = dict:fetch(ManifestSQN, PendingDeletionD),
+    leveled_log:log("PC021", [ManifestSQN]),
+    {PendingDeletions, dict:erase(ManifestSQN, PendingDeletionD)}.
 
 %%%============================================================================
 %%% Test
@@ -240,13 +242,6 @@ return_deletions(ManifestSQN, PendingDeletionD) ->
 -ifdef(TEST).
 
-return_deletions_test() ->
-    % During volume tests there would occasionally be a deletion prompt with
-    % an empty pending deletions dictionary.  Don't understand why this would
-    % happen - so we check here that at least it does not kill the clerk
-    R = {[], dict:new()},
-    ?assertMatch(R, return_deletions(20, dict:new())).
-
 generate_randomkeys(Count, BucketRangeLow, BucketRangeHigh) ->
     generate_randomkeys(Count, [], BucketRangeLow, BucketRangeHigh).
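
For reference, the behavioural difference the change above relies on (an illustration only, not part of the commit): dict:find/2 returns the atom error for a missing ManifestSQN, whereas dict:fetch/2 raises a badarg error, so an unexpected missing entry now crashes the clerk rather than being silently absorbed.

%% Illustration only - a standalone eunit test of stdlib dict behaviour,
%% not part of leveled.
-module(fetch_vs_find_sketch).
-include_lib("eunit/include/eunit.hrl").

fetch_vs_find_test() ->
    D = dict:store(20, [pending_file], dict:new()),
    % present key: both calls succeed
    ?assertMatch({ok, [pending_file]}, dict:find(20, D)),
    ?assertMatch([pending_file], dict:fetch(20, D)),
    % missing key: find returns error, fetch raises badarg
    ?assertMatch(error, dict:find(21, D)),
    ?assertError(badarg, dict:fetch(21, D)).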