Add nonsense tests for nonsense coverage on standard methods.
Look at CDB search_hash_table - it looks like it doubled up on break-out conditions, so that one of them could never be hit.
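As a rough illustration of the intent (search_slots/3 and slot_value/2 below are hypothetical helpers, not leveled's CDB code), a slot probe should have exactly one break-out per outcome - an empty slot or a key match - with no second, unreachable exit:

    %% Minimal sketch of a CDB-style slot probe with a single break-out
    %% path per outcome (illustrative only).
    search_slots(_Key, [], _Table) ->
        missing;                          % probed every slot, no match
    search_slots(Key, [Slot|Rest], Table) ->
        case slot_value(Slot, Table) of
            empty ->
                missing;                  % empty slot ends the probe
            {Key, Value} ->
                {ok, Value};              % key found - the only other exit
            _OtherKey ->
                search_slots(Key, Rest, Table)
        end.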
Leveled will now signal the need for a pause due to back-pressure, but will not actually pause itself. The hope is that in a Riak implementation this pause can be managed by the put_fsm, and so not lock the store.
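A minimal sketch of the intended split of responsibilities (book_put/3 here is a hypothetical wrapper rather than the exact leveled API): the store reports the pressure, and the caller decides whether and where to sleep.

    %% Hypothetical caller-side handling of a back-pressure signal.
    put_with_backoff(Store, Key, Value) ->
        case book_put(Store, Key, Value) of
            ok ->
                ok;
            pause ->
                %% The store has taken the PUT but is signalling pressure;
                %% the caller (e.g. a riak put_fsm) chooses where to pause.
                timer:sleep(100),
                ok
        end.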
Clean the API of Riak-specific methods, and also resolve a timing issue in the simple_server unit test. Previously this would end up with missing data (and a lower sequence number after start) because the penciller_clerk timeout was relatively large in the context of this test. Now the timeout has been reduced so that the L0 slot is cleared by the time of the close. To make sure, an extra sleep has been added as a precaution to avoid any intermittent issues.
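The precaution amounts to something like the following shape in the test (load_final_keys/1 and close_store/1 are illustrative names, not the actual test helpers):

    shutdown_after_l0_clear(Penciller) ->
        %% Give the penciller_clerk time to clear the L0 slot before the
        %% close, so the test does not race the shutdown (illustrative).
        ok = load_final_keys(Penciller),
        timer:sleep(1000),
        close_store(Penciller).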
When there is write pressure on the penciller and it returns the push to the bookie, the bookie will now punish the next PUT (and itself) with a pause. The longer the back-pressure state has been in place, the more frequent the pauses become.
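A sketch of the idea, assuming the bookie records a slow-offer start time in its state (the field, function names and constants here are illustrative): the longer the pressure has persisted, the more likely a given PUT is to be paused.

    %% Decide whether to punish this PUT with a pause, given how long the
    %% back-pressure state has been in place (illustrative sketch only).
    maybe_pause(SlowOfferStartTime) ->
        SecondsUnderPressure =
            erlang:monotonic_time(second) - SlowOfferStartTime,
        %% Cap the chance of a pause, but make pauses more frequent the
        %% longer the penciller has been pushing back.
        PauseChance = min(1.0, SecondsUnderPressure / 30),
        case rand:uniform() < PauseChance of
            true -> timer:sleep(50);   % punish the PUT (and the bookie)
            false -> ok
        end.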
Previously under heavy load, as long as L0 was being cleared, the ledger would keep accepting. Now there is a formal limit on how far behind the work queue (of compactions required at other levels) can get before the brake is applied to new updates coming in.
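In rough terms (the queue representation, the threshold value and accept_push/2 are illustrative, though the 'returned' response is the one referred to later in these notes):

    %% Refuse the push when the backlog of compaction work is too deep,
    %% even if L0 itself is clear (illustrative sketch).
    -define(MAX_WORK_QUEUE_DEPTH, 4).

    accept_push(WorkQueue, L0Status) ->
        case {L0Status, length(WorkQueue) > ?MAX_WORK_QUEUE_DEPTH} of
            {clear, false} ->
                ok;                % accept the new updates
            _ ->
                returned           % apply the brake; the caller should pause
        end.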
Broken by the change to get a response on L0 completion: the SFT was informing the penciller of the filename passed in (without extension), not the completed one with the extension.
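The fix is in the direction of this sketch (the ".pnd" and ".sft" extensions and the function name are assumptions for illustration): report the completed filename, extension included, back to the penciller rather than the root name that was passed in.

    complete_l0_file(RootFilename) ->
        %% Both extensions here are assumptions for the sketch.
        CompletedFilename = RootFilename ++ ".sft",
        ok = file:rename(RootFilename ++ ".pnd", CompletedFilename),
        %% It is CompletedFilename, not RootFilename, that should be
        %% reported back to the penciller.
        {ok, CompletedFilename}.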
There were issues with how the Penciller behaves under heavy write pressure - most particularly where there are a large number of keys per update (i.e. 2i-heavy objects). Most immediately, the attempt to check whether the L0 file was ready slowed down the process of producing the L0 file - so back-pressure created more back-pressure.
Going forward the intention is to alter this more significantly, as the work queue can also build up unsustainably. There needs to be some pausing prompted by the bookie on 'returned', and the use of 'returned' when the work queue exceeds a threshold.
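Taken together, the intended interaction might look like the sketch below (penciller_push/2 and the tuple returns are illustrative of the plan, not a final API): the penciller answers immediately with ok or returned, and the bookie keeps its cache and pauses rather than polling the L0 file.

    %% Hypothetical bookie-side handling of a push under back-pressure.
    push_to_penciller(Penciller, LedgerCache) ->
        case penciller_push(Penciller, LedgerCache) of
            ok ->
                %% Push accepted; the bookie can empty its ledger cache.
                {ok, empty_cache};
            returned ->
                %% Work queue too deep or L0 still pending: keep the cache,
                %% pause briefly, and retry on a later PUT.
                timer:sleep(50),
                {returned, LedgerCache}
        end.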
Added a test of journal compaction with a registered snapshot, and it showed that the deletion of files did not correctly check the list of registered snapshots. Corrected.
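The corrected check is along these lines (treating the registered snapshots as {Pid, SQN} pairs is an assumption made for the illustration):

    %% Only delete a journal file once no registered snapshot could still
    %% need it (illustrative sketch, not the exact leveled code).
    can_delete(FileHighestSQN, RegisteredSnapshots) ->
        %% A snapshot registered at or below the file's highest sequence
        %% number may still need to read from it.
        not lists:any(fun({_SnapPid, SnapSQN}) ->
                              SnapSQN =< FileHighestSQN
                      end,
                      RegisteredSnapshots).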
This exposed a potential issue with not opening readers in binary_mode - so this now defaults to binary mode. Will add a test using an object folder to confirm values remain readable in rolled journals after shutdown/startup.
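For the readers this amounts to opening the file with the binary option, for example (a plain file:open sketch rather than the leveled CDB code itself):

    open_reader(FileName) ->
        %% binary ensures stored values are returned as binaries rather
        %% than lists of bytes when the rolled journal is re-read.
        file:open(FileName, [binary, raw, read, read_ahead]).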