From 022997b2e673383642e7e16b358da8acb12ed951 Mon Sep 17 00:00:00 2001
From: Martin Sumner
Date: Fri, 8 Jun 2018 12:34:10 +0100
Subject: [PATCH] Coverage issue

The scan_table situation, where the query needs to be start-inclusive,
was consistently getting coverage. It was less likely to get coverage
with smaller cache sizes.

It is not clear why this wasn't being triggered before. Perhaps because
of the erroneous jitter setting?

Multiple cache sizes are now tested to try to make sure the test is
always in line with expectations.
---
 src/leveled_bookie.erl | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/src/leveled_bookie.erl b/src/leveled_bookie.erl
index b8979cf..2a41ee5 100644
--- a/src/leveled_bookie.erl
+++ b/src/leveled_bookie.erl
@@ -1971,10 +1971,16 @@ foldobjects_vs_foldheads_bybucket_test_() ->
     {timeout, 60, fun foldobjects_vs_foldheads_bybucket_testto/0}.
 
 foldobjects_vs_foldheads_bybucket_testto() ->
+    folder_cache_test(10),
+    folder_cache_test(100),
+    folder_cache_test(300),
+    folder_cache_test(1000).
+
+folder_cache_test(CacheSize) ->
     RootPath = reset_filestructure(),
     {ok, Bookie1} = book_start([{root_path, RootPath},
                                 {max_journalsize, 1000000},
-                                {cache_size, 500}]),
+                                {cache_size, CacheSize}]),
     ObjL1 = generate_multiple_objects(400, 1),
     ObjL2 = generate_multiple_objects(400, 1),
     % Put in all the objects with a TTL in the future