node_worker_pool
Having a separate vnode_worker_pool wouldn't resolve the parallelism issue, obviously! Need a node_worker_pool instead.
parent cb5f09496f
commit 2ad1ac0baf
1 changed file with 1 addition and 1 deletion
```diff
@@ -270,7 +270,7 @@ Some notes on re-using this alternative anti-entropy mechanism within Riak:
 
 - Likewise with bitcask, the fold is currently async, with the snapshot effectively taken inside the returned async folder function (for bitcask it opens a new bitcask store in read-only mode). The snapshot could be moved outside of the async part but, unlike with leveldb and leveled snapshots, this is a relatively expensive operation, so it would block the main bitcask process in an unhealthy way. Finding a simple way of snapshotting prior to the fold and outside of the async process would therefore require more work in Bitcask.
 
-- riak_core supports vnode_worker_pools (currently only one) and riak_kv sets up a pool for folds. If riak_core were to be changed to support more than one pool, a second pool could be set up for snapped folds (i.e. where the response is {snap_async, Work, From, NewModState} as opposed to [async](https://github.com/basho/riak_core/blob/2.1.8/src/riak_core_vnode.erl#L358-#L362), the second vnode_worker_pool would be asked to fulfill this work). The second pool could have a more constrained number of concurrent workers, so these large folds could have their concurrency throttled without a timing impact on the consistency of the results across vnodes.
+- riak_core supports vnode_worker_pools (currently only one) and riak_kv sets up a pool for folds. The potential may also exist to have a node_worker_pool on each node. It may then be possible to divert snapped async work to this pool (i.e. where the response is {snap_async, Work, From, NewModState} as opposed to [async](https://github.com/basho/riak_core/blob/2.1.8/src/riak_core_vnode.erl#L358-#L362), the node_worker_pool would be asked to fulfill this work). This node_worker_pool could have a more constrained number of concurrent workers, perhaps just one, so no more than one vnode on the node would be active doing this sort of work at any one time; when that work is finished, the next vnode in the queue would pick up and commence its fold.
 
 * In Leveled a special fold currently supports Tic-Tac tree generation for indexes, and another for objects. It may be better to support this by offering a more open capability to pass different fold functions and accumulators into index folds. This could be re-used for "reporting indexes", where we want to count terms of different types rather than return all those terms via an accumulating list; e.g. an index may have a bitmap-style part, and the function would apply a wildcard mask to the bitmap and count the number of hits against each possible output.
 
```
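To make the bitcask context line above concrete, here is a minimal Erlang sketch of the two fold shapes being compared: the snapshot (a read-only re-open of the store) taken inside the returned async fun versus taken up front before returning. The module and function names are illustrative only and this is not the riak_kv backend code; it assumes the standard bitcask:open/1, bitcask:fold/3 and bitcask:close/1 calls.

```erlang
%% Illustrative sketch only - not the riak_kv_bitcask_backend code.
%% Assumes the standard bitcask:open/1, bitcask:fold/3, bitcask:close/1 API.
-module(bitcask_fold_sketch).
-export([snapshot_inside/2, snapshot_outside/2]).

%% Current shape: the read-only re-open (the "snapshot") happens inside the
%% returned fun, i.e. later, on whichever pool worker runs the fold.
snapshot_inside(Dir, FoldFun) ->
    Folder =
        fun() ->
                Ref = bitcask:open(Dir),
                try bitcask:fold(Ref, FoldFun, [])
                after bitcask:close(Ref)
                end
        end,
    {async, Folder}.

%% Alternative shape: the snapshot is taken before returning, so every vnode
%% captures state at roughly the same moment. As the note above says, this
%% open is relatively expensive for bitcask and would block the caller.
snapshot_outside(Dir, FoldFun) ->
    Ref = bitcask:open(Dir),
    Folder =
        fun() ->
                try bitcask:fold(Ref, FoldFun, [])
                after bitcask:close(Ref)
                end
        end,
    {async, Folder}.
```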
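The added line is chiefly about scheduling: one pool per node, possibly with a single worker, so that snapped folds from different vnodes queue rather than run concurrently. The sketch below shows that intent with a plain gen_server holding a queue of zero-arity work funs. It is not the riak_core_vnode_worker_pool implementation, and the wiring that would route a {snap_async, Work, From, NewModState} response from riak_core_vnode into such a pool is assumed rather than shown.

```erlang
%% Illustrative sketch only - not the riak_core worker pool implementation.
%% A node-wide, single-worker pool: only one snapped fold runs per node at
%% any one time; further work queues until the current fold finishes.
-module(node_worker_pool_sketch).
-behaviour(gen_server).

-export([start_link/0, submit/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Submit the zero-arity fold fun a vnode would hand over when it replies
%% {snap_async, Work, From, NewModState} instead of {async, ...}.
submit(WorkFun) when is_function(WorkFun, 0) ->
    gen_server:cast(?MODULE, {work, WorkFun}).

init([]) ->
    {ok, #{queue => queue:new(), busy => false}}.

handle_call(_Msg, _From, State) ->
    {reply, ok, State}.

handle_cast({work, WorkFun}, #{busy := false} = State) ->
    {noreply, run(WorkFun, State)};
handle_cast({work, WorkFun}, #{busy := true, queue := Q} = State) ->
    {noreply, State#{queue := queue:in(WorkFun, Q)}}.

handle_info({'DOWN', _Ref, process, _Pid, _Reason}, #{queue := Q} = State) ->
    %% The current fold finished (or crashed); start the next one, if any.
    case queue:out(Q) of
        {{value, Next}, Q2} -> {noreply, run(Next, State#{queue := Q2})};
        {empty, _Q2}        -> {noreply, State#{busy := false}}
    end.

%% Run one fold in a monitored process so completion is observed via 'DOWN'.
run(WorkFun, State) ->
    {_Pid, _Ref} = spawn_monitor(WorkFun),
    State#{busy := true}.
```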
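For the final context line, a small sketch of the kind of fold function and accumulator that a more open index-fold capability could accept: instead of accumulating every matching term into a list, it masks the bitmap part of each index term and counts hits per masked value. The {Bucket, Key, Bitmap} term layout and the stand-in lists:foldl are assumptions for illustration, not the leveled index-fold API.

```erlang
%% Illustrative sketch only - the term layout and the stand-in lists:foldl
%% are assumptions; in practice the FoldFun/Acc pair would be passed into
%% the store's index fold rather than applied to an in-memory list.
-module(reporting_index_sketch).
-export([count_by_mask/2, example/0]).

%% Count index entries by the value of their bitmap part under Mask,
%% rather than returning every matching term in an accumulating list.
count_by_mask(IndexTerms, Mask) ->
    FoldFun =
        fun({_Bucket, _Key, Bitmap}, Counts) ->
                Hit = Bitmap band Mask,
                maps:update_with(Hit, fun(N) -> N + 1 end, 1, Counts)
        end,
    lists:foldl(FoldFun, #{}, IndexTerms).

example() ->
    Terms = [{<<"b">>, <<"k1">>, 2#1010},
             {<<"b">>, <<"k2">>, 2#0111},
             {<<"b">>, <<"k3">>, 2#1010}],
    %% Mask off all but the two low bits: returns #{2#10 => 2, 2#11 => 1}.
    count_by_mask(Terms, 2#0011).
```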