On Thu, Oct 13, 2016 at 8:48 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> As of now, the driving table for parallel query is accessed by
> parallel sequential scan which limits its usage to a certain degree.
> Parallelising index scans would further increase the usage of parallel
> query in many more cases. This patch enables the parallelism for the
> btree scans. Supporting parallel index scan for other index types
> like hash, gist, spgist can be done as separate patches.
I would like input on the method of selecting parallel workers for an
index scan. Currently, the patch selects the number of workers based
on the size of the index relation, with max_parallel_workers_per_gather
as the upper limit. This is quite similar to what we do for parallel
sequential scan, except that for parallel seq. scan we use the
parallel_workers option if the user provided it during Create Table,
as below:
Create Table .... With (parallel_workers = 4);
Is it desirable to have a similar option for parallel index scans,
and if so, what should the interface be? One possible way would be
to allow the user to provide it during Create Index, as below:
Create Index .... With (parallel_workers = 4);
If the above syntax looks sensible, then we might need to think about
what should be used for parallel index build. It seems to me that the
parallel tuple sort patch proposed by Peter G. is using the above
syntax for getting the parallel workers input from the user for
parallel index build.
Another point which needs some thought is whether it is a good idea
to use the index relation size to calculate the number of parallel
workers for an index scan. I think that ideally, for index scans, it
should be based on the number of index pages to be fetched/scanned
rather than the total size of the index.
Sent via pgsql-hackers mailing list (firstname.lastname@example.org)