Re: [HACKERS] why not parallel seq scan for slow functions
On Thu, Nov 9, 2017 at 3:47 AM, Amit Kapila wrote:
> I think I understood your concern after some offlist discussion and it
> is primarily due to the inheritance related check which can skip the
> generation of gather paths when it shouldn't.  So what might fit
> better here is a straight check on the number of base rels such that
> we allow generating a gather path in set_rel_pathlist if there are
> multiple baserels involved.  I have used all_baserels which I think
> will work better for this purpose.

Yes, that looks a lot more likely to be correct.  Let's see what Tom thinks.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Nov 8, 2017 at 6:48 PM, Robert Haas wrote:
> On Wed, Nov 8, 2017 at 7:26 AM, Amit Kapila wrote:
>> We do want to generate it later when there isn't inheritance involved,
>> but only if there is a single rel involved (simple_rel_array_size
>> <= 2).  The rule is something like this: we will generate the gather
>> paths at this stage only if there are more than two rels involved and
>> there isn't inheritance involved.
>
> Why is that the correct rule?
>
> Sorry if I'm being dense here.  I would have thought we'd want to skip
> it for the topmost scan/join rel regardless of the presence or absence
> of inheritance.

I think I understood your concern after some offlist discussion, and it
is primarily due to the inheritance related check, which can skip the
generation of gather paths when it shouldn't.  So what might fit better
here is a straight check on the number of base rels, such that we allow
generating a gather path in set_rel_pathlist if there are multiple
baserels involved.  I have used all_baserels, which I think will work
better for this purpose.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment: parallel_paths_include_tlist_cost_v6.patch
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Nov 8, 2017 at 7:26 AM, Amit Kapila wrote:
> We do want to generate it later when there isn't inheritance involved,
> but only if there is a single rel involved (simple_rel_array_size
> <= 2).  The rule is something like this: we will generate the gather
> paths at this stage only if there are more than two rels involved and
> there isn't inheritance involved.

Why is that the correct rule?

Sorry if I'm being dense here.  I would have thought we'd want to skip
it for the topmost scan/join rel regardless of the presence or absence
of inheritance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Nov 8, 2017 at 4:34 PM, Robert Haas wrote:
> On Tue, Nov 7, 2017 at 9:41 PM, Amit Kapila wrote:
>> This is required to prohibit generating a gather path for the top rel in
>> case of inheritance (Append node) at this place (we want to generate
>> it later when the scan/join target is available).
>
> OK, but why don't we want to generate it later when there isn't
> inheritance involved?

We do want to generate it later when there isn't inheritance involved,
but only if there is a single rel involved (simple_rel_array_size <= 2).
The rule is something like this: we will generate the gather paths at
this stage only if there are more than two rels involved and there isn't
inheritance involved.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Nov 7, 2017 at 9:41 PM, Amit Kapila wrote:
> This is required to prohibit generating a gather path for the top rel in
> case of inheritance (Append node) at this place (we want to generate
> it later when the scan/join target is available).

OK, but why don't we want to generate it later when there isn't
inheritance involved?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Nov 8, 2017 at 2:51 AM, Robert Haas wrote:
> On Mon, Nov 6, 2017 at 9:57 PM, Amit Kapila wrote:
>>> Also, even if inheritance is used, we might still be the
>>> topmost scan/join target.
>>
>> Sure, but in that case, it won't generate the gather path here (due to
>> this part of the check, "!root->append_rel_list").  I am not sure
>> whether I have understood the second part of your question, so if my
>> answer appears inadequate, then can you provide more details on what
>> you are concerned about?
>
> I don't know why the question of whether root->append_rel_list is empty
> is the relevant thing here for deciding whether to generate gather paths
> at this point.

This is required to prohibit generating a gather path for the top rel in
case of inheritance (Append node) at this place (we want to generate it
later when the scan/join target is available).  For such a case, the
reloptkind will be RELOPT_BASEREL and simple_rel_array_size will be
greater than two, as it includes the child rels as well.  So, the check
on root->append_rel_list will prohibit generating a gather path for such
a rel.  Now, for all the child rels of the Append, the reloptkind will
be RELOPT_OTHER_MEMBER_REL, so we won't generate a gather path here
because of the first part of the check (rel->reloptkind ==
RELOPT_BASEREL).

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Mon, Nov 6, 2017 at 9:57 PM, Amit Kapila wrote:
>> Well, I suppose that test will fire for a baserel when the total
>> number of baserels is at least 3 and there's no inheritance involved.
>> But if there are 2 baserels, we're still not the topmost scan/join
>> target.
>
> No, if there are 2 baserels, then simple_rel_array_size will be 3.
> The simple_rel_array_size is always the "number of relations" plus
> "one".  See setup_simple_rel_arrays.

Oh, wow.  That's surprising.

>> Also, even if inheritance is used, we might still be the
>> topmost scan/join target.
>
> Sure, but in that case, it won't generate the gather path here (due to
> this part of the check, "!root->append_rel_list").  I am not sure
> whether I have understood the second part of your question, so if my
> answer appears inadequate, then can you provide more details on what
> you are concerned about?

I don't know why the question of whether root->append_rel_list is empty
is the relevant thing here for deciding whether to generate gather paths
at this point.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
On Mon, Nov 6, 2017 at 7:05 PM, Robert Haas wrote:
> On Mon, Nov 6, 2017 at 11:20 AM, Amit Kapila wrote:
>> On Mon, Nov 6, 2017 at 3:51 AM, Robert Haas wrote:
>>> This looks like it's on the right track to me.  I hope Tom will look
>>> into it, but if he doesn't I may try to get it committed myself.
>>>
>>> -        if (rel->reloptkind == RELOPT_BASEREL)
>>> -            generate_gather_paths(root, rel);
>>> +        if (rel->reloptkind == RELOPT_BASEREL &&
>>> +            root->simple_rel_array_size > 2 &&
>>> +            !root->append_rel_list)
>>>
>>> This test doesn't look correct to me.  Actually, it doesn't look
>>> anywhere close to correct to me.  So, one of us is very confused...
>>> not sure whether it's you or me.
>>
>> It is quite possible that I haven't got it right, but it shouldn't be
>> completely bogus, as it passes the regression tests and some manual
>> verification.  Can you explain what your concern about this test is?
>
> Well, I suppose that test will fire for a baserel when the total
> number of baserels is at least 3 and there's no inheritance involved.
> But if there are 2 baserels, we're still not the topmost scan/join
> target.

No, if there are 2 baserels, then simple_rel_array_size will be 3.
simple_rel_array_size is always the "number of relations" plus "one".
See setup_simple_rel_arrays.

> Also, even if inheritance is used, we might still be the
> topmost scan/join target.

Sure, but in that case, it won't generate the gather path here (due to
this part of the check, "!root->append_rel_list").  I am not sure
whether I have understood the second part of your question, so if my
answer appears inadequate, can you provide more details on what you are
concerned about?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Mon, Nov 6, 2017 at 11:20 AM, Amit Kapila wrote:
> On Mon, Nov 6, 2017 at 3:51 AM, Robert Haas wrote:
>> This looks like it's on the right track to me.  I hope Tom will look
>> into it, but if he doesn't I may try to get it committed myself.
>>
>> -        if (rel->reloptkind == RELOPT_BASEREL)
>> -            generate_gather_paths(root, rel);
>> +        if (rel->reloptkind == RELOPT_BASEREL &&
>> +            root->simple_rel_array_size > 2 &&
>> +            !root->append_rel_list)
>>
>> This test doesn't look correct to me.  Actually, it doesn't look
>> anywhere close to correct to me.  So, one of us is very confused...
>> not sure whether it's you or me.
>
> It is quite possible that I haven't got it right, but it shouldn't be
> completely bogus, as it passes the regression tests and some manual
> verification.  Can you explain what your concern about this test is?

Well, I suppose that test will fire for a baserel when the total
number of baserels is at least 3 and there's no inheritance involved.
But if there are 2 baserels, we're still not the topmost scan/join
target.  Also, even if inheritance is used, we might still be the
topmost scan/join target.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
On Mon, Nov 6, 2017 at 3:51 AM, Robert Haas wrote:
> On Sun, Nov 5, 2017 at 12:57 AM, Amit Kapila wrote:
>> Thanks for the confirmation.  Find rebased patch attached.
>
> This looks like it's on the right track to me.  I hope Tom will look
> into it, but if he doesn't I may try to get it committed myself.
>
> -        if (rel->reloptkind == RELOPT_BASEREL)
> -            generate_gather_paths(root, rel);
> +        if (rel->reloptkind == RELOPT_BASEREL &&
> +            root->simple_rel_array_size > 2 &&
> +            !root->append_rel_list)
>
> This test doesn't look correct to me.  Actually, it doesn't look
> anywhere close to correct to me.  So, one of us is very confused...
> not sure whether it's you or me.

It is quite possible that I haven't got it right, but it shouldn't be
completely bogus, as it passes the regression tests and some manual
verification.  Can you explain what your concern about this test is?

>     simple_gather_path = (Path *)
>         create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
>                            NULL, NULL);
> +
> +    /* Add projection step if needed */
> +    if (target && simple_gather_path->pathtarget != target)
> +        simple_gather_path = apply_projection_to_path(root, rel,
> +                                                      simple_gather_path,
> +                                                      target);
>
> Instead of using apply_projection_to_path, why not pass the correct
> reltarget to create_gather_path?

We need to push it to Gather's subpath, as is done in
apply_projection_to_path, and then we have to cost it accordingly.  I
think if we don't use apply_projection_to_path, then we might end up
with much of the code similar to it in generate_gather_paths.  In fact,
I tried something similar to what you are suggesting in the first
version of the patch [1], and it didn't turn out to be clean.  Also, I
think we already do something similar in create_ordered_paths.

> +    /* Set or update cheapest_total_path and related fields */
> +    set_cheapest(current_rel);
>
> I wonder if it's really OK to call set_cheapest() a second time for
> the same relation...

I think if we want, we can avoid it by checking whether we have
generated any gather path for the relation (basically, check if it has
a partial path list).  Another idea could be that we consider the
generation of gather/gathermerge for the top-level scan/join relation
as a separate step and generate a new kind of upper rel for it, which
will be mostly dummy but will have paths for gather/gathermerge.

[1] - https://www.postgresql.org/message-id/CAA4eK1JUvL9WS9z%3D5hjSuSMNCo3TdBxFa0pA%3DE__E%3Dp6iUffUQ%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
Robert Haas writes:
> This looks like it's on the right track to me.  I hope Tom will look
> into it, but if he doesn't I may try to get it committed myself.

I do plan to take a look at it during this CF.

> +    /* Set or update cheapest_total_path and related fields */
> +    set_cheapest(current_rel);

> I wonder if it's really OK to call set_cheapest() a second time for
> the same relation...

It's safe enough; we do it in some places already when converting a
relation to dummy.  But having to do that in a normal code path suggests
that something's not right about the design ...

			regards, tom lane
Re: [HACKERS] why not parallel seq scan for slow functions
On Sun, Nov 5, 2017 at 12:57 AM, Amit Kapila wrote:
> Thanks for the confirmation.  Find rebased patch attached.

This looks like it's on the right track to me.  I hope Tom will look
into it, but if he doesn't I may try to get it committed myself.

-        if (rel->reloptkind == RELOPT_BASEREL)
-            generate_gather_paths(root, rel);
+        if (rel->reloptkind == RELOPT_BASEREL &&
+            root->simple_rel_array_size > 2 &&
+            !root->append_rel_list)

This test doesn't look correct to me.  Actually, it doesn't look
anywhere close to correct to me.  So, one of us is very confused...
not sure whether it's you or me.

    simple_gather_path = (Path *)
        create_gather_path(root, rel, cheapest_partial_path, rel->reltarget,
                           NULL, NULL);
+
+    /* Add projection step if needed */
+    if (target && simple_gather_path->pathtarget != target)
+        simple_gather_path = apply_projection_to_path(root, rel,
+                                                      simple_gather_path,
+                                                      target);

Instead of using apply_projection_to_path, why not pass the correct
reltarget to create_gather_path?

+    /* Set or update cheapest_total_path and related fields */
+    set_cheapest(current_rel);

I wonder if it's really OK to call set_cheapest() a second time for
the same relation...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
On Thu, Sep 21, 2017 at 2:35 AM, Jeff Janes wrote:
> On Tue, Sep 19, 2017 at 9:15 PM, Amit Kapila wrote:
>> On Wed, Sep 20, 2017 at 3:05 AM, Jeff Janes wrote:
>>> On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro wrote:
>>>> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila wrote:
>>>>> The attached patch fixes both the review comments as discussed above.
>>>
>>> That should be fixed by turning costs off in the explain, as is the
>>> tradition.
>>
>> Right.  BTW, did you get a chance to run the original test (for which
>> you have reported the problem) with this patch?
>
> Yes, this patch makes it use a parallel scan, with great improvement.

Thanks for the confirmation.  Find rebased patch attached.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment: parallel_paths_include_tlist_cost_v5.patch
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Sep 19, 2017 at 9:15 PM, Amit Kapila wrote:
> On Wed, Sep 20, 2017 at 3:05 AM, Jeff Janes wrote:
>> On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro wrote:
>>> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila wrote:
>>>> The attached patch fixes both the review comments as discussed above.
>>
>> That should be fixed by turning costs off in the explain, as is the
>> tradition.
>
> Right.  BTW, did you get a chance to run the original test (for which
> you have reported the problem) with this patch?

Yes, this patch makes it use a parallel scan, with great improvement.
No more having to \copy the data out, then run GNU split, then run my
perl or python program with GNU parallel on each file.  Instead I just
have to put a pl/perl wrapper around the function.  (Next up: how to put
a "create temp table alsdkfjaslfdj as" in front of it and keep it
running in parallel.)

Thanks,

Jeff
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Sep 20, 2017 at 3:05 AM, Jeff Janes wrote:
> On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro wrote:
>> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila wrote:
>>> The attached patch fixes both the review comments as discussed above.
>
> That should be fixed by turning costs off in the explain, as is the
> tradition.

Right.  BTW, did you get a chance to run the original test (for which
you have reported the problem) with this patch?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro wrote:
> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila wrote:
>> The attached patch fixes both the review comments as discussed above.
>
> This cost stuff looks unstable:
>
>      test select_parallel          ... FAILED
>
> !  Gather  (cost=0.00..623882.94 rows=9976 width=8)
>      Workers Planned: 4
> !    ->  Parallel Seq Scan on tenk1  (cost=0.00..623882.94 rows=2494 width=8)
>  (3 rows)
>
>  drop function costly_func(var1 integer);
> --- 112,120 ----
>  explain select ten, costly_func(ten) from tenk1;
>                                 QUERY PLAN
> ----------------------------------------------------------------------------
> !  Gather  (cost=0.00..625383.00 rows=1 width=8)
>      Workers Planned: 4
> !    ->  Parallel Seq Scan on tenk1  (cost=0.00..625383.00 rows=2500 width=8)
>  (3 rows)

That should be fixed by turning costs off in the explain, as is the
tradition.  See attached.

Cheers,

Jeff

Attachment: parallel_paths_include_tlist_cost_v4.patch
Re: [HACKERS] why not parallel seq scan for slow functions
On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila wrote:
> The attached patch fixes both the review comments as discussed above.

This cost stuff looks unstable:

     test select_parallel          ... FAILED

!  Gather  (cost=0.00..623882.94 rows=9976 width=8)
     Workers Planned: 4
!    ->  Parallel Seq Scan on tenk1  (cost=0.00..623882.94 rows=2494 width=8)
 (3 rows)

 drop function costly_func(var1 integer);
--- 112,120 ----
 explain select ten, costly_func(ten) from tenk1;
                                QUERY PLAN
----------------------------------------------------------------------------
!  Gather  (cost=0.00..625383.00 rows=1 width=8)
     Workers Planned: 4
!    ->  Parallel Seq Scan on tenk1  (cost=0.00..625383.00 rows=2500 width=8)
 (3 rows)

 drop function costly_func(var1 integer);

From https://travis-ci.org/postgresql-cfbot/postgresql/builds/277376953 .

-- 
Thomas Munro
http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Sep 13, 2017 at 9:39 AM, Amit Kapila wrote:
> On Tue, Sep 12, 2017 at 5:47 PM, Amit Khandekar wrote:
>> On 5 September 2017 at 14:04, Amit Kapila wrote:
>>
>> I started with a quick review ... a couple of comments below :
>>
>> - * If this is a baserel, consider gathering any partial paths we may have
>> - * created for it.  (If we tried to gather inheritance children, we could
>> + * If this is a baserel and not the only rel, consider gathering any
>> + * partial paths we may have created for it.  (If we tried to gather
>>
>>   /* Create GatherPaths for any useful partial paths for rel */
>> - generate_gather_paths(root, rel);
>> + if (lev < levels_needed)
>> +     generate_gather_paths(root, rel, NULL);
>>
>> I think at the above two places, and maybe in other places also, it's
>> better to mention the reason why we should generate the gather path
>> only if it's not the only rel.
>
> I think the comment you are looking for is present where we are calling
> generate_gather_paths in grouping_planner.  Instead of adding the same
> or a similar comment at multiple places, how about if we just say
> something like "See grouping_planner, where we generate gather paths"
> at all other places?
>
>> - if (rel->reloptkind == RELOPT_BASEREL)
>> -     generate_gather_paths(root, rel);
>> + if (rel->reloptkind == RELOPT_BASEREL &&
>>       root->simple_rel_array_size > 2)
>> +     generate_gather_paths(root, rel, NULL);
>>
>> Above, in case it's a partitioned table, root->simple_rel_array_size
>> includes the child rels.  So even if it's a simple select without a
>> join rel, simple_rel_array_size would be > 2, and so a gather path
>> would be generated here for the root table, and again in
>> grouping_planner().
>
> Yeah, that could be a problem.  I think we should ensure that there is
> no append rel list by checking root->append_rel_list.  Can you think
> of a better way to handle it?

The attached patch fixes both the review comments as discussed above.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment: parallel_paths_include_tlist_cost_v3.patch
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Sep 12, 2017 at 5:47 PM, Amit Khandekar wrote:
> On 5 September 2017 at 14:04, Amit Kapila wrote:
>
> I started with a quick review ... a couple of comments below :
>
> - * If this is a baserel, consider gathering any partial paths we may have
> - * created for it.  (If we tried to gather inheritance children, we could
> + * If this is a baserel and not the only rel, consider gathering any
> + * partial paths we may have created for it.  (If we tried to gather
>
>   /* Create GatherPaths for any useful partial paths for rel */
> - generate_gather_paths(root, rel);
> + if (lev < levels_needed)
> +     generate_gather_paths(root, rel, NULL);
>
> I think at the above two places, and maybe in other places also, it's
> better to mention the reason why we should generate the gather path
> only if it's not the only rel.

I think the comment you are looking for is present where we are calling
generate_gather_paths in grouping_planner.  Instead of adding the same
or a similar comment at multiple places, how about if we just say
something like "See grouping_planner, where we generate gather paths"
at all other places?

> - if (rel->reloptkind == RELOPT_BASEREL)
> -     generate_gather_paths(root, rel);
> + if (rel->reloptkind == RELOPT_BASEREL &&
>       root->simple_rel_array_size > 2)
> +     generate_gather_paths(root, rel, NULL);
>
> Above, in case it's a partitioned table, root->simple_rel_array_size
> includes the child rels.  So even if it's a simple select without a
> join rel, simple_rel_array_size would be > 2, and so a gather path
> would be generated here for the root table, and again in
> grouping_planner().

Yeah, that could be a problem.  I think we should ensure that there is
no append rel list by checking root->append_rel_list.  Can you think
of a better way to handle it?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On 5 September 2017 at 14:04, Amit Kapila wrote:
> On Fri, Aug 25, 2017 at 10:08 PM, Robert Haas wrote:
>> On Mon, Aug 21, 2017 at 5:08 AM, Amit Kapila wrote:
>>> (b) I have changed the costing of gather path for path target in
>>> generate_gather_paths which I am not sure is the best way.  Another
>>> possibility could have been that I change the code in
>>> apply_projection_to_path as done in the previous patch and just call
>>> it from generate_gather_paths.  I have not done that because of your
>>> comment above thread ("is probably unsafe, because it might confuse
>>> code that reaches the modified-in-place path through some other
>>> pointer (e.g. code which expects the RelOptInfo's paths to still be
>>> sorted by cost).").  It is not clear to me what exactly is bothering
>>> you if we directly change costing in apply_projection_to_path.
>>
>> The point I was trying to make is that if you retroactively change the
>> cost of a path after you've already done add_path(), it's too late to
>> change your mind about whether to keep the path.  At least according
>> to my current understanding, that's the root of this problem in the
>> first place.  On top of that, add_path() and other routines that deal
>> with RelOptInfo path lists expect surviving paths to be ordered by
>> descending cost; if you frob the cost, they might not be properly
>> ordered any more.
>
> Okay, now I understand your point, but I think we already change the
> cost of paths in apply_projection_to_path, which is done after add_path
> for top-level scan/join paths.  I think this matters a lot in case of
> Gather because the cost of computing the target list can be divided
> among workers.  I have changed the patch such that parallel paths for
> the top-level scan/join node will be generated after the pathtarget is
> ready.  I had kept the costing of path targets local to
> apply_projection_to_path() as that makes the patch simple.

I started with a quick review ... a couple of comments below :

- * If this is a baserel, consider gathering any partial paths we may have
- * created for it.  (If we tried to gather inheritance children, we could
+ * If this is a baserel and not the only rel, consider gathering any
+ * partial paths we may have created for it.  (If we tried to gather

  /* Create GatherPaths for any useful partial paths for rel */
- generate_gather_paths(root, rel);
+ if (lev < levels_needed)
+     generate_gather_paths(root, rel, NULL);

I think at the above two places, and maybe in other places also, it's
better to mention the reason why we should generate the gather path
only if it's not the only rel.

--

- if (rel->reloptkind == RELOPT_BASEREL)
-     generate_gather_paths(root, rel);
+ if (rel->reloptkind == RELOPT_BASEREL &&
      root->simple_rel_array_size > 2)
+     generate_gather_paths(root, rel, NULL);

Above, in case it's a partitioned table, root->simple_rel_array_size
includes the child rels.  So even if it's a simple select without a
join rel, simple_rel_array_size would be > 2, and so a gather path
would be generated here for the root table, and again in
grouping_planner().

--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
Re: [HACKERS] why not parallel seq scan for slow functions
Robert Haas writes:
> On Wed, Sep 6, 2017 at 3:41 PM, Tom Lane wrote:
>> I'm not entirely following.  I thought that add_path was set up to treat
>> "can be parallelized" as an independent dimension of merit, so that
>> parallel paths would always survive.

> Here, the Gather path is not parallel-safe, but rather
> parallel-restricted:

Ah, right, the problem is with the Gather, not its sub-paths.

>> Might be a tad messy to rearrange things that way.

> Why do you think I wanted you to do it?  :-)

I'll think about it.

			regards, tom lane
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Sep 6, 2017 at 3:41 PM, Tom Lane wrote:
> Robert Haas writes:
>> In particular, as Jeff and Amit point out, it
>> may well be that (a) before apply_projection_to_path(), the cheapest
>> plan is non-parallel and (b) after apply_projection_to_path(), the
>> cheapest plan would be a Gather plan, except that it's too late
>> because we've already thrown that path out.
>
> I'm not entirely following.  I thought that add_path was set up to treat
> "can be parallelized" as an independent dimension of merit, so that
> parallel paths would always survive.

It treats parallel-safety as an independent dimension of merit; a
parallel-safe path is more meritorious than one of equal cost which is
not.  We need that because, for example, forming a partial path for a
join means joining a partial path to a parallel-safe path.  But that
doesn't help us here; that's to make sure we can build the necessary
stuff *below* the Gather.  IOW, if we threw away parallel-safe paths
because there was a cheaper parallel-restricted path, we might be
unable to build a partial path for the join *at all*.

Here, the Gather path is not parallel-safe, but rather
parallel-restricted: it's OK for it to exist in a plan that uses
parallelism (duh), but it can't be nested under another Gather (also
duh, kinda).  So before accounting for the differing projection cost,
the Gather path is doubly inferior: it is more expensive AND not
parallel-safe, whereas the competing non-parallel plan is both cheaper
AND parallel-safe.  After applying the expensive target list, the
parallel-safe plan gets a lot more expensive, but the Gather path gets
more expensive to a lesser degree, because the projection step ends up
below the Gather and thus happens in parallel.  So now the Gather plan,
still a loser on parallel-safety, is a winner on total cost, and thus
ought to have been retained and, in fact, ought to have won.  Instead,
we threw it out too early.

>> What we ought to do, I think, is avoid generating gather paths until
>> after we've applied the target list (and the associated costing
>> changes) to both the regular path list and the partial path list.
>
> Might be a tad messy to rearrange things that way.

Why do you think I wanted you to do it?  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
Robert Haas writes: > In particular, as Jeff and Amit point out, it > may well be that (a) before apply_projection_to_path(), the cheapest > plan is non-parallel and (b) after apply_projection_to_path(), the > cheapest plan would be a Gather plan, except that it's too late > because we've already thrown that path out. I'm not entirely following. I thought that add_path was set up to treat "can be parallelized" as an independent dimension of merit, so that parallel paths would always survive. > What we ought to do, I think, is avoid generating gather paths until > after we've applied the target list (and the associated costing > changes) to both the regular path list and the partial path list. Might be a tad messy to rearrange things that way. regards, tom lane
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Sep 6, 2017 at 3:18 PM, Tom Lane wrote: > Robert Haas writes: >> On Wed, Sep 6, 2017 at 1:47 PM, Tom Lane wrote: >>> If somebody's applying apply_projection_to_path to a path that's already >>> been add_path'd, that's a violation of the documented restriction. > >> /me is confused. Isn't that exactly what grouping_planner() is doing, >> and has done ever since your original pathification commit >> (3fc6e2d7f5b652b417fa6937c34de2438d60fa9f)? It's iterating over >> current_rel->pathlist, so surely everything in there has been >> add_path()'d. > > I think the assumption there is that we no longer care about validity of > the input Relation, since we won't be looking at it any more (and > certainly not adding more paths to it). If there's some reason why > that's not true, then maybe grouping_planner has a bug there. Right, that's sorta what I assumed. But I think that thinking is flawed in the face of parallel query, because of the fact that apply_projection_to_path() pushes down target list projection below Gather when possible. In particular, as Jeff and Amit point out, it may well be that (a) before apply_projection_to_path(), the cheapest plan is non-parallel and (b) after apply_projection_to_path(), the cheapest plan would be a Gather plan, except that it's too late because we've already thrown that path out. What we ought to do, I think, is avoid generating gather paths until after we've applied the target list (and the associated costing changes) to both the regular path list and the partial path list. Then the cost comparison is apples-to-apples. The use of apply_projection_to_path() on every path in the pathlist would be fine if it were adjusting all the costs by a uniform amount, but it isn't. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
Robert Haas writes: > On Wed, Sep 6, 2017 at 1:47 PM, Tom Lane wrote: >> If somebody's applying apply_projection_to_path to a path that's already >> been add_path'd, that's a violation of the documented restriction. > /me is confused. Isn't that exactly what grouping_planner() is doing, > and has done ever since your original pathification commit > (3fc6e2d7f5b652b417fa6937c34de2438d60fa9f)? It's iterating over > current_rel->pathlist, so surely everything in there has been > add_path()'d. I think the assumption there is that we no longer care about validity of the input Relation, since we won't be looking at it any more (and certainly not adding more paths to it). If there's some reason why that's not true, then maybe grouping_planner has a bug there. regards, tom lane
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Sep 6, 2017 at 1:47 PM, Tom Lane wrote: > Robert Haas writes: >> On Tue, Sep 5, 2017 at 4:34 AM, Amit Kapila wrote: >>> Okay, now I understand your point, but I think we already change the >>> cost of paths in apply_projection_to_path which is done after add_path >>> for top level scan/join paths. > >> Yeah. I think that's a nasty hack, and I think it's Tom's fault. :-) > > Yeah, and it's also documented: > > * This has the same net effect as create_projection_path(), except that if > * a separate Result plan node isn't needed, we just replace the given path's > * pathtarget with the desired one. This must be used only when the caller > * knows that the given path isn't referenced elsewhere and so can be modified > * in-place. > > If somebody's applying apply_projection_to_path to a path that's already > been add_path'd, that's a violation of the documented restriction. /me is confused. Isn't that exactly what grouping_planner() is doing, and has done ever since your original pathification commit (3fc6e2d7f5b652b417fa6937c34de2438d60fa9f)? It's iterating over current_rel->pathlist, so surely everything in there has been add_path()'d. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
Robert Haas writes: > On Tue, Sep 5, 2017 at 4:34 AM, Amit Kapila wrote: >> Okay, now I understand your point, but I think we already change the >> cost of paths in apply_projection_to_path which is done after add_path >> for top level scan/join paths. > Yeah. I think that's a nasty hack, and I think it's Tom's fault. :-) Yeah, and it's also documented: * This has the same net effect as create_projection_path(), except that if * a separate Result plan node isn't needed, we just replace the given path's * pathtarget with the desired one. This must be used only when the caller * knows that the given path isn't referenced elsewhere and so can be modified * in-place. If somebody's applying apply_projection_to_path to a path that's already been add_path'd, that's a violation of the documented restriction. It might be that we should just get rid of apply_projection_to_path and use create_projection_path, which is less mistake-prone at the cost of manufacturing another level of Path node. Now that that has the dummypp flag, it really shouldn't make any difference in terms of the accuracy of the cost estimates. > I'd feel a lot happier if Tom were to decide how this ought to be > fixed, because - in spite of some modifications by various parallel > query code - this is basically all his design and mostly his code. I can take a look, but not right away. regards, tom lane
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Sep 5, 2017 at 4:34 AM, Amit Kapila wrote: > Okay, now I understand your point, but I think we already change the > cost of paths in apply_projection_to_path which is done after add_path > for top level scan/join paths. Yeah. I think that's a nasty hack, and I think it's Tom's fault. :-) It's used in various places with comments like this: /* * The path might not return exactly what we want, so fix that. (We * assume that this won't change any conclusions about which was the * cheapest path.) */ And in another place: * In principle we should re-run set_cheapest() here to identify the * cheapest path, but it seems unlikely that adding the same tlist * eval costs to all the paths would change that, so we don't bother. I think these assumptions were a little shaky even before parallel query came along, but they're now outright false, because we're not adding the *same* tlist eval costs to all paths any more. The parallel paths are getting smaller costs. That probably doesn't matter much if the expressions in question are things like a + b, but when as in Jeff's example it's slow(a), then it matters a lot. I'd feel a lot happier if Tom were to decide how this ought to be fixed, because - in spite of some modifications by various parallel query code - this is basically all his design and mostly his code. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
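[Editor's illustration] A rough arithmetic sketch (all numbers invented, not actual PostgreSQL cost formulas) of why adding the same target list does not shift every path's cost by the same amount once parallelism is in play:

```python
# Serial plan: the whole target-list evaluation cost lands on one process.
def serial_total(scan_cost, rows, tlist_cost_per_row):
    return scan_cost + rows * tlist_cost_per_row

# Gather plan: the projection runs below the Gather, so its cost is
# divided among the workers; the Gather pays a per-tuple transfer cost.
def gather_total(partial_scan_cost, rows, tlist_cost_per_row,
                 workers, tuple_cost=0.1):
    per_worker = (partial_scan_cost + rows * tlist_cost_per_row) / workers
    return per_worker + rows * tuple_cost

# Cheap tlist (something like a + b): the serial plan stays cheaper.
assert serial_total(1000, 10000, 0.01) < gather_total(1000, 10000, 0.01, 4)
# Costly tlist (something like slow(a)): the Gather plan becomes cheaper,
# so the two plans' costs grow by very different amounts.
assert serial_total(1000, 10000, 10.0) > gather_total(1000, 10000, 10.0, 4)
```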
Re: [HACKERS] why not parallel seq scan for slow functions
On Fri, Aug 25, 2017 at 10:08 PM, Robert Haas wrote: > On Mon, Aug 21, 2017 at 5:08 AM, Amit Kapila wrote: >> (b) I have changed the costing of gather path for path target in >> generate_gather_paths which I am not sure is the best way. Another >> possibility could have been that I change the code in >> apply_projection_to_path as done in the previous patch and just call >> it from generate_gather_paths. I have not done that because of your >> comment above thread ("is probably unsafe, because it might confuse >> code that reaches the modified-in-place path through some other >> pointer (e.g. code which expects the RelOptInfo's paths to still be >> sorted by cost)."). It is not clear to me what exactly is bothering >> you if we directly change costing in apply_projection_to_path. > > The point I was trying to make is that if you retroactively change the > cost of a path after you've already done add_path(), it's too late to > change your mind about whether to keep the path. At least according > to my current understanding, that's the root of this problem in the > first place. On top of that, add_path() and other routines that deal > with RelOptInfo path lists expect surviving paths to be ordered by > descending cost; if you frob the cost, they might not be properly > ordered any more. > Okay, now I understand your point, but I think we already change the cost of paths in apply_projection_to_path which is done after add_path for top level scan/join paths. I think this matters a lot in case of Gather because the cost of computing target list can be divided among workers. I have changed the patch such that parallel paths for top level scan/join node will be generated after pathtarget is ready. I had kept the costing of path targets local to apply_projection_to_path() as that makes the patch simple. -- With Regards, Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com parallel_paths_include_tlist_cost_v2.patch Description: Binary data
Re: [HACKERS] why not parallel seq scan for slow functions
Robert Haas writes: > The point I was trying to make is that if you retroactively change the > cost of a path after you've already done add_path(), it's too late to > change your mind about whether to keep the path. At least according > to my current understanding, that's the root of this problem in the > first place. On top of that, add_path() and other routines that deal > with RelOptInfo path lists expect surviving paths to be ordered by > descending cost; if you frob the cost, they might not be properly > ordered any more. Hadn't been paying attention to this thread, but I happened to notice Robert's comment here, and I strongly agree: it is *not* cool to be changing a path's cost (or, really, anything else about it) after it's already been given to add_path. add_path has already made irreversible choices on the basis of what it was given. regards, tom lane
Re: [HACKERS] why not parallel seq scan for slow functions
On Mon, Aug 21, 2017 at 5:08 AM, Amit Kapila wrote: > Thanks for acknowledging the idea. I have written a patch which > implements the above idea. At this stage, it is merely to move the > discussion forward. Few things which I am not entirely happy about > this patch are: > > (a) To skip generating gather path for top level scan node, I have > used the number of relations which has RelOptInfo, basically > simple_rel_array_size. Is there any problem with it or do you see any > better way? I'm not sure. > (b) I have changed the costing of gather path for path target in > generate_gather_paths which I am not sure is the best way. Another > possibility could have been that I change the code in > apply_projection_to_path as done in the previous patch and just call > it from generate_gather_paths. I have not done that because of your > comment above thread ("is probably unsafe, because it might confuse > code that reaches the modified-in-place path through some other > pointer (e.g. code which expects the RelOptInfo's paths to still be > sorted by cost)."). It is not clear to me what exactly is bothering > you if we directly change costing in apply_projection_to_path. The point I was trying to make is that if you retroactively change the cost of a path after you've already done add_path(), it's too late to change your mind about whether to keep the path. At least according to my current understanding, that's the root of this problem in the first place. On top of that, add_path() and other routines that deal with RelOptInfo path lists expect surviving paths to be ordered by descending cost; if you frob the cost, they might not be properly ordered any more. I don't really have time right now to give this patch the attention which it deserves; I can possibly come back to it at some future point, or maybe somebody else will have time to give it a look.
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
On 21 August 2017 at 11:42, Amit Kapila wrote: >> or of 2) >> treating cost = speed, so we actually reduce the cost of a parallel >> plan rather than increasing it so it is more likely to be picked. >> > > Yeah, this is what is being currently followed for costing of parallel > plans and this patch also tries to follow the same. OK, I understand this better now, thanks. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] why not parallel seq scan for slow functions
On Mon, Aug 21, 2017 at 3:15 PM, Simon Riggs wrote: > On 21 August 2017 at 10:08, Amit Kapila wrote: > >> Thoughts? > > This seems like a very basic problem for parallel queries. > > The problem seems to be that we are calculating the cost of the plan > rather than the speed of the plan. > > Clearly, a parallel task has a higher overall cost but a lower time to > complete if resources are available. > > We have the choice of 1) adding a new optimizable quantity, > I think this has the potential of making costing decisions difficult. I mean to say, if we include any such new parameter, then we need to consider that along with cost as we can't completely ignore the cost. > or of 2) > treating cost = speed, so we actually reduce the cost of a parallel > plan rather than increasing it so it is more likely to be picked. > Yeah, this is what is being currently followed for costing of parallel plans and this patch also tries to follow the same. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On 21 August 2017 at 10:08, Amit Kapila wrote: > Thoughts? This seems like a very basic problem for parallel queries. The problem seems to be that we are calculating the cost of the plan rather than the speed of the plan. Clearly, a parallel task has a higher overall cost but a lower time to complete if resources are available. We have the choice of 1) adding a new optimizable quantity, or of 2) treating cost = speed, so we actually reduce the cost of a parallel plan rather than increasing it so it is more likely to be picked. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Aug 16, 2017 at 5:04 PM, Robert Haas wrote: > On Wed, Aug 16, 2017 at 7:23 AM, Amit Kapila wrote: >> On Tue, Aug 15, 2017 at 7:15 PM, Robert Haas wrote: >>> On Sat, Aug 12, 2017 at 9:18 AM, Amit Kapila >>> wrote: I think skipping a generation of gather paths for scan node or top level join node generated via standard_join_search seems straight forward, but skipping for paths generated via geqo seems to be tricky (See use of generate_gather_paths in merge_clump). Assuming, we find some way to skip it for top level scan/join node, I don't think that will be sufficient, we have some special way to push target list below Gather node in apply_projection_to_path, we need to move that part as well in generate_gather_paths. >>> >>> I don't think that can work, because at that point we don't know what >>> target list the upper node wants to impose. >>> >> >> I am suggesting to call generate_gather_paths just before we try to >> apply projection on paths in grouping_planner (file:planner.c; >> line:1787; commit:004a9702). Won't the target list for upper nodes be >> available at that point? > > Oh, yes. Apparently I misunderstood your proposal. > Thanks for acknowledging the idea. I have written a patch which implements the above idea. At this stage, it is merely to move the discussion forward. Few things which I am not entirely happy about this patch are: (a) To skip generating gather path for top level scan node, I have used the number of relations which has RelOptInfo, basically simple_rel_array_size. Is there any problem with it or do you see any better way? (b) I have changed the costing of gather path for path target in generate_gather_paths which I am not sure is the best way. Another possibility could have been that I change the code in apply_projection_to_path as done in the previous patch and just call it from generate_gather_paths.
I have not done that because of your comment above thread ("is probably unsafe, because it might confuse code that reaches the modified-in-place path through some other pointer (e.g. code which expects the RelOptInfo's paths to still be sorted by cost)."). It is not clear to me what exactly is bothering you if we directly change costing in apply_projection_to_path. Thoughts? -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com parallel_paths_include_tlist_cost_v1.patch Description: Binary data
Re: [HACKERS] why not parallel seq scan for slow functions
On Fri, 18 Aug 2017 at 4:48 PM, Amit Kapila wrote: > On Thu, Aug 17, 2017 at 2:45 PM, Dilip Kumar > wrote: > > On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar > wrote: > >> > >> Either we can pass "num_gene" to merge_clump or we can store num_gene > >> in the root. And inside merge_clump we can check. Do you see some more > >> complexity? > >> > > I think something like that should work. Ok > > > > After putting some more thought I see one more problem but not sure > > whether we can solve it easily. Now, if we skip generating the gather > > path at top level node then our cost comparison while adding the > > element to pool will not be correct as we are skipping some of the > > paths (gather path). And, it's very much possible that the path1 is > > cheaper than path2 without adding gather on top of it but with gather, > > path2 can be cheaper. > > > > I think that should not matter because the costing of gather is mainly > based on a number of rows and that should be same for both path1 and > path2 in this case. Yeah, I think you are right. > > > > -- > With Regards, > Amit Kapila. > EnterpriseDB: http://www.enterprisedb.com > -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Thu, Aug 17, 2017 at 2:45 PM, Dilip Kumar wrote: > On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar wrote: >> >> Either we can pass "num_gene" to merge_clump or we can store num_gene >> in the root. And inside merge_clump we can check. Do you see some more >> complexity? >> I think something like that should work. > After putting some more thought I see one more problem but not sure > whether we can solve it easily. Now, if we skip generating the gather > path at top level node then our cost comparison while adding the > element to pool will not be correct as we are skipping some of the > paths (gather path). And, it's very much possible that the path1 is > cheaper than path2 without adding gather on top of it but with gather, > path2 can be cheaper. > I think that should not matter because the costing of gather is mainly based on a number of rows and that should be same for both path1 and path2 in this case. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar wrote: > > Either we can pass "num_gene" to merge_clump or we can store num_gene > in the root. And inside merge_clump we can check. Do you see some more > complexity? > After putting some more thought I see one more problem but not sure whether we can solve it easily. Now, if we skip generating the gather path at top level node then our cost comparison while adding the element to pool will not be correct as we are skipping some of the paths (gather path). And, it's very much possible that the path1 is cheaper than path2 without adding gather on top of it but with gather, path2 can be cheaper. But, there is not an easy way to handle it because without targetlist we can not calculate the cost of the gather(which is the actual problem). -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Sat, Aug 12, 2017 at 6:48 PM, Amit Kapila wrote: > On Thu, Aug 10, 2017 at 1:07 AM, Robert Haas wrote: >> On Tue, Aug 8, 2017 at 3:50 AM, Amit Kapila wrote: >>> Right. >>> > > I think skipping a generation of gather paths for scan node or top > level join node generated via standard_join_search seems straight > forward, but skipping for paths generated via geqo seems to be tricky > (See use of generate_gather_paths in merge_clump). Either we can pass "num_gene" to merge_clump or we can store num_gene in the root. And inside merge_clump we can check. Do you see some more complexity? if (joinrel) { /* Create GatherPaths for any useful partial paths for rel */ if (old_clump->size + new_clump->size < num_gene) generate_gather_paths(root, joinrel); } Assuming, we find > some way to skip it for top level scan/join node, I don't think that > will be sufficient, we have some special way to push target list below > Gather node in apply_projection_to_path, we need to move that part as > well in generate_gather_paths. > -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
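[Editor's illustration] The guard Dilip's C fragment proposes for GEQO's merge_clump can be modeled in a few lines (hypothetical helper name; the real check lives in C inside merge_clump):

```python
# Generate gather paths only while the joined clump is still smaller than
# the total relation count (num_gene), i.e. not yet the top-level join;
# the top level waits until the target list has been applied.
def should_generate_gather(old_clump_size, new_clump_size, num_gene):
    return old_clump_size + new_clump_size < num_gene

assert should_generate_gather(2, 1, 4) is True    # intermediate join level
assert should_generate_gather(3, 1, 4) is False   # top-level join: skip here
```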
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Aug 16, 2017 at 7:23 AM, Amit Kapila wrote: > On Tue, Aug 15, 2017 at 7:15 PM, Robert Haas wrote: >> On Sat, Aug 12, 2017 at 9:18 AM, Amit Kapila wrote: >>> I think skipping a generation of gather paths for scan node or top >>> level join node generated via standard_join_search seems straight >>> forward, but skipping for paths generated via geqo seems to be tricky >>> (See use of generate_gather_paths in merge_clump). Assuming, we find >>> some way to skip it for top level scan/join node, I don't think that >>> will be sufficient, we have some special way to push target list below >>> Gather node in apply_projection_to_path, we need to move that part as >>> well in generate_gather_paths. >> >> I don't think that can work, because at that point we don't know what >> target list the upper node wants to impose. >> > > I am suggesting to call generate_gather_paths just before we try to > apply projection on paths in grouping_planner (file:planner.c; > line:1787; commit:004a9702). Won't the target list for upper nodes be > available at that point? Oh, yes. Apparently I misunderstood your proposal. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Aug 15, 2017 at 7:15 PM, Robert Haas wrote: > On Sat, Aug 12, 2017 at 9:18 AM, Amit Kapila wrote: >> I think skipping a generation of gather paths for scan node or top >> level join node generated via standard_join_search seems straight >> forward, but skipping for paths generated via geqo seems to be tricky >> (See use of generate_gather_paths in merge_clump). Assuming, we find >> some way to skip it for top level scan/join node, I don't think that >> will be sufficient, we have some special way to push target list below >> Gather node in apply_projection_to_path, we need to move that part as >> well in generate_gather_paths. > > I don't think that can work, because at that point we don't know what > target list the upper node wants to impose. > I am suggesting to call generate_gather_paths just before we try to apply projection on paths in grouping_planner (file:planner.c; line:1787; commit:004a9702). Won't the target list for upper nodes be available at that point? -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Sat, Aug 12, 2017 at 9:18 AM, Amit Kapila wrote: > I think skipping a generation of gather paths for scan node or top > level join node generated via standard_join_search seems straight > forward, but skipping for paths generated via geqo seems to be tricky > (See use of generate_gather_paths in merge_clump). Assuming, we find > some way to skip it for top level scan/join node, I don't think that > will be sufficient, we have some special way to push target list below > Gather node in apply_projection_to_path, we need to move that part as > well in generate_gather_paths. I don't think that can work, because at that point we don't know what target list the upper node wants to impose. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] why not parallel seq scan for slow functions
On Thu, Aug 10, 2017 at 1:07 AM, Robert Haas wrote: > On Tue, Aug 8, 2017 at 3:50 AM, Amit Kapila wrote: >> Right. >> >> I see two ways to include the cost of the target list for parallel >> paths before rejecting them (a) Don't reject parallel paths >> (Gather/GatherMerge) during add_path. This has the danger of path >> explosion. (b) In the case of parallel paths, somehow try to identify >> that path has a costly target list (maybe just check if the target >> list has anything other than vars) and use it as a heuristic to decide >> that whether a parallel path can be retained. > > I think the right approach to this problem is to get the cost of the > GatherPath correct when it's initially created. The proposed patch > changes the cost after-the-fact, but that (1) doesn't prevent a > promising path from being rejected before we reach this point and (2) > is probably unsafe, because it might confuse code that reaches the > modified-in-place path through some other pointer (e.g. code which > expects the RelOptInfo's paths to still be sorted by cost). Perhaps > the way to do that is to skip generate_gather_paths() for the toplevel > scan/join node and do something similar later, after we know what > target list we want. > I think skipping a generation of gather paths for scan node or top level join node generated via standard_join_search seems straight forward, but skipping for paths generated via geqo seems to be tricky (See use of generate_gather_paths in merge_clump). Assuming, we find some way to skip it for top level scan/join node, I don't think that will be sufficient, we have some special way to push target list below Gather node in apply_projection_to_path, we need to move that part as well in generate_gather_paths. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Aug 8, 2017 at 3:50 AM, Amit Kapila wrote: > Right. > > I see two ways to include the cost of the target list for parallel > paths before rejecting them (a) Don't reject parallel paths > (Gather/GatherMerge) during add_path. This has the danger of path > explosion. (b) In the case of parallel paths, somehow try to identify > that path has a costly target list (maybe just check if the target > list has anything other than vars) and use it as a heuristic to decide > that whether a parallel path can be retained. I think the right approach to this problem is to get the cost of the GatherPath correct when it's initially created. The proposed patch changes the cost after-the-fact, but that (1) doesn't prevent a promising path from being rejected before we reach this point and (2) is probably unsafe, because it might confuse code that reaches the modified-in-place path through some other pointer (e.g. code which expects the RelOptInfo's paths to still be sorted by cost). Perhaps the way to do that is to skip generate_gather_paths() for the toplevel scan/join node and do something similar later, after we know what target list we want. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
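[Editor's illustration] The hazard in point (2) above can be shown with a toy example (not PostgreSQL code): pathlists are kept sorted by cost, so frobbing costs after insertion silently breaks the ordering that later code relies on.

```python
# A pathlist sorted by cost, as code downstream of add_path expects.
pathlist = sorted([{"name": "SeqScan", "cost": 90.0},
                   {"name": "Gather", "cost": 120.0}],
                  key=lambda p: p["cost"])

# Retroactively charging the serial path the full tlist cost while the
# Gather path absorbs only a fraction of it (split across workers)...
pathlist[0]["cost"] += 100.0    # SeqScan: 90.0 -> 190.0
pathlist[1]["cost"] += 25.0     # Gather: 120.0 -> 145.0

# ...leaves the list out of order: any code that assumes
# "first entry = cheapest" now reaches a wrong conclusion.
costs = [p["cost"] for p in pathlist]
assert costs != sorted(costs)
assert pathlist[0]["name"] == "SeqScan"   # still first, no longer cheapest
```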
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Aug 2, 2017 at 11:12 PM, Jeff Janes wrote: > On Wed, Jul 12, 2017 at 7:08 PM, Amit Kapila > wrote: >> >> On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes wrote: >> > On Tue, Jul 11, 2017 at 10:25 PM, Amit Kapila >> > wrote: >> >> >> >> On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes >> >> wrote: >> >> > On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar >> >> > wrote: >> >> >> >> >> >> So because of this high projection cost the seqpath and parallel >> >> >> path >> >> >> both have fuzzily same cost but seqpath is winning because it's >> >> >> parallel safe. >> >> > >> >> > >> >> > I think you are correct. However, unless parallel_tuple_cost is set >> >> > very >> >> > low, apply_projection_to_path never gets called with the Gather path >> >> > as >> >> > an >> >> > argument. It gets ruled out at some earlier stage, presumably >> >> > because >> >> > it >> >> > assumes the projection step cannot make it win if it is already >> >> > behind >> >> > by >> >> > enough. >> >> > >> >> >> >> I think that is genuine because tuple communication cost is very high. >> > >> > >> > Sorry, I don't know which you think is genuine, the early pruning or my >> > complaint about the early pruning. >> > >> >> Early pruning. See, currently, we don't have a way to maintain both >> parallel and non-parallel paths till later stage and then decide which >> one is better. If we want to maintain both parallel and non-parallel >> paths, it can increase planning cost substantially in the case of >> joins. Now, surely it can have benefit in many cases, so it is a >> worthwhile direction to pursue. > > > If I understand it correctly, we have a way, it just can lead to exponential > explosion problem, so we are afraid to use it, correct?
If I just > lobotomize the path domination code (make pathnode.c line 466 always test > false) > > if (JJ_all_paths==0 && costcmp != COSTS_DIFFERENT) > > Then it keeps the parallel plan and later chooses to use it (after applying > your other patch in this thread) as the overall best plan. It even doesn't > slow down "make installcheck-parallel" by very much, which I guess just > means the regression tests don't have a lot of complex joins. > > But what is an acceptable solution? Is there a heuristic for when retaining > a parallel path could be helpful, the same way there is for fast-start > paths? It seems like the best thing would be to include the evaluation > costs in the first place at this step. > > Why is the path-cost domination code run before the cost of the function > evaluation is included? Because the function evaluation is part of target list and we create path target after the creation of base paths (See call to create_pathtarget @ planner.c:1696). > Is that because the information needed to compute > it is not available at that point, Right. I see two ways to include the cost of the target list for parallel paths before rejecting them (a) Don't reject parallel paths (Gather/GatherMerge) during add_path. This has the danger of path explosion. (b) In the case of parallel paths, somehow try to identify that path has a costly target list (maybe just check if the target list has anything other than vars) and use it as a heuristic to decide that whether a parallel path can be retained. I think the preference will be to do something on the lines of approach (b), but I am not sure whether we can easily do that. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
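Amit's option (b) above boils down to a cheap structural check on the target list: anything beyond bare column references signals projection cost that a parallel path might amortize. A rough sketch of such a heuristic, using an invented tuple-based expression encoding (PostgreSQL would inspect actual Var vs. FuncExpr node types instead):

```python
# Simplified sketch of heuristic (b): retain a parallel path only when its
# target list contains something other than plain column references.
# The ("kind", "payload") encoding is invented for illustration.

def tlist_is_trivial(tlist):
    """True if every entry is a plain Var (no function calls or expressions)."""
    return all(kind == "var" for kind, _ in tlist)

def should_retain_parallel_path(tlist):
    # A costly target list is exactly the case where a Gather path that
    # looks worse now might win once projection cost is divided up among
    # workers, so don't let add_path discard it yet.
    return not tlist_is_trivial(tlist)

assert not should_retain_parallel_path([("var", "aid"), ("var", "abalance")])
assert should_retain_parallel_path([("funcexpr", "slow(abalance)")])
```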
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Jul 12, 2017 at 7:08 PM, Amit Kapilawrote: > On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes wrote: > > On Tue, Jul 11, 2017 at 10:25 PM, Amit Kapila > > wrote: > >> > >> On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes > wrote: > >> > On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar > >> > wrote: > >> >> > >> >> So because of this high projection cost the seqpath and parallel path > >> >> both have fuzzily same cost but seqpath is winning because it's > >> >> parallel safe. > >> > > >> > > >> > I think you are correct. However, unless parallel_tuple_cost is set > >> > very > >> > low, apply_projection_to_path never gets called with the Gather path > as > >> > an > >> > argument. It gets ruled out at some earlier stage, presumably because > >> > it > >> > assumes the projection step cannot make it win if it is already behind > >> > by > >> > enough. > >> > > >> > >> I think that is genuine because tuple communication cost is very high. > > > > > > Sorry, I don't know which you think is genuine, the early pruning or my > > complaint about the early pruning. > > > > Early pruning. See, currently, we don't have a way to maintain both > parallel and non-parallel paths till later stage and then decide which > one is better. If we want to maintain both parallel and non-parallel > paths, it can increase planning cost substantially in the case of > joins. Now, surely it can have benefit in many cases, so it is a > worthwhile direction to pursue. > If I understand it correctly, we have a way, it just can lead to exponential explosion problem, so we are afraid to use it, correct? If I just lobotomize the path domination code (make pathnode.c line 466 always test false) if (JJ_all_paths==0 && costcmp != COSTS_DIFFERENT) Then it keeps the parallel plan and later chooses to use it (after applying your other patch in this thread) as the overall best plan. 
It even doesn't slow down "make installcheck-parallel" by very much, which I guess just means the regression tests don't have a lot of complex joins. But what is an acceptable solution? Is there a heuristic for when retaining a parallel path could be helpful, the same way there is for fast-start paths? It seems like the best thing would be to include the evaluation costs in the first place at this step. Why is the path-cost domination code run before the cost of the function evaluation is included? Is that because the information needed to compute it is not available at that point, or because it would be too slow to include it at that point? Or just because no one thought it important to do? Cheers, Jeff
Re: [HACKERS] why not parallel seq scan for slow functions
On Mon, Jul 24, 2017 at 9:21 PM, Jeff Janes wrote: > On Sat, Jul 22, 2017 at 8:53 PM, Amit Kapila > wrote: >> >> On Thu, Jul 13, 2017 at 7:38 AM, Amit Kapila >> wrote: >> > On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes >> > wrote: >> >> >> >> Setting parallel_workers to 8 changes the threshold for the parallel plan to even >> >> be considered from parallel_tuple_cost <= 0.0049 to <= 0.0076. So it is >> >> going in the correct direction, but not by enough to matter. >> >> >> > >> > You might want to play with cpu_tuple_cost and/or seq_page_cost. >> > >> >> I don't know whether the patch will completely solve your problem, but >> this seems to be the right thing to do. Do you think we should put >> this in the next CF? > > > It doesn't solve the problem for me, but I agree it is an improvement we > should commit. > Okay, added the patch to the next CF. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Sat, Jul 22, 2017 at 8:53 PM, Amit Kapila wrote: > On Thu, Jul 13, 2017 at 7:38 AM, Amit Kapila > wrote: > > On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes > wrote: > >> > >> Setting parallel_workers to 8 changes the threshold for the parallel plan to even > >> be considered from parallel_tuple_cost <= 0.0049 to <= 0.0076. So it is > >> going in the correct direction, but not by enough to matter. > >> > > > > You might want to play with cpu_tuple_cost and/or seq_page_cost. > > > > I don't know whether the patch will completely solve your problem, but > this seems to be the right thing to do. Do you think we should put > this in the next CF? > It doesn't solve the problem for me, but I agree it is an improvement we should commit. Cheers, Jeff
Re: [HACKERS] why not parallel seq scan for slow functions
On Thu, Jul 13, 2017 at 7:38 AM, Amit Kapila wrote: > On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janes wrote: >> >> Setting parallel_workers to 8 changes the threshold for the parallel plan to even >> be considered from parallel_tuple_cost <= 0.0049 to <= 0.0076. So it is >> going in the correct direction, but not by enough to matter. >> > > You might want to play with cpu_tuple_cost and/or seq_page_cost. > I don't know whether the patch will completely solve your problem, but this seems to be the right thing to do. Do you think we should put this in the next CF? -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Jul 12, 2017 at 11:20 PM, Jeff Janeswrote: > On Tue, Jul 11, 2017 at 10:25 PM, Amit Kapila > wrote: >> >> On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes wrote: >> > On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar >> > wrote: >> >> >> >> So because of this high projection cost the seqpath and parallel path >> >> both have fuzzily same cost but seqpath is winning because it's >> >> parallel safe. >> > >> > >> > I think you are correct. However, unless parallel_tuple_cost is set >> > very >> > low, apply_projection_to_path never gets called with the Gather path as >> > an >> > argument. It gets ruled out at some earlier stage, presumably because >> > it >> > assumes the projection step cannot make it win if it is already behind >> > by >> > enough. >> > >> >> I think that is genuine because tuple communication cost is very high. > > > Sorry, I don't know which you think is genuine, the early pruning or my > complaint about the early pruning. > Early pruning. See, currently, we don't have a way to maintain both parallel and non-parallel paths till later stage and then decide which one is better. If we want to maintain both parallel and non-parallel paths, it can increase planning cost substantially in the case of joins. Now, surely it can have benefit in many cases, so it is a worthwhile direction to pursue. > I agree that the communication cost is high, which is why I don't want to > have to set parellel_tuple_cost very low. For example, to get the benefit > of your patch, I have to set parellel_tuple_cost to 0.0049 or less (in my > real-world case, not the dummy test case I posted, although the number are > around the same for that one too). But with a setting that low, all kinds > of other things also start using parallel plans, even if they don't benefit > from them and are harmed. > > I realize we need to do some aggressive pruning to avoid an exponential > explosion in planning time, but in this case it has some rather unfortunate > consequences. 
I wanted to explore it, but I can't figure out where this > particular pruning is taking place. > > By the time we get to planner.c line 1787, current_rel->pathlist already > does not contain the parallel plan if parellel_tuple_cost >= 0.0050, so the > pruning is happening earlier than that. > Check generate_gather_paths. > >> >> If your table is reasonable large then you might want to try by >> increasing parallel workers (Alter Table ... Set (parallel_workers = >> ..)) > > > > Setting parallel_workers to 8 changes the threshold for the parallel to even > be considered from parellel_tuple_cost <= 0.0049 to <= 0.0076. So it is > going in the correct direction, but not by enough to matter. > You might want to play with cpu_tuple_cost and or seq_page_cost. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Jul 11, 2017 at 10:25 PM, Amit Kapila wrote: > On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes wrote: > > On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar > wrote: > >> > >> So because of this high projection cost the seqpath and parallel path > >> both have fuzzily the same cost but the seqpath is winning because it's > >> parallel safe. > > > > > > I think you are correct. However, unless parallel_tuple_cost is set very > > low, apply_projection_to_path never gets called with the Gather path as > > an argument. It gets ruled out at some earlier stage, presumably because it > > assumes the projection step cannot make it win if it is already behind by > > enough. > > > > I think that is genuine because tuple communication cost is very high. > Sorry, I don't know which you think is genuine, the early pruning or my complaint about the early pruning. I agree that the communication cost is high, which is why I don't want to have to set parallel_tuple_cost very low. For example, to get the benefit of your patch, I have to set parallel_tuple_cost to 0.0049 or less (in my real-world case, not the dummy test case I posted, although the numbers are around the same for that one too). But with a setting that low, all kinds of other things also start using parallel plans, even if they don't benefit from them and are harmed. I realize we need to do some aggressive pruning to avoid an exponential explosion in planning time, but in this case it has some rather unfortunate consequences. I wanted to explore it, but I can't figure out where this particular pruning is taking place. By the time we get to planner.c line 1787, current_rel->pathlist already does not contain the parallel plan if parallel_tuple_cost >= 0.0050, so the pruning is happening earlier than that. > If your table is reasonably large then you might want to try > increasing parallel workers (Alter Table ... 
Set (parallel_workers = ..)) > Setting parallel_workers to 8 changes the threshold for the parallel plan to even be considered from parallel_tuple_cost <= 0.0049 to <= 0.0076. So it is going in the correct direction, but not by enough to matter. Cheers, Jeff
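Jeff's observation that more workers barely move the break-even parallel_tuple_cost follows from the shape of the cost comparison: the per-tuple transfer charge is paid on every row regardless of worker count, while only the scan/projection work gets divided. A very rough back-of-the-envelope model (not the real costing code; the row count and per-tuple work figures are made up, and startup costs are ignored):

```python
# Crude cost model: parallel pays full per-tuple transfer on every row,
# but splits the per-tuple scan/projection work across workers.

def total_cost_seq(rows, per_tuple_work):
    return rows * per_tuple_work

def total_cost_parallel(rows, per_tuple_work, workers, parallel_tuple_cost):
    # All rows still cross the worker -> gather boundary.
    return rows * parallel_tuple_cost + (rows / workers) * per_tuple_work

rows, work = 1_000_000, 0.02
for workers in (2, 8):
    # Break-even: rows*ptc + (rows/workers)*work == rows*work,
    # so ptc = work * (1 - 1/workers) -- it can never exceed `work`
    # no matter how many workers are added.
    break_even = work * (1 - 1 / workers)
    assert abs(total_cost_parallel(rows, work, workers, break_even)
               - total_cost_seq(rows, work)) < 1e-6
```

With this model the tolerable parallel_tuple_cost only rises from `work/2` at 2 workers toward `work` asymptotically, matching the small 0.0049 → 0.0076 shift Jeff measured when going to 8 workers.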
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Jul 12, 2017 at 10:55 AM, Amit Kapila wrote: > On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes wrote: >> On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar wrote: >>> >>> So because of this high projection cost the seqpath and parallel path >>> both have fuzzily the same cost but the seqpath is winning because it's >>> parallel safe. >> >> >> I think you are correct. However, unless parallel_tuple_cost is set very >> low, apply_projection_to_path never gets called with the Gather path as an >> argument. It gets ruled out at some earlier stage, presumably because it >> assumes the projection step cannot make it win if it is already behind by >> enough. >> > > I think that is genuine because tuple communication cost is very high. > If your table is reasonably large then you might want to try > increasing parallel workers (Alter Table ... Set (parallel_workers = ..)) > >> So the attached patch improves things, but doesn't go far enough. >> > > It seems to me that we need to adjust the cost based on whether the below node > is projection capable. See attached. Patch looks good to me. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] why not parallel seq scan for slow functions
On Wed, Jul 12, 2017 at 1:50 AM, Jeff Janes wrote: > On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar wrote: >> >> So because of this high projection cost the seqpath and parallel path >> both have fuzzily the same cost but the seqpath is winning because it's >> parallel safe. > > > I think you are correct. However, unless parallel_tuple_cost is set very > low, apply_projection_to_path never gets called with the Gather path as an > argument. It gets ruled out at some earlier stage, presumably because it > assumes the projection step cannot make it win if it is already behind by > enough. > I think that is genuine because tuple communication cost is very high. If your table is reasonably large then you might want to try increasing parallel workers (Alter Table ... Set (parallel_workers = ..)) > So the attached patch improves things, but doesn't go far enough. > It seems to me that we need to adjust the cost based on whether the below node is projection capable. See attached. -- With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com subpath_projection_cost.2.patch Description: Binary data
Re: [HACKERS] why not parallel seq scan for slow functions
On Mon, Jul 10, 2017 at 9:51 PM, Dilip Kumar wrote: > > > In the function below, we always multiply target->cost.per_tuple by > path->rows, but in the case of Gather it should be multiplied by > subpath->rows > > apply_projection_to_path() > > > path->startup_cost += target->cost.startup - oldcost.startup; > path->total_cost += target->cost.startup - oldcost.startup + > (target->cost.per_tuple - oldcost.per_tuple) * path->rows; > > > So because of this high projection cost the seqpath and parallel path > both have fuzzily the same cost but the seqpath is winning because it's > parallel safe. > I think you are correct. However, unless parallel_tuple_cost is set very low, apply_projection_to_path never gets called with the Gather path as an argument. It gets ruled out at some earlier stage, presumably because it assumes the projection step cannot make it win if it is already behind by enough. So the attached patch improves things, but doesn't go far enough. Cheers, Jeff subpath_projection_cost.patch Description: Binary data
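Dilip's path->rows vs. subpath->rows point can be made concrete with assumed numbers (the per-tuple cost, row count, and worker count below are invented for illustration, not taken from the thread): when the projection runs inside the workers, each worker only projects its own share of the rows, and they do so concurrently, so the elapsed projection cost tracks the partial path's row count rather than the Gather's total.

```python
# Toy recomputation of the projection charge in apply_projection_to_path,
# under assumed numbers. "As coded" charges per_tuple on every row the
# Gather emits, as if projection happened serially on top of it.

def projection_cost_as_coded(per_tuple, path_rows):
    return per_tuple * path_rows

def projection_cost_for_gather(per_tuple, subpath_rows):
    # Workers project their shares concurrently, so elapsed cost tracks
    # the per-worker (partial path) row estimate.
    return per_tuple * subpath_rows

per_tuple = 10.0          # an expensive function, e.g. declared COST 1000
path_rows = 1_000_000     # rows emitted by the Gather in total
workers = 4
subpath_rows = path_rows / workers

assert projection_cost_as_coded(per_tuple, path_rows) == 10_000_000.0
assert projection_cost_for_gather(per_tuple, subpath_rows) == 2_500_000.0
```

Under these assumptions the as-coded formula overstates the parallel plan's projection cost by the worker count, which is exactly the kind of error that lets a fuzzily-equal seq scan win on the parallel-safety tiebreak.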
Re: [HACKERS] why not parallel seq scan for slow functions
On Tue, Jul 11, 2017 at 9:02 AM, Jeff Janes wrote: > If I have a slow function which is evaluated in a simple seq scan, I do not > get parallel execution, even though it would be massively useful. Unless > force_parallel_mode=ON, then I get a dummy parallel plan with one worker. > > explain select aid,slow(abalance) from pgbench_accounts; After analyzing this, I see multiple reasons why the parallel plan is not getting selected. 1. The query is selecting all the tuples, and the benefit we get from parallelism comes from dividing cpu_tuple_cost, which is 0.01, but for each tuple sent from worker to gather there is a parallel_tuple_cost, which is 0.1 per tuple (this will matter much less in the aggregate case). Maybe you can try selecting with some condition, like below: postgres=# explain select slow(abalance) from pgbench_accounts where abalance > 1; QUERY PLAN --- Gather (cost=0.00..46602.33 rows=1 width=4) Workers Planned: 2 -> Parallel Seq Scan on pgbench_accounts (cost=0.00..46602.33 rows=1 width=4) Filter: (abalance > 1) 2. The second problem I am seeing (maybe a code problem) is that the "slow" function is very costly (1000), and in apply_projection_to_path we account for this cost. But I have noticed that for the Gather node we are also adding this cost for all the rows, when actually, if the lower node is already doing the projection, the Gather node just needs to send out the tuples instead of applying the projection itself. In the function below, we always multiply target->cost.per_tuple by path->rows, but in the case of Gather it should be multiplied by subpath->rows apply_projection_to_path() path->startup_cost += target->cost.startup - oldcost.startup; path->total_cost += target->cost.startup - oldcost.startup + (target->cost.per_tuple - oldcost.per_tuple) * path->rows; So because of this high projection cost the seqpath and parallel path both have fuzzily the same cost, but the seqpath is winning because it's parallel safe. 
> > CREATE OR REPLACE FUNCTION slow(integer) > RETURNS integer > LANGUAGE plperl > IMMUTABLE PARALLEL SAFE STRICT COST 1000 > AS $function$ > my $thing=$_[0]; > foreach (1..1_000_000) { > $thing = sqrt($thing); > $thing *= $thing; > }; > return ($thing+0); > $function$; > > The partial path is getting added to the list of paths, it is just not > getting chosen, even if parallel_*_cost are set to zero. Why not? > > If I do an aggregate, then it does use parallel workers: > > explain select sum(slow(abalance)) from pgbench_accounts; > > It doesn't use as many as I would like, because there is a limit based on > the logarithm of the table size (I'm using -s 10 and get 3 parallel > processes), but at least I know how to start looking into that. > > Also, how do you debug stuff like this? Are there some gdb tricks to make > it easier to introspect into the plans? > > Cheers, > > Jeff -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
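Dilip's point 1 is simple arithmetic with the default cost constants (cpu_tuple_cost = 0.01, parallel_tuple_cost = 0.1): when every row is returned, each row can save at most roughly cpu_tuple_cost by being processed in a worker, but costs parallel_tuple_cost to ship to the gather node. A quick sketch (the row counts are assumed):

```python
# Rough arithmetic behind point 1, using the default planner constants.
cpu_tuple_cost = 0.01       # default per-tuple CPU charge
parallel_tuple_cost = 0.1   # default per-tuple worker->gather transfer charge

rows_returned = 1_000_000
transfer_penalty = rows_returned * parallel_tuple_cost   # ~100000
best_case_saving = rows_returned * cpu_tuple_cost        # ~10000

# Shipping every tuple costs ~10x more than the per-tuple CPU it could
# save, so a cheap-projection query returning all rows won't go parallel.
assert abs(transfer_penalty / best_case_saving - 10.0) < 1e-9

# With a selective filter (or an aggregate), almost nothing crosses the
# worker->gather boundary, so the penalty nearly vanishes.
rows_returned = 1
assert rows_returned * parallel_tuple_cost < 1.0
```

This is why Dilip's filtered example (`where abalance > 1`, estimated at one row) plans a Gather while the unfiltered projection query does not: the projection saving has to outweigh a 10:1 transfer penalty on every returned row.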
[HACKERS] why not parallel seq scan for slow functions
If I have a slow function which is evaluated in a simple seq scan, I do not get parallel execution, even though it would be massively useful. Unless force_parallel_mode=ON, in which case I get a dummy parallel plan with one worker. explain select aid,slow(abalance) from pgbench_accounts; CREATE OR REPLACE FUNCTION slow(integer) RETURNS integer LANGUAGE plperl IMMUTABLE PARALLEL SAFE STRICT COST 1000 AS $function$ my $thing=$_[0]; foreach (1..1_000_000) { $thing = sqrt($thing); $thing *= $thing; }; return ($thing+0); $function$; The partial path is getting added to the list of paths, it is just not getting chosen, even if parallel_*_cost are set to zero. Why not? If I do an aggregate, then it does use parallel workers: explain select sum(slow(abalance)) from pgbench_accounts; It doesn't use as many as I would like, because there is a limit based on the logarithm of the table size (I'm using -s 10 and get 3 parallel processes), but at least I know how to start looking into that. Also, how do you debug stuff like this? Are there some gdb tricks to make it easier to introspect into the plans? Cheers, Jeff
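The "logarithm of the table size" limit Jeff mentions is the planner's size-based worker heuristic (compute_parallel_workers in PostgreSQL 10 era; treat the sketch below as a paraphrase, not the actual source): one worker once the table reaches min_parallel_table_scan_size, plus one more each time the table triples in size.

```python
# Sketch of the size-based parallel worker heuristic: workers grow with
# log base 3 of (table size / min_parallel_table_scan_size).

def workers_for_table(rel_pages, min_scan_pages=1024):  # 1024 pages = 8MB
    if rel_pages < min_scan_pages:
        return 0                      # too small to bother parallelizing
    workers, threshold = 1, min_scan_pages
    while rel_pages >= threshold * 3:
        workers += 1
        threshold *= 3
    return workers

assert workers_for_table(512) == 0        # below the 8MB floor
assert workers_for_table(2048) == 1       # 16MB: one worker
assert workers_for_table(16384) == 3      # ~128MB, roughly pgbench -s 10
```

The ~128MB pgbench_accounts at scale 10 landing on 3 workers matches Jeff's observation; the result is further capped by max_parallel_workers_per_gather, and a table's parallel_workers storage parameter bypasses the computation entirely.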