Thanks a lot Robert for the patch. I will have a look. Quickly tried
to test some aggregate queries with a partitioned pgbench_accounts
table, and it is crashing. Will get back with the fix, and any other
review comments.
Thanks
-Amit Khandekar
On 9 November 2017 at 23:44, Robert Haas wrote:
> O
On Sat, Oct 28, 2017 at 5:50 AM, Robert Haas wrote:
> No, because the Append node is *NOT* getting copied into shared
> memory. I have pushed a comment update to the existing functions; you
> can use the same comment for this patch.
I spent the last several days working on this patch, which had
On Thu, Oct 19, 2017 at 9:05 AM, Amit Khandekar wrote:
>> + *		ExecAppendEstimate
>> + *
>> + *		estimates the space required to serialize Append node.
>>
>> Ugh, this is wrong, but I notice it follows various other
>> equally-wrong comments for other parallel-aware node types. I guess
On 13 October 2017 at 00:29, Robert Haas wrote:
> On Wed, Oct 11, 2017 at 8:51 AM, Amit Khandekar
> wrote:
>> [ new patch ]
>
> + parallel_append
> + Waiting to choose the next subplan during Parallel Append plan
> + execution.
> +
> +
>
> Probably need
On Wed, Oct 11, 2017 at 8:51 AM, Amit Khandekar wrote:
> [ new patch ]
+ parallel_append
+ Waiting to choose the next subplan during Parallel Append plan
+ execution.
+
+
Probably needs to update the morerows value of some earlier entry.
+ enable_par
On 9 October 2017 at 16:03, Amit Kapila wrote:
> On Fri, Oct 6, 2017 at 12:03 PM, Amit Khandekar
> wrote:
>> On 6 October 2017 at 08:49, Amit Kapila wrote:
>>>
>>> Okay, but why not cheapest partial path?
>>
>> I gave some thought on this point. Overall I feel it does not matter
>> which partia
On Fri, Oct 6, 2017 at 12:03 PM, Amit Khandekar wrote:
> On 6 October 2017 at 08:49, Amit Kapila wrote:
>>
>> Okay, but why not cheapest partial path?
>
> I gave some thought on this point. Overall I feel it does not matter
> which partial path it should pick up. Right now the partial paths are
>
On 6 October 2017 at 08:49, Amit Kapila wrote:
> On Thu, Oct 5, 2017 at 4:11 PM, Amit Khandekar wrote:
>>
>> Ok. How about removing pa_all_partial_subpaths altogether, and
>> instead of the below condition :
>>
>> /*
>> * If all the child rels have partial paths, and if the above Parallel
>> * A
On Thu, Oct 5, 2017 at 4:11 PM, Amit Khandekar wrote:
>
> Ok. How about removing pa_all_partial_subpaths altogether, and
> instead of the below condition :
>
> /*
> * If all the child rels have partial paths, and if the above Parallel
> * Append path has a mix of partial and non-partial subpaths,
On Thu, Oct 5, 2017 at 6:14 PM, Robert Haas wrote:
> On Thu, Oct 5, 2017 at 6:29 AM, Amit Kapila wrote:
>> Okay, but can't we try to pick the cheapest partial path for master
>> backend and maybe master backend can try to work on a partial path
>> which is already picked up by some worker.
>
> We
On Thu, Oct 5, 2017 at 6:29 AM, Amit Kapila wrote:
> Okay, but can't we try to pick the cheapest partial path for master
> backend and maybe master backend can try to work on a partial path
> which is already picked up by some worker.
Well, the master backend is typically going to be the first pr
On 30 September 2017 at 19:21, Amit Kapila wrote:
> On Wed, Sep 20, 2017 at 10:59 AM, Amit Khandekar
> wrote:
>> On 16 September 2017 at 10:42, Amit Kapila wrote:
>>>
>>> At a broader level, the idea is good, but I think it won't turn out
>>> exactly like that considering your below paragraph w
On Mon, Oct 2, 2017 at 6:21 PM, Robert Haas wrote:
> On Sun, Oct 1, 2017 at 9:55 AM, Amit Kapila wrote:
>> Isn't it for both? I mean it is about comparing the non-partial paths
>> for child relations of the same relation and also when there are
>> different relations involved as in Union All kin
On Sun, Oct 1, 2017 at 9:55 AM, Amit Kapila wrote:
> Isn't it for both? I mean it is about comparing the non-partial paths
> for child relations of the same relation and also when there are
> different relations involved as in Union All kind of query. In any
> case, the point I was trying to say
On Sat, Sep 30, 2017 at 9:25 PM, Robert Haas wrote:
> On Sat, Sep 30, 2017 at 12:20 AM, Amit Kapila wrote:
>> Okay, but the point is whether it will make any difference
>> practically. Let us try to see with an example, consider there are
>> two children (just taking two for simplicity, we can e
On Sat, Sep 30, 2017 at 12:20 AM, Amit Kapila wrote:
> Okay, but the point is whether it will make any difference
> practically. Let us try to see with an example, consider there are
> two children (just taking two for simplicity, we can extend it to
> many) and first having 1000 pages to scan an
On Wed, Sep 20, 2017 at 10:59 AM, Amit Khandekar wrote:
> On 16 September 2017 at 10:42, Amit Kapila wrote:
>>
>> At a broader level, the idea is good, but I think it won't turn out
>> exactly like that considering your below paragraph which indicates
>> that it is okay if leader picks a partial
On Sat, Sep 30, 2017 at 4:02 AM, Robert Haas wrote:
> On Fri, Sep 29, 2017 at 7:48 AM, Amit Kapila wrote:
>> I think in general the non-partial paths should be cheaper as compared
>> to partial paths, as that is the reason the planner chose not to make a
>> partial plan in the first place. I think the id
On Fri, Sep 29, 2017 at 7:48 AM, Amit Kapila wrote:
> I think in general the non-partial paths should be cheaper as compared
> to partial paths, as that is the reason the planner chose not to make a
> partial plan in the first place. I think the idea the patch is using will help
> because the leader will choo
On Sat, Sep 16, 2017 at 2:15 AM, Amit Kapila wrote:
> Yes, we can do that, and that is what I think is probably better. So,
> the question remains: in which case will a non-parallel-aware partial
> append be required? Basically, it is not clear to me why after
> having parallel-aware partial ap
On Wed, Sep 20, 2017 at 10:59 AM, Amit Khandekar wrote:
> On 16 September 2017 at 10:42, Amit Kapila wrote:
>> On Thu, Sep 14, 2017 at 9:41 PM, Robert Haas wrote:
>>> On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila
>>> wrote:
I think the patch stores only non-partial paths in decreasing orde
On 20 September 2017 at 11:32, Amit Khandekar wrote:
> There is still the open point being
> discussed : whether to have non-parallel-aware partial Append path, or
> always have parallel-aware Append paths.
Attached is the revised patch v16. In previous versions, we used to
add a non-parallel-awa
On 11 September 2017 at 18:55, Amit Kapila wrote:
>>> 1.
>>> + else if (IsA(subpath, MergeAppendPath))
>>> + {
>>> + MergeAppendPath *mpath = (MergeAppendPath *) subpath;
>>> +
>>> + /*
>>> + * If at all MergeAppend is partial, all its child plans have to be
>>> + * partial : we don't currently su
On 16 September 2017 at 10:42, Amit Kapila wrote:
> On Thu, Sep 14, 2017 at 9:41 PM, Robert Haas wrote:
>> On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila wrote:
>>> I think the patch stores only non-partial paths in decreasing order,
>>> what if partial paths having more costs follows those paths?
On 16 September 2017 at 11:45, Amit Kapila wrote:
> On Thu, Sep 14, 2017 at 8:30 PM, Amit Khandekar
> wrote:
>> On 11 September 2017 at 18:55, Amit Kapila wrote:
>>>
>>> How? See, if you have four partial subpaths and two non-partial
>>> subpaths, then for parallel-aware append it conside
On Thu, Sep 14, 2017 at 8:30 PM, Amit Khandekar wrote:
> On 11 September 2017 at 18:55, Amit Kapila wrote:
>>>
>>
>> How? See, if you have four partial subpaths and two non-partial
>> subpaths, then for parallel-aware append it considers all six paths in
>> parallel path whereas for non-parallel
On Thu, Sep 14, 2017 at 9:41 PM, Robert Haas wrote:
> On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila wrote:
>> I think the patch stores only non-partial paths in decreasing order,
>> what if partial paths having more costs follows those paths?
>
> The general picture here is that we don't want the
On Mon, Sep 11, 2017 at 9:25 AM, Amit Kapila wrote:
> I think the patch stores only non-partial paths in decreasing order,
> what if partial paths having more costs follows those paths?
The general picture here is that we don't want the leader to get stuck
inside some long-running operation becau
On 11 September 2017 at 18:55, Amit Kapila wrote:
>>> Do you think non-parallel-aware Append
>>> will be better in any case when there is a parallel-aware append? I
>>> mean to say let's try to create non-parallel-aware append only when
>>> parallel-aware append is not possible.
>>
>> By non-para
On Mon, Sep 11, 2017 at 4:49 PM, Amit Khandekar wrote:
> On 8 September 2017 at 19:17, Amit Kapila wrote:
>>
>> In that case, why can't we have the workers also process in the same
>> order? What is the harm in that?
>
> Because of the way the logic of queuing works, the workers finish
> earlier if t
On 8 September 2017 at 19:17, Amit Kapila wrote:
> On Fri, Sep 8, 2017 at 3:59 PM, Amit Khandekar wrote:
>> On 7 September 2017 at 11:05, Amit Kapila wrote:
>>> On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar
>>> wrote:
>>> 3.
>>> +/* --
On Fri, Sep 8, 2017 at 3:59 PM, Amit Khandekar wrote:
> On 7 September 2017 at 11:05, Amit Kapila wrote:
>> On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar
>> wrote:
>> 3.
>> +/*
>> + * exec_append_leader_next
>> + *
>> + * To be
On Wed, Aug 30, 2017 at 5:32 PM, Amit Khandekar wrote:
> Hi Rafia,
>
> On 17 August 2017 at 14:12, Amit Khandekar wrote:
>> But for all of the cases here, partial
>> subplans seem possible, and so even on HEAD it executed Partial
>> Append. So between a Parallel Append having partial subplans and
On 7 September 2017 at 11:05, Amit Kapila wrote:
> On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar
> wrote:
>> The last updated patch needs a rebase. Attached is the rebased version.
>>
>
> Few comments on the first read of the patch:
Thanks!
>
> 1.
> @@ -279,6 +347,7 @@ void
> ExecReScanAp
On 7 September 2017 at 13:40, Rafia Sabih wrote:
> On Wed, Aug 30, 2017 at 5:32 PM, Amit Khandekar
> wrote:
>> Hi Rafia,
>>
>> On 17 August 2017 at 14:12, Amit Khandekar wrote:
>>> But for all of the cases here, partial
>>> subplans seem possible, and so even on HEAD it executed Partial
>>> App
On Wed, Aug 30, 2017 at 5:32 PM, Amit Khandekar wrote:
> Hi Rafia,
>
> On 17 August 2017 at 14:12, Amit Khandekar wrote:
>> But for all of the cases here, partial
>> subplans seem possible, and so even on HEAD it executed Partial
>> Append. So between a Parallel Append having partial subplans and
On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar wrote:
> The last updated patch needs a rebase. Attached is the rebased version.
>
Few comments on the first read of the patch:
1.
@@ -279,6 +347,7 @@ void
ExecReScanAppend(AppendState *node)
{
int i;
+ ParallelAppendDesc padesc = node->as_pad
On 30 August 2017 at 17:32, Amit Khandekar wrote:
> On 16 August 2017 at 18:34, Robert Haas wrote:
>> Thanks for the benchmarking results!
>>
>> On Tue, Aug 15, 2017 at 11:35 PM, Rafia Sabih
>> wrote:
>>> Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41
>>
>> 12 seconds instead of 244? Whoa.
The last updated patch needs a rebase. Attached is the rebased version.
Thanks
-Amit Khandekar
ParallelAppend_v13_rebased_3.patch
Hi Rafia,
On 17 August 2017 at 14:12, Amit Khandekar wrote:
> But for all of the cases here, partial
> subplans seem possible, and so even on HEAD it executed Partial
> Append. So between a Parallel Append having partial subplans and a
> Partial Append having partial subplans , the cost differenc
On 16 August 2017 at 18:34, Robert Haas wrote:
> Thanks for the benchmarking results!
>
> On Tue, Aug 15, 2017 at 11:35 PM, Rafia Sabih
> wrote:
>> Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41
>
> 12 seconds instead of 244? Whoa. I find it curious that we picked a
> Parallel Append with a
Thanks for the benchmarking results!
On Tue, Aug 15, 2017 at 11:35 PM, Rafia Sabih
wrote:
> Q4 | 244 | 12 | PA and PWJ, time by only PWJ - 41
12 seconds instead of 244? Whoa. I find it curious that we picked a
Parallel Append with a bunch of non-partial plans when we could've
just as easily pi
On 9 August 2017 at 19:05, Robert Haas wrote:
> On Wed, Jul 5, 2017 at 7:53 AM, Amit Khandekar wrote:
>>> This is not applicable on the latest head i.e. commit --
>>> 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7, looks like it needs rebasing.
>>
>> Thanks for notifying. Attached is the rebased version
On Wed, Jul 5, 2017 at 7:53 AM, Amit Khandekar wrote:
>> This is not applicable on the latest head i.e. commit --
>> 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7, looks like it needs rebasing.
>
> Thanks for notifying. Attached is the rebased version of the patch.
This again needs a rebase.
(And, hey
On 30 June 2017 at 15:10, Rafia Sabih wrote:
>
> On Tue, Apr 4, 2017 at 12:37 PM, Amit Khandekar
> wrote:
>>
>> Attached is an updated patch v13 that has some comments changed as per
>> your review, and also rebased on latest master.
>
>
> This is not applicable on the latest head i.e. commit --
On Tue, Apr 4, 2017 at 12:37 PM, Amit Khandekar
wrote:
> Attached is an updated patch v13 that has some comments changed as per
> your review, and also rebased on latest master.
>
This is not applicable on the latest head i.e. commit
-- 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7, looks like need a
On 7 April 2017 at 20:35, Andres Freund wrote:
>> But for costs such as (4, 4, 4, 20 times), the logic would give
>> us 20 workers because we want to finish the Append in 4 time units;
>> and this is what we want to avoid when we go with
>> don't-allocate-too-many-workers approach.
>
> I guess,
Hi,
On 2017-04-07 11:44:39 +0530, Amit Khandekar wrote:
> On 6 April 2017 at 07:33, Andres Freund wrote:
> > On 2017-04-05 14:52:38 +0530, Amit Khandekar wrote:
> >> This is what the earlier versions of my patch had done : just add up
> >> per-subplan parallel_workers (1 for non-partial subplan a
On 6 April 2017 at 07:33, Andres Freund wrote:
> On 2017-04-05 14:52:38 +0530, Amit Khandekar wrote:
>> This is what the earlier versions of my patch had done : just add up
>> per-subplan parallel_workers (1 for non-partial subplan and
>> subpath->parallel_workers for partial subplans) and set thi
On 2017-04-05 14:52:38 +0530, Amit Khandekar wrote:
> This is what the earlier versions of my patch had done : just add up
> per-subplan parallel_workers (1 for non-partial subplan and
> subpath->parallel_workers for partial subplans) and set this total as
> the Append parallel_workers.
I don't th
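(For illustration only: a minimal sketch of the "add up per-subplan parallel_workers" approach described in the quoted text above. The struct and function names here are hypothetical stand-ins, not the patch's code, and the cap parameter merely stands in for whatever upper limit applies, e.g. max_parallel_workers_per_gather.)

/* Simplified stand-in for a child path; not PostgreSQL's Path struct. */
typedef struct ChildPathInfo
{
	int		parallel_workers;	/* 0 means the subplan is non-partial */
} ChildPathInfo;

/*
 * Count 1 worker for each non-partial subplan and parallel_workers for each
 * partial subplan, then cap the total, as in the quoted description.
 */
static int
append_parallel_workers(const ChildPathInfo *children, int nchildren, int cap)
{
	int		total = 0;
	int		i;

	for (i = 0; i < nchildren; i++)
		total += (children[i].parallel_workers > 0) ?
			children[i].parallel_workers : 1;

	return (total > cap) ? cap : total;
}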
On Tue, Apr 4, 2017 at 4:13 PM, Andres Freund wrote:
> I'm quite unconvinced that just throwing a log() in there is the best
> way to combat that. Modeling the issue of starting more workers through
> tuple transfer, locking, startup overhead costing seems a better to me.
Knock yourself out. Th
On 5 April 2017 at 01:43, Andres Freund wrote:
> On 2017-04-04 08:01:32 -0400, Robert Haas wrote:
>> On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund wrote:
>> > I don't think the parallel seqscan is comparable in complexity with the
>> > parallel append case. Each worker there does the same kind
On Wed, Apr 5, 2017 at 1:43 AM, Andres Freund wrote:
> On 2017-04-04 08:01:32 -0400, Robert Haas wrote:
> > On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund
> wrote:
> > > I don't think the parallel seqscan is comparable in complexity with the
> > > parallel append case. Each worker there does th
On 2017-04-04 08:01:32 -0400, Robert Haas wrote:
> On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund wrote:
> > I don't think the parallel seqscan is comparable in complexity with the
> > parallel append case. Each worker there does the same kind of work, and
> > if one of them is behind, it'll just
On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund wrote:
> I'm afraid this is too late for v10 - do you agree?
Yeah, I think so. The benefit of this will be a lot more once we get
partitionwise join and partitionwise aggregate working, but that
probably won't happen for this release, or at best in l
On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund wrote:
> I don't think the parallel seqscan is comparable in complexity with the
> parallel append case. Each worker there does the same kind of work, and
> if one of them is behind, it'll just do less. But correct sizing will
> be more important wi
On 4 April 2017 at 01:47, Andres Freund wrote:
>> +typedef struct ParallelAppendDescData
>> +{
>> +	LWLock		pa_lock;		/* mutual exclusion to choose next subplan */
>> +	int			pa_first_plan;	/* plan to choose while wrapping around plans */
>>
Thanks Andres for your review comments. Will get back with the other
comments, but meanwhile some queries about the below particular
comment ...
On 4 April 2017 at 10:17, Andres Freund wrote:
> On 2017-04-03 22:13:18 -0400, Robert Haas wrote:
>> On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund wrot
On 2017-04-03 22:13:18 -0400, Robert Haas wrote:
> On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund wrote:
> > Hm. I'm not really convinced by the logic here. Wouldn't it be better
> > to try to compute the minimum total cost across all workers for
> > 1..#max_workers for the plans in an iterative
On Mon, Apr 3, 2017 at 4:17 PM, Andres Freund wrote:
> Hm. I'm not really convinced by the logic here. Wouldn't it be better
> to try to compute the minimum total cost across all workers for
> 1..#max_workers for the plans in an iterative manner? I.e. try to map
> each of the subplans to 1 (if
Hi,
On 2017-03-24 21:32:57 +0530, Amit Khandekar wrote:
> diff --git a/src/backend/executor/nodeAppend.c
> b/src/backend/executor/nodeAppend.c
> index a107545..e9e8676 100644
> --- a/src/backend/executor/nodeAppend.c
> +++ b/src/backend/executor/nodeAppend.c
> @@ -59,9 +59,47 @@
>
> #include
On 24 March 2017 at 00:38, Amit Khandekar wrote:
> On 23 March 2017 at 16:26, Amit Khandekar wrote:
>> On 23 March 2017 at 05:55, Robert Haas wrote:
>>>
>>> So in your example we do this:
>>>
>>> C[0] += 20;
>>> C[1] += 16;
>>> C[2] += 10;
>>> /* C[2] is smaller than C[0] or C[1] at this point,
On 24 March 2017 at 13:11, Rajkumar Raghuwanshi
wrote:
> I have given patch on latest pg sources (on commit
> 457a4448732881b5008f7a3bcca76fc299075ac3). configure and make all
> install ran successfully, but initdb failed with below error.
> FailedAssertion("!(LWLockTranchesAllocated >=
> LWTRANC
On Fri, Mar 24, 2017 at 12:38 AM, Amit Khandekar wrote:
> Meanwhile, attached is a WIP patch v10. The only change in this patch
> w.r.t. the last patch (v9) is that this one has a new function defined
> append_nonpartial_cost(). Just sending this to show how the algorithm
> looks like; haven't yet
On 23 March 2017 at 16:26, Amit Khandekar wrote:
> On 23 March 2017 at 05:55, Robert Haas wrote:
>> On Wed, Mar 22, 2017 at 4:49 AM, Amit Khandekar
wrote:
>>> Attached is the updated patch that handles the changes for all the
>>> comments except the cost changes part. Details about the specific
On 23 March 2017 at 05:55, Robert Haas wrote:
> On Wed, Mar 22, 2017 at 4:49 AM, Amit Khandekar
> wrote:
>> Attached is the updated patch that handles the changes for all the
>> comments except the cost changes part. Details about the specific
>> changes are after the cost-related points discuss
On Wed, Mar 22, 2017 at 4:49 AM, Amit Khandekar wrote:
> Attached is the updated patch that handles the changes for all the
> comments except the cost changes part. Details about the specific
> changes are after the cost-related points discussed below.
>
> For non-partial paths, I was checking fol
Attached is the updated patch that handles the changes for all the
comments except the cost changes part. Details about the specific
changes are after the cost-related points discussed below.
>> I wanted to take into account per-subpath parallel_workers for total
>> cost of Append. Suppose the par
On Fri, Mar 17, 2017 at 1:12 PM, Amit Khandekar wrote:
>> - The substantive changes in add_paths_to_append_rel don't look right
>> either. It's not clear why accumulate_partialappend_subpath is
>> getting called even in the non-enable_parallelappend case. I don't
>> think the logic for the case
>> 2. Next, estimate the cost of the non-partial paths. To do this, make
>> an array of Cost of that length and initialize all the elements to
>> zero, then add the total cost of each non-partial plan in turn to the
>> element of the array with the smallest cost, and then take the maximum
>> of th
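To make the quoted scheme concrete, here is a minimal, self-contained sketch; the function name nonpartial_append_cost, Cost as a plain double, and the example costs are illustrative only, not taken from the patch.

#include <stdio.h>
#include <stdlib.h>

typedef double Cost;			/* stand-in for the planner's Cost */

/*
 * Charge each non-partial subplan's total cost to the currently
 * least-loaded "worker slot"; the most loaded slot then bounds how long
 * the non-partial subplans keep some worker busy.
 */
static Cost
nonpartial_append_cost(const Cost *subplan_costs, int nsubplans, int nworkers)
{
	Cost	   *slot = calloc(nworkers, sizeof(Cost));
	Cost		max_cost = 0.0;
	int			i, j;

	for (i = 0; i < nsubplans; i++)
	{
		int			min_idx = 0;

		/* find the slot with the smallest accumulated cost so far */
		for (j = 1; j < nworkers; j++)
			if (slot[j] < slot[min_idx])
				min_idx = j;

		slot[min_idx] += subplan_costs[i];
	}

	for (i = 0; i < nworkers; i++)
		if (slot[i] > max_cost)
			max_cost = slot[i];

	free(slot);
	return max_cost;
}

int
main(void)
{
	Cost		costs[] = {20.0, 16.0, 10.0};

	printf("%.1f\n", nonpartial_append_cost(costs, 3, 3));	/* prints 20.0 */
	return 0;
}

With three workers, each cost lands in its own slot and the estimate is 20; with fewer workers the cheaper plans pile onto the least-loaded slot, exactly as in the C[i] arithmetic quoted earlier in the thread.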
On Fri, Mar 17, 2017 at 10:12 AM, Amit Khandekar wrote:
> Yeah, I was in double minds as to whether to do the
> copy-to-array-and-qsort thing, or should just write the same number of
> lines of code to manually do an insertion sort. Actually I was
> searching if we already have a linked list sort,
On 16 March 2017 at 18:18, Ashutosh Bapat
wrote:
> + * Check if we are already finished plans from parallel append. This
> + * can happen if all the subplans are finished when this worker
> + * has not even started returning tuples.
> + */
> +if (node->as_pa
On 17 March 2017 at 01:37, Robert Haas wrote:
> - You've added a GUC (which is good) but not documented it (which is
> bad) or added it to postgresql.conf.sample (also bad).
>
> - You've used a loop inside a spinlock-protected critical section,
> which is against project policy. Use an LWLock; de
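(A sketch only, not the patch's code: the LWLock-protected pattern being asked for, using the pa_lock field from the ParallelAppendDescData quoted earlier in the thread; the function name and loop body are placeholders.)

#include "postgres.h"
#include "storage/lwlock.h"

static int
choose_next_subplan_locked(ParallelAppendDesc padesc, int nplans)
{
	int			result = -1;
	int			i;

	/* Unlike a spinlock, an LWLock may be held while looping over plans. */
	LWLockAcquire(&padesc->pa_lock, LW_EXCLUSIVE);

	for (i = 0; i < nplans; i++)
	{
		/* ... inspect shared per-subplan state and pick one ... */
	}

	LWLockRelease(&padesc->pa_lock);

	return result;
}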
On Thu, Mar 16, 2017 at 6:27 AM, Amit Khandekar wrote:
> Attached is an updated patch v7, which does the above.
Some comments:
- You've added a GUC (which is good) but not documented it (which is
bad) or added it to postgresql.conf.sample (also bad).
- You've used a loop inside a spinlock-prote
On Thu, Mar 16, 2017 at 8:48 AM, Ashutosh Bapat
wrote:
> Why do we need the following code in both ExecAppendInitializeWorker() and
> ExecAppendInitializeDSM()? Both of those things happen before starting the
> actual execution, so one of those should suffice?
> +/* Choose the optimal subplan to b
On Thu, Mar 16, 2017 at 3:57 PM, Amit Khandekar wrote:
> On 12 March 2017 at 08:50, Robert Haas wrote:
>>> However, Ashutosh's response made me think of something: one thing is
>>> that we probably do want to group all of the non-partial plans at the
>>> beginning of the Append so that they get w
On 12 March 2017 at 08:50, Robert Haas wrote:
>> However, Ashutosh's response made me think of something: one thing is
>> that we probably do want to group all of the non-partial plans at the
>> beginning of the Append so that they get workers first, and put the
>> partial plans afterward. That's
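(For illustration, a hedged sketch of that ordering as a plain qsort comparator; SubpathEntry and append_subpath_cmp are hypothetical names, not the patch's. Whether the partial plans also need cost ordering is itself debated just below.)

#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-in for a subpath; not PostgreSQL's Path struct. */
typedef struct SubpathEntry
{
	bool		is_partial;
	double		total_cost;
} SubpathEntry;

/*
 * Non-partial subpaths first (most expensive first, so they are handed to
 * workers earliest), partial subpaths after them.
 */
static int
append_subpath_cmp(const void *a, const void *b)
{
	const SubpathEntry *pa = (const SubpathEntry *) a;
	const SubpathEntry *pb = (const SubpathEntry *) b;

	if (pa->is_partial != pb->is_partial)
		return pa->is_partial ? 1 : -1;
	if (pa->total_cost != pb->total_cost)
		return (pa->total_cost > pb->total_cost) ? -1 : 1;
	return 0;
}

/* usage: qsort(subpaths, nsubpaths, sizeof(SubpathEntry), append_subpath_cmp); */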
On Mon, Mar 13, 2017 at 7:46 AM, Robert Haas wrote:
> On Mon, Mar 13, 2017 at 4:59 AM, Amit Khandekar
> wrote:
>> I agree that we should preferably have the non-partial plans started
>> first. But I am not sure if it is really worth ordering the partial
>> plans by cost. The reason we ended up n
On Mon, Mar 13, 2017 at 4:59 AM, Amit Khandekar wrote:
> I agree that we should preferably have the non-partial plans started
> first. But I am not sure if it is really worth ordering the partial
> plans by cost. The reason we ended up not keeping track of the
> per-subplan parallel_worker, is bec
On 12 March 2017 at 19:31, Tels wrote:
> Moin,
>
> On Sat, March 11, 2017 11:29 pm, Robert Haas wrote:
>> On Fri, Mar 10, 2017 at 6:01 AM, Tels
>> wrote:
>>> Just a question for me to understand the implementation details vs. the
>>> strategy:
>>>
>>> Have you considered how the scheduling decisi
On 10 March 2017 at 22:08, Robert Haas wrote:
> On Fri, Mar 10, 2017 at 12:17 AM, Amit Khandekar
> wrote:
>> I agree that the two-lists approach will consume less memory than
>> bitmapset. Keeping two lists will effectively have an extra pointer
>> field which will add up to the AppendPath size,
Moin,
On Sat, March 11, 2017 11:29 pm, Robert Haas wrote:
> On Fri, Mar 10, 2017 at 6:01 AM, Tels
> wrote:
>> Just a question for me to understand the implementation details vs. the
>> strategy:
>>
>> Have you considered how the scheduling decision might impact performance
>> due to "inter-plan p
On Fri, Mar 10, 2017 at 6:01 AM, Tels wrote:
> Just a question for me to understand the implementation details vs. the
> strategy:
>
> Have you considered how the scheduling decision might impact performance
> due to "inter-plan parallelism vs. in-plan parallelism"?
>
> So what would be the schedu
On Fri, Mar 10, 2017 at 8:12 AM, Amit Khandekar wrote:
>> +static inline void
>> +exec_append_scan_first(AppendState *appendstate)
>> +{
>> +	appendstate->as_whichplan = 0;
>> +}
>>
>> I don't think this is buying you anything, and suggest backing it out.
>
> This is required for sequential App
On Fri, Mar 10, 2017 at 12:17 AM, Amit Khandekar wrote:
> I agree that the two-lists approach will consume less memory than
> bitmapset. Keeping two lists will effectively have an extra pointer
> field which will add up to the AppendPath size, but this size will not
> grow with the number of subpa
After giving more thought to our discussions, I have used the
Bitmapset structure in AppendPath, as against having two lists, one for
partial and the other for non-partial paths. Attached is the patch v6 that
has the required changes. So accumulate_append_subpath() now also
prepares the bitmapset co
Moin,
On Fri, March 10, 2017 3:24 am, Amit Khandekar wrote:
> On 10 March 2017 at 12:33, Ashutosh Bapat
> wrote:
>> On Fri, Mar 10, 2017 at 11:33 AM, Ashutosh Bapat
>> wrote:
But as far as code is concerned, I think the two-list approach will
turn out to be less simple if we deriv
On 10 March 2017 at 14:05, Ashutosh Bapat
wrote:
>> The need for
>> num_workers=-1 will still be there for partial plans, because we need
>> to set it to -1 once a worker finishes a plan.
>>
>
> IIRC, we do that so that no other workers are assigned to it when
> scanning the array of plans. But wi
>>>
>>> I think there is some merit in separating out non-parallel and
>>> parallel plans within the same array or outside it. The current logic
>>> to assign plan to a worker looks at all the plans, unnecessarily
>>> hopping over the un-parallel ones after they are given to a worker. If
>>> w
On 10 March 2017 at 12:33, Ashutosh Bapat
wrote:
> On Fri, Mar 10, 2017 at 11:33 AM, Ashutosh Bapat
> wrote:
>>>
>>> But as far as code is concerned, I think the two-list approach will
>>> turn out to be less simple if we derive corresponding two different
>>> arrays in AppendState node. Handling
On Fri, Mar 10, 2017 at 11:33 AM, Ashutosh Bapat
wrote:
>>
>> But as far as code is concerned, I think the two-list approach will
>> turn out to be less simple if we derive corresponding two different
>> arrays in AppendState node. Handling two different arrays during
>> execution does not look cl
>
> But as far as code is concerned, I think the two-list approach will
> turn out to be less simple if we derive corresponding two different
> arrays in AppendState node. Handling two different arrays during
> execution does not look clean. Whereas, the bitmapset that I have used
> in Append has t
On 10 March 2017 at 10:13, Ashutosh Bapat
wrote:
> On Thu, Mar 9, 2017 at 6:28 PM, Robert Haas wrote:
>> On Thu, Mar 9, 2017 at 7:42 AM, Ashutosh Bapat
>> wrote:
+if (rel->partial_pathlist != NIL &&
+(Path *) linitial(rel->partial_pathlist) == subpath)
+
On Thu, Mar 9, 2017 at 6:28 PM, Robert Haas wrote:
> On Thu, Mar 9, 2017 at 7:42 AM, Ashutosh Bapat
> wrote:
>>>
>>> +if (rel->partial_pathlist != NIL &&
>>> +(Path *) linitial(rel->partial_pathlist) == subpath)
>>> +partial_subplans_set = bms_add_member(partial_su
On Thu, Mar 9, 2017 at 7:42 AM, Ashutosh Bapat
wrote:
>>
>> +if (rel->partial_pathlist != NIL &&
>> +(Path *) linitial(rel->partial_pathlist) == subpath)
>> +partial_subplans_set = bms_add_member(partial_subplans_set, i);
>>
>> This seems like a scary way to figure
>
> +if (rel->partial_pathlist != NIL &&
> +(Path *) linitial(rel->partial_pathlist) == subpath)
> +partial_subplans_set = bms_add_member(partial_subplans_set, i);
>
> This seems like a scary way to figure this out. What if we wanted to
> build a parallel append sub
On Wed, Mar 8, 2017 at 2:00 AM, Amit Khandekar wrote:
> Yeah, that seems to be right in most of the cases. The only cases
> where your formula seems to give too few workers are for something
> like (2, 8, 8). For such subplans, we should at least allocate 8 workers.
> It turns out that in most of
On 19 February 2017 at 14:59, Robert Haas wrote:
> On Fri, Feb 17, 2017 at 2:56 PM, Amit Khandekar
> wrote:
>> The log2(num_children)+1 formula which you proposed does not take into
>> account the number of workers for each of the subplans, that's why I
>> am a bit more inclined to look for some
On Mon, Feb 20, 2017 at 10:54 AM, Ashutosh Bapat
wrote:
> On Sun, Feb 19, 2017 at 2:33 PM, Robert Haas wrote:
>> On Fri, Feb 17, 2017 at 11:44 AM, Ashutosh Bapat
>> wrote:
>>> That's true for a partitioned table, but not necessarily for every
>>> append relation. Amit's patch is generic for all
On Sun, Feb 19, 2017 at 2:33 PM, Robert Haas wrote:
> On Fri, Feb 17, 2017 at 11:44 AM, Ashutosh Bapat
> wrote:
>> That's true for a partitioned table, but not necessarily for every
>> append relation. Amit's patch is generic for all append relations. If
>> the child plans are joins or subquery s